A dimension-wise analysis method for the structural-acoustic system with interval parameters
NASA Astrophysics Data System (ADS)
Xu, Menghui; Du, Jianke; Wang, Chong; Li, Yunlong
2017-04-01
The interval structural-acoustic analysis is mainly accomplished by interval and subinterval perturbation methods. Potential limitations of these intrusive methods include overestimation or the interval translation effect for the former and prohibitive computational cost for the latter. In this paper, a dimension-wise analysis method is thus proposed to overcome these potential limitations. In this method, a sectional curve of the system response surface along each input dimension is first extracted, and its minimal and maximal points are identified based on its Legendre polynomial approximation. Two input vectors, i.e. the minimal and maximal input vectors, are then assembled dimension by dimension from the minimal and maximal points of all sectional curves. Finally, the lower and upper bounds of the system response are computed by deterministic finite element analysis at these two input vectors. Two numerical examples are studied to demonstrate the effectiveness of the proposed method and show that, compared to the interval and subinterval perturbation methods, better accuracy is achieved without much compromise on efficiency, especially for nonlinear problems with large interval parameters.
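As a rough illustration of the dimension-wise procedure sketched in this abstract, the following Python fragment extracts a Legendre-fitted sectional curve per input dimension and evaluates the response at the assembled minimal/maximal vectors. The response function, polynomial degree and grids are illustrative assumptions, not the authors' settings, and no finite element model is involved.

```python
import numpy as np
from numpy.polynomial import legendre as L

def dimension_wise_bounds(f, lower, upper, deg=6, n_samples=9):
    """Estimate [min, max] of f over a box by the dimension-wise idea:
    fit a Legendre polynomial to each sectional curve through the midpoint,
    record its minimising/maximising coordinate, then evaluate f once at the
    assembled minimal and maximal input vectors."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    mid, half = 0.5 * (lower + upper), 0.5 * (upper - lower)
    x_min, x_max = mid.copy(), mid.copy()
    for i in range(len(mid)):
        t = np.linspace(-1.0, 1.0, n_samples)             # normalised coordinate
        xi = np.repeat(mid[:, None], n_samples, axis=1)
        xi[i] = mid[i] + half[i] * t                       # sectional samples
        coef = L.legfit(t, [f(xi[:, k]) for k in range(n_samples)], deg)
        tt = np.linspace(-1.0, 1.0, 501)
        yy = L.legval(tt, coef)                            # dense surrogate curve
        x_min[i] = mid[i] + half[i] * tt[np.argmin(yy)]
        x_max[i] = mid[i] + half[i] * tt[np.argmax(yy)]
    return f(x_min), f(x_max)

# toy usage: a nonlinear response of two interval parameters
print(dimension_wise_bounds(lambda x: np.sin(x[0]) + x[1] ** 2,
                            lower=[0.5, -1.0], upper=[2.5, 1.0]))
```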
A single-loop optimization method for reliability analysis with second order uncertainty
NASA Astrophysics Data System (ADS)
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2015-08-01
Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
NASA Astrophysics Data System (ADS)
Endreny, Theodore A.; Pashiardis, Stelios
2007-02-01
Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; here, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. The analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to sea and elevation. Regional statistical algorithms found that the sites passed discordancy tests of the coefficient of variation, skewness and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and goodness-of-fit tests then identified the best candidate distribution as the general extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate the location, shape, and scale parameters. In the global analysis, the distribution was a priori prescribed as GEV Type II, the shape parameter was a priori set to 0.15, and a time interval term was constructed so that one set of parameters could be used for all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared the global method RMSE showed a parabolic-shaped trend with time interval. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
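For orientation on the L-moment step mentioned above, here is a minimal single-site sketch in Python using the standard probability-weighted-moment sample L-moments and a Hosking-style GEV fit. It omits the regional pooling, discordancy/heterogeneity tests and the 500 simulations, and the synthetic record is purely illustrative; note also that the sign convention of the GEV shape parameter varies between references.

```python
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    """First sample L-moments (lambda1, lambda2, tau3) via probability-weighted moments."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    lam1, lam2, lam3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return lam1, lam2, lam3 / lam2

def gev_from_lmoments(lam1, lam2, tau3):
    """GEV (location, scale, shape k) from L-moments, Hosking-style approximation."""
    c = 2.0 / (3.0 + tau3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2
    alpha = lam2 * k / ((1 - 2.0 ** (-k)) * gamma(1 + k))
    xi = lam1 + alpha * (gamma(1 + k) - 1) / k
    return xi, alpha, k

# toy usage with synthetic annual-maximum depths (mm) standing in for a short record
rng = np.random.default_rng(0)
depths = rng.gumbel(loc=30, scale=8, size=25)
print(gev_from_lmoments(*sample_lmoments(depths)))
```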
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
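A compact sketch of this kind of simulation, assuming fixed-duration events at random onsets and scoring the same record with momentary time sampling (MTS), partial-interval recording (PIR) and whole-interval recording (WIR); the observation length, interval size and event parameters are arbitrary illustrative choices rather than the study's conditions.

```python
import numpy as np

def simulate_interval_sampling(obs_len=600.0, interval=10.0, n_events=20,
                               event_dur=3.0, seed=1):
    """Compare MTS, PIR and WIR scores with the true proportion of event time."""
    rng = np.random.default_rng(seed)
    onsets = np.sort(rng.uniform(0, obs_len - event_dur, n_events))
    offsets = onsets + event_dur

    def occurring(t):                       # is any event active at time t?
        return bool(np.any((onsets <= t) & (t < offsets)))

    starts = np.arange(0.0, obs_len, interval)
    mts = np.mean([occurring(s + interval) for s in starts])              # momentary check
    pir = np.mean([np.any((onsets < s + interval) & (offsets > s))        # any overlap
                   for s in starts])
    wir = np.mean([all(occurring(t) for t in np.linspace(s, s + interval, 50))
                   for s in starts])                                      # whole interval (approx.)

    grid = np.linspace(0.0, obs_len, 10000, endpoint=False)               # union of events
    active = ((onsets[None, :] <= grid[:, None]) &
              (grid[:, None] < offsets[None, :])).any(axis=1)
    return {"true": active.mean(), "MTS": mts, "PIR": pir, "WIR": wir}

print(simulate_interval_sampling())
```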
NASA Astrophysics Data System (ADS)
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2016-10-01
This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, so that an uncertain ensemble combining non-parametric with mixed fuzzy and interval parametric uncertainties is obtained. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by the first-order Taylor series, while SFIPFE/SEA improves the accuracy by considering the second-order terms of the Taylor series, with all mixed second-order terms neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the advantages of both methods so that the computational cost is dramatically reduced. The presented method is thus capable of accurately and efficiently investigating the day-to-day time-variant natural frequency of structures under concrete creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, progressively more involved in both structure type and uncertain variables, are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.
[Heart rate variability study based on a novel RdR RR Intervals Scatter Plot].
Lu, Hongwei; Lu, Xiuyun; Wang, Chunfang; Hua, Youyuan; Tian, Jiajia; Liu, Shihai
2014-08-01
On the basis of the Poincare scatter plot and the first-order difference scatter plot, a novel heart rate variability (HRV) analysis method based on scatter plots of RR intervals and first-order differences of RR intervals (namely, RdR) was proposed. The abscissa of the RdR scatter plot, the x-axis, is the RR interval and the ordinate, the y-axis, is the difference between successive RR intervals. The RdR scatter plot therefore combines the information of RR intervals and of the differences between successive RR intervals, capturing more HRV information. By RdR scatter plot analysis of some records of the MIT-BIH arrhythmia database, we found that the scatter plots of uncoupled premature ventricular contraction (PVC), coupled ventricular bigeminy, and ventricular trigeminy PVC had specific graphic characteristics. The RdR scatter plot method has higher detection performance than the Poincare scatter plot method, and is simpler and more intuitive than the first-order difference method.
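A minimal plotting sketch of the RdR idea as described: each point pairs an RR interval with the difference to the next interval. The pairing convention (RR_i on the x-axis against RR_{i+1} - RR_i on the y-axis) and the synthetic rhythm are assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

def rdr_scatter(rr_ms, ax=None):
    """RdR plot: RR interval (x) versus successive RR difference (y)."""
    rr = np.asarray(rr_ms, float)
    drr = np.diff(rr)                 # difference between successive RR intervals
    ax = ax or plt.gca()
    ax.scatter(rr[:-1], drr, s=8)
    ax.set_xlabel("RR interval (ms)")
    ax.set_ylabel("Successive RR difference (ms)")
    return ax

# toy usage: sinus rhythm with one premature beat and a compensatory pause
rr = np.r_[800 + 20 * np.random.default_rng(0).standard_normal(200), 400, 1200, 800]
rdr_scatter(rr)
plt.show()
```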
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong
2017-04-01
This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
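INTLAB itself is a MATLAB toolbox; purely to illustrate the underlying idea of propagating an error bound with interval arithmetic, here is a minimal Python sketch (no outward rounding, so unlike INTLAB it is not a rigorous enclosure), applied to a toy formula.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o):  return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):  return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, o):
        assert not (o.lo <= 0.0 <= o.hi), "division by interval containing zero"
        return self * Interval(1.0 / o.hi, 1.0 / o.lo)

def measured(value, err):
    """Measurement with an absolute error bound -> enclosing interval."""
    return Interval(value - err, value + err)

# propagate measurement error through R = V / I (Ohm's law as a toy formula)
V, I = measured(5.00, 0.05), measured(0.250, 0.002)
R = V / I
print(f"R in [{R.lo:.3f}, {R.hi:.3f}] ohm")
```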
NASA Astrophysics Data System (ADS)
Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong
2016-08-01
A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) mitigating computational time due to the introduction of the Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each planning duration on the basis of four criteria. Results indicated that the most desirable management strategy lay in action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management.
Estimating short-run and long-run interaction mechanisms in interictal state.
Ozkaya, Ata; Korürek, Mehmet
2010-04-01
We address the issue of analyzing electroencephalogram (EEG) recordings from seizure patients in order to test, model and determine the statistical properties that distinguish between EEG states (interictal, pre-ictal, ictal) by introducing a new class of time series analysis methods. In the present study, we first employ statistical methods to determine the non-stationary behavior of focal interictal epileptiform series within very short time intervals; second, for such intervals that are deemed non-stationary we suggest Autoregressive Integrated Moving Average (ARIMA) process modelling, well known in time series analysis. We finally address the question of causal relationships between epileptic states and between brain areas during epileptiform activity. We estimate the interaction between different EEG series (channels) in short time intervals by performing Granger-causality analysis, and estimate such interaction in long time intervals by employing cointegration analysis; both analysis methods are well known in econometrics. Here we find, first, that the causal relationship between neuronal assemblies can be identified according to the duration and the direction of their possible mutual influences; second, that although the estimated bidirectional causality in short time intervals indicates that the neuronal ensembles positively affect each other, in long time intervals neither of them is affected (in terms of increasing amplitudes) by this relationship. Moreover, cointegration analysis of the EEG series enables us to identify whether there is a causal link from the interictal state to the ictal state.
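The workflow described here (stationarity check, then short-run Granger causality and long-run cointegration) maps onto standard econometrics tooling; the following sketch uses statsmodels on synthetic surrogate channels, which stand in for the EEG series and are not the authors' data or exact procedure.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, grangercausalitytests, coint

rng = np.random.default_rng(0)
n = 1000
x = np.cumsum(rng.standard_normal(n))                     # surrogate channel 1 (random walk)
y = 0.6 * np.r_[0.0, x[:-1]] + rng.standard_normal(n)     # channel 2, driven by lagged x

# 1) stationarity check (ADF): a large p-value means the unit root is not rejected
print("ADF p-value (x):", adfuller(x)[1])

# 2) short-run interaction: does x Granger-cause y?
gc = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
print("Granger ssr F-test p-value (lag 1):", gc[1][0]["ssr_ftest"][1])

# 3) long-run interaction: Engle-Granger cointegration between the two series
print("Cointegration p-value:", coint(y, x)[1])
```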
Sensitivity Analysis of Multicriteria Choice to Changes in Intervals of Value Tradeoffs
NASA Astrophysics Data System (ADS)
Podinovski, V. V.
2018-03-01
An approach is proposed for the sensitivity (stability) analysis of nondominated alternatives to changes in the bounds of the intervals of value tradeoffs, where the alternatives are selected based on interval data on criteria tradeoffs. Computational methods are developed for analyzing the sensitivity of individual nondominated alternatives and of the set of such alternatives as a whole.
One-way ANOVA based on interval information
NASA Astrophysics Data System (ADS)
Hesamian, Gholamreza
2016-08-01
This paper deals with extending one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity-of-interval-variances assumption. Moreover, the least significant difference (LSD) method for multiple comparisons of interval means is developed for the case where the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method yields degrees to which the interval hypotheses are accepted or rejected. An applied example is used to show the performance of this method.
2014-01-01
Background: Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods: Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results: Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions: Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose.
Weighted regression analysis and interval estimators
Donald W. Seegrist
1974-01-01
A method is presented for deriving the weighted least squares estimators of the parameters of a multiple regression model. Confidence intervals for expected values and prediction intervals for the means of future samples are given.
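A small numerical sketch of the quantities named above: a weighted least squares fit, a confidence interval for an expected value, and a prediction interval for the mean of m future samples. The weighting convention for the future observations (w0) and the toy heteroscedastic data are illustrative assumptions, not necessarily the report's exact formulation.

```python
import numpy as np
from scipy import stats

def wls_intervals(X, y, w, x0, w0=1.0, m=1, alpha=0.05):
    """WLS fit plus CI for the expected value at x0 and a prediction interval
    for the mean of m future samples (weights assumed proportional to 1/Var)."""
    X, y, w, x0 = map(np.asarray, (X, y, w, x0))
    W = np.diag(w)
    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    beta = XtWX_inv @ X.T @ W @ y
    resid = y - X @ beta
    n, p = X.shape
    s2 = (w * resid ** 2).sum() / (n - p)                 # weighted residual variance
    t = stats.t.ppf(1 - alpha / 2, n - p)
    mean_hat = x0 @ beta
    se_mean = np.sqrt(s2 * x0 @ XtWX_inv @ x0)
    se_pred = np.sqrt(s2 * (1.0 / (m * w0) + x0 @ XtWX_inv @ x0))
    return {"beta": beta,
            "ci_mean": (mean_hat - t * se_mean, mean_hat + t * se_mean),
            "pi_future_mean": (mean_hat - t * se_pred, mean_hat + t * se_pred)}

# toy usage: y = 2 + 3x with variance increasing as x^2, so weights w = 1/x^2
rng = np.random.default_rng(0)
x = np.linspace(1, 10, 30)
y = 2 + 3 * x + rng.standard_normal(30) * x
X = np.column_stack([np.ones_like(x), x])
print(wls_intervals(X, y, w=1 / x ** 2, x0=[1.0, 5.0], m=4))
```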
NASA Astrophysics Data System (ADS)
Zi, Bin; Zhou, Bin
2016-07-01
For the prediction of the dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with known probability distributions are modeled as random variables, whereas the parameters with only lower and upper bounds are modeled as interval variables instead of being given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in the input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on the random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of the extrema of the bounds of the dynamic response are determined by the random interval moment method and a monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and the interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated in depth, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L.
NASA Astrophysics Data System (ADS)
Fu, Chao; Ren, Xingmin; Yang, Yongfeng; Xia, Yebao; Deng, Wangqun
2018-07-01
A non-intrusive interval precise integration method (IPIM) is proposed in this paper to analyze the transient unbalance response of uncertain rotor systems. The transfer matrix method (TMM) is used to derive the deterministic equations of motion of a hollow-shaft overhung rotor. The uncertain transient dynamic problem is solved by combining Chebyshev approximation theory with the modified precise integration method (PIM). Transient response bounds are calculated by interval arithmetic on the expansion coefficients. A brief theoretical error analysis of the proposed method is provided, and its accuracy is further validated by comparison with the scanning method in simulations. Numerical results show that the IPIM keeps good accuracy in vibration prediction of the start-up transient process. Furthermore, the proposed method can also provide theoretical guidance for other transient dynamic mechanical systems with uncertainties.
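As a generic illustration of the Chebyshev-plus-interval-arithmetic ingredient (not the authors' coupling with the precise integration method), the sketch below expands a response in Chebyshev polynomials over an interval parameter and bounds it through the expansion coefficients, using the fact that |T_k| <= 1 on [-1, 1]; the response function and degree are made up, and truncation error is ignored.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_interval_bounds(f, lo, hi, degree=6):
    """Bound f over [lo, hi] via a Chebyshev expansion of the response:
    the truncated expansion lies within c0 -/+ sum(|c_k|), k >= 1."""
    k = np.arange(degree + 1)
    t = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))   # Chebyshev-Gauss nodes
    x = 0.5 * (hi + lo) + 0.5 * (hi - lo) * t               # mapped to [lo, hi]
    coef = C.chebfit(t, [f(xi) for xi in x], degree)
    lower = coef[0] - np.abs(coef[1:]).sum()
    upper = coef[0] + np.abs(coef[1:]).sum()
    return lower, upper

# toy usage: a resonance-like amplitude with one uncertain stiffness-like parameter
resp = lambda kp: 1.0 / np.sqrt((kp - 4.0) ** 2 + 0.8)
print(chebyshev_interval_bounds(resp, lo=3.0, hi=5.0))
```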
The method of trend analysis of parameters time series of gas-turbine engine state
NASA Astrophysics Data System (ADS)
Hvozdeva, I.; Myrhorod, V.; Derenh, Y.
2017-10-01
This research substantiates an approach to interval estimation of the trend component of a time series. Well-known methods of spectral and trend analysis are used for multidimensional data arrays. An interval estimate of the trend component is proposed for time series whose autocorrelation matrix possesses a dominant (prevailing) eigenvalue. The relevant properties of the time series autocorrelation matrix are identified.
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yan Long
2018-05-01
When the dependence of the function on the uncertain variables is non-monotonic over the interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series will exhibit significant errors. In order to reduce these errors, an improved form of the interval extension with the first-order Taylor series is developed here that takes the monotonicity of the function into account. Two typical mathematical examples are given to illustrate the methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of this method in practical applications; the only required input data are the function value at the central point of the interval and the sensitivity and deviation of the function. The results of the above examples show that the interval of the function given by the method developed in this paper is more accurate than the one obtained by the classic method.
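A minimal numerical contrast of the two extensions discussed above, assuming the gradient sign at the interval midpoint reflects the monotonicity over the whole interval; the example function is illustrative.

```python
import numpy as np

def taylor_interval(f, grad, xc, dx):
    """Classic first-order interval extension: f(xc) -/+ sum(|df/dx_i| * dx_i)."""
    g = np.asarray(grad(xc), float)
    r = np.abs(g) @ np.asarray(dx, float)
    return f(xc) - r, f(xc) + r

def monotone_interval(f, grad, xc, dx):
    """Improved bounds when f is monotonic in each variable over the interval:
    pick the vertex indicated by the gradient sign and evaluate f there."""
    g = np.sign(grad(xc))
    xc, dx = np.asarray(xc, float), np.asarray(dx, float)
    return f(xc - g * dx), f(xc + g * dx)

# toy usage where the response is monotonic but strongly nonlinear
f = lambda x: np.exp(x[0]) + 3.0 / x[1]
grad = lambda x: np.array([np.exp(x[0]), -3.0 / x[1] ** 2])
xc, dx = np.array([1.0, 2.0]), np.array([0.4, 0.5])
print("Taylor extension :", taylor_interval(f, grad, xc, dx))
print("Monotonic variant:", monotone_interval(f, grad, xc, dx))
```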
Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi
2014-04-01
Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important approaches for dealing with censored data. However, most interpolation methods replace the censored data with exact values, which distorts the real distribution of the censored data and reduces the probability that the real data fall within the interpolated values. In order to solve this problem, we propose in this paper a nonparametric method for estimating the survival function of right-censored and interval-censored data and compare its performance to the self-consistent (SC) algorithm. Compared with the average interpolation and the nearest-neighbor interpolation methods, the proposed method replaces right-censored data with interval-censored data and greatly improves the probability of the real data falling into the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrate that the proposed method has higher accuracy and better robustness for different proportions of censored data. This paper thus provides a sound method for comparing the performance of clinical treatments through estimation of patient survival data and offers some help to medical survival data analysis.
Parameter identification for structural dynamics based on interval analysis algorithm
NASA Astrophysics Data System (ADS)
Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke
2018-04-01
A parameter identification method using an interval analysis algorithm for structural dynamics is presented in this paper. The proposed uncertain identification method is investigated using the central difference method and an ARMA system. With the help of the fixed-memory least squares method and the matrix inversion lemma, a set-membership identification technique is applied to obtain the best estimates of the identified parameters in a tight and accurate region. To overcome the lack of sufficient statistical descriptions of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as the bounds of the uncertainties are known, this algorithm can obtain not only the central estimates of the parameters, but also the bounds of the errors. To improve the efficiency of the proposed method, a time-saving algorithm based on a recursive formula is presented. Finally, to verify the accuracy of the proposed method, two numerical examples are evaluated with respect to three identification criteria.
Robotic fish tracking method based on suboptimal interval Kalman filter
NASA Astrophysics Data System (ADS)
Tong, Xiaohong; Tang, Chao
2017-11-01
Autonomous Underwater Vehicle (AUV) research has focused on tracking and positioning, precise guidance, return to dock and other fields. The robotic fish, as an AUV, has become a popular application in intelligent education as well as in civil and military domains. In nonlinear tracking analysis of robotic fish, it was found that the interval Kalman filter algorithm contains all possible filter results, but the range is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimization algorithm, the suboptimal interval Kalman filter. The suboptimal interval Kalman filter scheme replaces the interval inverse matrix with its worst-case inverse, approximates the nonlinear state and measurement equations more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte-Carlo simulation results show that the trajectory obtained by the suboptimal interval Kalman filter algorithm is better than those of the interval Kalman filter method and the standard filter.
Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang
2013-01-01
The frequent occurrence of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods treating intervals as fixed points, which are general practice in the pharmaceutical industry, sometimes yield inferior or, in extreme cases, even flawed analysis results for interval-censored data. In this article, we examine the limitations of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, methods that are more sophisticated but robust. Trial design issues involved with interval-censored data comprise another topic explored in this article. Unlike right-censored survival data, the expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency contradicts many fundamental assumptions and principles in conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and the available tools to work around their impacts. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.
Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C
2016-08-01
Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M
2012-08-01
This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
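The paper implements this with SOLVER and a Monte-Carlo loop inside Excel; the sketch below reproduces the same idea in Python (fit once, refit many simulated 'virtual' data sets, take percentiles and correlations of the refitted parameters). The logistic-type growth curve and all parameter values are illustrative stand-ins, not the authors' models or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def mc_parameter_cis(model, x, y, p0, n_sets=200, alpha=0.05, seed=0):
    """Monte Carlo confidence intervals for nonlinear fit parameters:
    fit once, resample residuals onto the fitted curve, refit each virtual
    data set, and take percentiles and correlations of the refits."""
    rng = np.random.default_rng(seed)
    p_hat, _ = curve_fit(model, x, y, p0=p0)
    resid = y - model(x, *p_hat)
    sims = []
    for _ in range(n_sets):
        y_sim = model(x, *p_hat) + rng.choice(resid, size=len(x), replace=True)
        p_sim, _ = curve_fit(model, x, y_sim, p0=p_hat)
        sims.append(p_sim)
    sims = np.array(sims)
    lo, hi = np.percentile(sims, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return p_hat, lo, hi, np.corrcoef(sims, rowvar=False)

# toy usage: a logistic-type growth curve standing in for a microbial growth model
growth = lambda t, ymax, mu, lag: ymax / (1 + np.exp(4 * mu * (lag - t) / ymax + 2))
t = np.linspace(0, 24, 25)
rng = np.random.default_rng(1)
y = growth(t, 9.0, 1.2, 4.0) + 0.15 * rng.standard_normal(t.size)
print(mc_parameter_cis(growth, t, y, p0=[8.0, 1.0, 3.0]))
```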
Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin
2013-01-01
Background: The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self similarity over long time scales by a power law in the frequency spectrum S(f) = 1/f^β. The scaling exponent β is thus often interpreted as a “biomarker” of relative health and decline. New Method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis is to complement previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, as the most prevalent method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series.
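Of the techniques compared above, detrended fluctuation analysis is the simplest to outline (even though the study ultimately favours the averaged wavelet coefficient method); a bare-bones sketch follows, with the scale choices and the α-to-β relation stated as the usual rule of thumb rather than the paper's exact settings.

```python
import numpy as np

def dfa_alpha(x, scales=None):
    """Detrended fluctuation analysis (DFA) scaling exponent alpha.

    For 1/f^beta noise the spectral index relates to alpha roughly as
    beta ~ 2*alpha - 1, which is how DFA is commonly used to estimate beta.
    """
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())                           # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(np.log10(4), np.log10(len(x) // 4), 15).astype(int))
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[: n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        rms = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2) for seg in segs]
        F.append(np.sqrt(np.mean(rms)))                   # fluctuation at scale n
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

# white noise should give alpha ~ 0.5 (beta ~ 0); 1/f noise gives alpha ~ 1
print(dfa_alpha(np.random.default_rng(0).standard_normal(5000)))
```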
Harari, Gil
2014-01-01
Statistical significance, also known as the p-value, and the confidence interval (CI) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about the statistical probability and the conclusions regarding the clinical significance of study findings. This article is intended to describe the two methodologies, compare them, assess their suitability for the different needs of study result analysis, and explain the situations in which each method should be used.
Coley-Grant, Deon; Herbert, Mike; Cornes, Michael P; Barlow, Ian M; Ford, Clare; Gama, Rousseau
2016-01-01
We studied the impact on reference intervals, classification of patients with hypoalbuminaemia and albumin infusion prescriptions of changing from a bromocresol green (BCG) to a bromocresol purple (BCP) serum albumin assay. Passing-Bablok regression analysis and a Bland-Altman plot were used to compare the Abbott BCP and Roche BCG methods. Linear regression analysis was used to compare in-house and external laboratory Abbott BCP serum albumin results. Reference intervals for Abbott BCP serum albumin were derived in two different laboratories using pathology data from adult patients in primary care. Prescriptions for 20% albumin infusions were compared one year before and one year after changing the albumin method. The Abbott BCP assay had a negative bias of approximately 6 g/L compared with the Roche BCG method. There was good agreement (y = 1.04x - 1.03; R² = 0.9933) between in-house and external laboratory Abbott BCP results. Reference intervals for the serum albumin Abbott BCP assay were 31-45 g/L, different from those recommended by Pathology Harmony and the manufacturers (35-50 g/L). Following the change in method there was a large increase in the number of patients classified as hypoalbuminaemic using Pathology Harmony reference intervals (32%) but not when retrospectively compared to locally derived reference intervals (16%), compared with the previous year (12%). The method change was associated with a 44.6% increase in albumin prescriptions. This equated to an annual increase in expenditure of £35,234. We suggest that serum albumin reference intervals be method specific to prevent misclassification of albumin status in patients. A change in albumin methodology may have a significant impact on hospital resources. © The Author(s) 2015.
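For the agreement step, a Bland-Altman plot is straightforward to reproduce; the sketch below uses simulated paired albumin results with a constant offset of about 6 g/L purely for illustration, and the Passing-Bablok regression part is not implemented here.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(a, b, label_a="BCG", label_b="BCP"):
    """Bland-Altman agreement plot: mean of the two methods vs. difference,
    with the mean bias and 95% limits of agreement (bias +/- 1.96 SD)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mean, diff = (a + b) / 2, b - a
    bias, sd = diff.mean(), diff.std(ddof=1)
    plt.scatter(mean, diff, s=10)
    for y in (bias, bias - 1.96 * sd, bias + 1.96 * sd):
        plt.axhline(y, linestyle="--")
    plt.xlabel(f"Mean of {label_a} and {label_b} albumin (g/L)")
    plt.ylabel(f"{label_b} - {label_a} (g/L)")
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# toy usage: simulated paired results with a constant negative offset
rng = np.random.default_rng(0)
bcg = rng.normal(38, 5, 100)
bcp = bcg - 6 + rng.normal(0, 1.5, 100)
print(bland_altman(bcg, bcp))
plt.show()
```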
Indirect methods for reference interval determination - review and recommendations.
Jones, Graham R D; Haeckel, Rainer; Loh, Tze Ping; Sikaris, Ken; Streichert, Thomas; Katayev, Alex; Barth, Julian H; Ozarda, Yesim
2018-04-19
Reference intervals are a vital part of the information supplied by clinical laboratories to support interpretation of numerical pathology results such as are produced in clinical chemistry and hematology laboratories. The traditional method for establishing reference intervals, known as the direct approach, is based on collecting samples from members of a preselected reference population, making the measurements and then determining the intervals. An alternative approach is to perform analysis of results generated as part of routine pathology testing and using appropriate statistical techniques to determine reference intervals. This is known as the indirect approach. This paper from a working group of the International Federation of Clinical Chemistry (IFCC) Committee on Reference Intervals and Decision Limits (C-RIDL) aims to summarize current thinking on indirect approaches to reference intervals. The indirect approach has some major potential advantages compared with direct methods. The processes are faster, cheaper and do not involve patient inconvenience, discomfort or the risks associated with generating new patient health information. Indirect methods also use the same preanalytical and analytical techniques used for patient management and can provide very large numbers for assessment. Limitations to the indirect methods include possible effects of diseased subpopulations on the derived interval. The IFCC C-RIDL aims to encourage the use of indirect methods to establish and verify reference intervals, to promote publication of such intervals with clear explanation of the process used and also to support the development of improved statistical techniques for these studies.
Nafisi Moghadam, Reza; Amlelshahbaz, Amir Pasha; Namiranian, Nasim; Sobhan-Ardekani, Mohammad; Emami-Meybodi, Mahmood; Dehghan, Ali; Rahmanian, Masoud; Razavi-Ratki, Seid Kazem
2017-12-28
Objective: Ultrasonography (US) and parathyroid scintigraphy (PS) with 99mTc-MIBI are common methods for preoperative localization of parathyroid adenomas, but discrepancies exist with regard to their diagnostic accuracy. The aim of the study was to compare PS and US for localization of parathyroid adenoma with a systematic review and meta-analysis of the literature. Methods: PubMed, Scopus (Embase), Web of Science and the reference lists of all included studies were searched up to 1st January 2016. The search strategy was constructed according to PICO characteristics. Heterogeneity between the studies was assessed at P < 0.1. Point estimates were the pooled sensitivity, specificity and positive predictive value of SPECT and ultrasonography, with 99% confidence intervals (CIs), obtained by pooling the available data. Data analysis was performed using Meta-DiSc software (version 1.4). Results: Among 188 studies, and after deletion of duplicated studies (75), a total of 113 titles and abstracts were reviewed. From these, 12 studies were selected. The meta-analysis determined a pooled sensitivity for scintigraphy of 83% [99% confidence interval (CI) 96.358-97.412] and for ultrasonography of 80% [99% confidence interval (CI) 76-83]. Similar results for specificity were also obtained for both approaches. Conclusion: According to this meta-analysis, there were no significant differences between the two methods in terms of sensitivity and specificity; the 99% confidence intervals overlapped, and the features of the two methods are similar.
NASA Technical Reports Server (NTRS)
Makikallio, T. H.; Koistinen, J.; Jordaens, L.; Tulppo, M. P.; Wood, N.; Golosarsky, B.; Peng, C. K.; Goldberger, A. L.; Huikuri, H. V.
1999-01-01
The traditional methods of analyzing heart rate (HR) variability have failed to predict imminent ventricular fibrillation (VF). We sought to determine whether new methods of analyzing RR interval variability based on nonlinear dynamics and fractal analysis may help to detect subtle abnormalities in RR interval behavior before the onset of life-threatening arrhythmias. RR interval dynamics were analyzed from 24-hour Holter recordings of 15 patients who experienced VF during electrocardiographic recording. Thirty patients without spontaneous or inducible arrhythmia events served as a control group in this retrospective case control study. Conventional time- and frequency-domain measurements, the short-term fractal scaling exponent (alpha) obtained by detrended fluctuation analysis, and the slope (beta) of the power-law regression line (log power vs log frequency, 10^-4 to 10^-2 Hz) of RR interval dynamics were determined. The short-term correlation exponent alpha of RR intervals (0.64 +/- 0.19 vs 1.05 +/- 0.12; p < 0.001) and the power-law slope beta (-1.63 +/- 0.28 vs -1.31 +/- 0.20, p < 0.001) were lower in the patients before the onset of VF than in the control patients, but the SD and the low-frequency spectral components of RR intervals did not differ between the groups. The short-term scaling exponent performed better than any other measurement of HR variability in differentiating between the patients with VF and controls. Altered fractal correlation properties of HR behavior precede the spontaneous onset of VF. Dynamic analysis methods of analyzing RR intervals may help to identify abnormalities in HR behavior before VF.
Implementing the measurement interval midpoint method for change estimation
James A. Westfall; Thomas Frieswyk; Douglas M. Griffith
2009-01-01
The adoption of nationally consistent estimation procedures for the Forest Inventory and Analysis (FIA) program mandates changes in the methods used to develop resource trend information. Particularly, it is prescribed that changes in tree status occur at the midpoint of the measurement interval to minimize potential bias. The individual-tree characteristics requiring...
Multilayer Perceptron for Robust Nonlinear Interval Regression Analysis Using Genetic Algorithms
2014-01-01
On the basis of fuzzy regression, computational models in intelligence such as neural networks have the capability to be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers for interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches depend on whether or not the collected data contain outliers. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses a multilayer perceptron to construct the robust nonlinear interval regression model using a genetic algorithm. Outliers beyond or beneath the data interval impose only a slight effect on the determination of the data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.
Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.
Hu, Yi-Chung
2014-01-01
On the basis of fuzzy regression, computational models in intelligence such as neural networks have the capability to be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers for interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches depend on whether or not the collected data contain outliers. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses a multilayer perceptron to construct the robust nonlinear interval regression model using a genetic algorithm. Outliers beyond or beneath the data interval impose only a slight effect on the determination of the data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.
Confidence Intervals for True Scores Using the Skew-Normal Distribution
ERIC Educational Resources Information Center
Garcia-Perez, Miguel A.
2010-01-01
A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…
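For reference, the Score method referred to here corresponds to the Wilson interval for a binomial proportion, based on the normal approximation to the binomial; a minimal sketch follows (the article's rescaling to the true raw-score metric and the skew-normal alternative are not shown).

```python
from math import sqrt
from scipy.stats import norm

def wilson_score_interval(k, n, conf=0.95):
    """Score (Wilson) confidence interval for a binomial proportion k/n."""
    z = norm.ppf(0.5 + conf / 2)
    p = k / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# e.g. 32 items answered correctly out of a 40-item test
print(wilson_score_interval(32, 40))
```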
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component affecting the system reliability. Here, weakest t-norm based intuitionistic fuzzy fault-tree analysis is presented to calculate the fault intervals of system components by integrating experts' knowledge and experience expressed as the possibility of failure of bottom events. It applies fault-tree analysis, α-cuts of intuitionistic fuzzy sets and Tω (weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. For numerical verification, a malfunction of the "automatic gun" weapon system is presented as an example. The results of the proposed method are compared with those of existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin
2014-01-30
The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self similarity over long time scales by a power law in the frequency spectrum S(f) = 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis is to complement previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis as most prevailing method in the literature exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. Copyright © 2013 Elsevier B.V. All rights reserved.
Dual ant colony operational modal analysis parameter estimation method
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated by the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain while others operate in the frequency domain; the former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding the issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
Multiscale multifractal DCCA and complexity behaviors of return intervals for Potts price model
NASA Astrophysics Data System (ADS)
Wang, Jie; Wang, Jun; Stanley, H. Eugene
2018-02-01
To investigate the characteristics of extreme events in financial markets and the corresponding return intervals among these events, we use a Potts dynamic system to construct a random financial time series model of the attitudes of market traders. We use multiscale multifractal detrended cross-correlation analysis (MM-DCCA) and Lempel-Ziv complexity (LZC) to perform a numerical study of the return intervals for two significant Chinese stock market indices and for the proposed model. The new MM-DCCA method is based on the Hurst surface and provides more interpretable cross-correlations of the dynamic mechanism between different return interval series. We scale the LZC method with different exponents to illustrate the complexity of return intervals at different scales. Empirical studies indicate that the proposed return intervals from the Potts system and the real stock market indices hold similar statistical properties.
Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-31
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.
Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-01
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.
Lee, Yu-Hao; Hsieh, Ya-Ju; Shiah, Yung-Jong; Lin, Yu-Huei; Chen, Chiao-Yun; Tyan, Yu-Chang; GengQiu, JiaCheng; Hsu, Chung-Yao; Chen, Sharon Chia-Ju
2017-01-01
To quantitate the meditation experience is a subjective and complex issue because it is confounded by many factors such as emotional state, method of meditation, and personal physical condition. In this study, we propose a strategy with a cross-sectional analysis to evaluate the meditation experience with 2 artificial intelligence techniques: artificial neural network and support vector machine. Within this analysis system, 3 features of the electroencephalography alpha spectrum and variant normalizing scaling are manipulated as the evaluating variables for the detection of accuracy. Thereafter, by modulating the sliding window (the period of the analyzed data) and the shifting interval of the window (the time interval to shift the analyzed data), the effect of immediate analysis for the 2 methods is compared. This analysis system is performed on 3 meditation groups, categorizing their meditation experiences in 10-year intervals from novice to junior and to senior. After an exhaustive calculation and cross-validation across all variables, a high accuracy rate of >98% is achievable under the criterion of a 0.5-minute sliding window and a 2-second shifting interval for both methods. In short, the minimum analyzable data length is 0.5 minute and the minimum recognizable temporal resolution is 2 seconds in the decision of meditative classification. Our proposed classifier of the meditation experience promotes a rapid evaluation system to distinguish meditation experience and a beneficial utilization of artificial techniques for big-data analysis.
Lee, Yu-Hao; Hsieh, Ya-Ju; Shiah, Yung-Jong; Lin, Yu-Huei; Chen, Chiao-Yun; Tyan, Yu-Chang; GengQiu, JiaCheng; Hsu, Chung-Yao; Chen, Sharon Chia-Ju
2017-04-01
To quantitate the meditation experience is a subjective and complex issue because it is confounded by many factors such as emotional state, method of meditation, and personal physical condition. In this study, we propose a strategy with a cross-sectional analysis to evaluate the meditation experience with 2 artificial intelligence techniques: artificial neural network and support vector machine. Within this analysis system, 3 features of the electroencephalography alpha spectrum and variant normalizing scaling are manipulated as the evaluating variables for the detection of accuracy. Thereafter, by modulating the sliding window (the period of the analyzed data) and the shifting interval of the window (the time interval to shift the analyzed data), the effect of immediate analysis for the 2 methods is compared. This analysis system is performed on 3 meditation groups, categorizing their meditation experiences in 10-year intervals from novice to junior and to senior. After an exhaustive calculation and cross-validation across all variables, a high accuracy rate of >98% is achievable under the criterion of a 0.5-minute sliding window and a 2-second shifting interval for both methods. In short, the minimum analyzable data length is 0.5 minute and the minimum recognizable temporal resolution is 2 seconds in the decision of meditative classification. Our proposed classifier of the meditation experience promotes a rapid evaluation system to distinguish meditation experience and a beneficial utilization of artificial techniques for big-data analysis.
Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine
2011-03-01
International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
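Reference Value Advisor itself is an Excel macro set; the sketch below is only a generic illustration, in Python, of a nonparametric reference interval (2.5th/97.5th percentiles) with bootstrap 90% confidence intervals of the limits, assuming simulated analyte values. It does not reproduce the macro's CLSI-style algorithms, outlier tests, or Box-Cox options.

```python
import numpy as np

rng = np.random.default_rng(42)
values = rng.lognormal(mean=1.0, sigma=0.25, size=120)   # simulated analyte results

def reference_limits(x):
    """Nonparametric 95% reference interval: 2.5th and 97.5th percentiles."""
    return np.percentile(x, [2.5, 97.5])

lower, upper = reference_limits(values)

# 90% confidence intervals of each reference limit by percentile bootstrap.
boot = np.array([reference_limits(rng.choice(values, size=values.size, replace=True))
                 for _ in range(5000)])
ci_lower = np.percentile(boot[:, 0], [5, 95])
ci_upper = np.percentile(boot[:, 1], [5, 95])

print(f"reference interval: {lower:.2f} - {upper:.2f}")
print(f"90% CI of lower limit: {ci_lower.round(2)}, of upper limit: {ci_upper.round(2)}")
```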
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on balance load prediction at a given confidence level. A method has been developed that accounts for sources of variability due to calibration and check-load application. The prediction interval calculation method and a case study demonstrating its use are provided. Validation of the method is demonstrated for the case study based on the probability of capture of confirmation points.
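The NASA report's interval accounts for calibration and check-load variability in its own specific way; as a baseline, the sketch below shows the textbook prediction interval for a new observation from a simple linear calibration fit, with hypothetical load/output numbers.

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data: applied load vs. balance output (one component).
applied = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0, 300.0])
output = np.array([0.02, 49.6, 100.3, 149.5, 200.8, 249.9, 300.4])

X = np.column_stack([np.ones_like(applied), applied])    # design matrix (intercept, load)
beta, *_ = np.linalg.lstsq(X, output, rcond=None)
n, p = X.shape
resid = output - X @ beta
s2 = resid @ resid / (n - p)                             # residual variance
XtX_inv = np.linalg.inv(X.T @ X)

def prediction_interval(x0, conf=0.95):
    """Two-sided prediction interval for a new check-load observation at load x0."""
    x0v = np.array([1.0, x0])
    se = np.sqrt(s2 * (1.0 + x0v @ XtX_inv @ x0v))
    t = stats.t.ppf(0.5 + conf / 2, df=n - p)
    yhat = x0v @ beta
    return yhat - t * se, yhat + t * se

lo, hi = prediction_interval(175.0)
print(f"95% prediction interval at a 175 check load: ({lo:.2f}, {hi:.2f})")
```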
Recurrence interval analysis of trading volumes
NASA Astrophysics Data System (ADS)
Ren, Fei; Zhou, Wei-Xing
2010-06-01
We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
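As a minimal illustration of the recurrence-interval construction described above (not the authors' data or their KS/Cramér-von Mises tests), the following sketch extracts intervals between exceedances of a volume threshold q from a synthetic series and builds the empirical survival function of those intervals.

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.lognormal(mean=10.0, sigma=1.0, size=10000)   # synthetic trading volumes

q = np.quantile(volume, 0.95)                 # threshold: top 5% of volumes
exceed_idx = np.flatnonzero(volume > q)       # time indices of exceedances
tau = np.diff(exceed_idx)                     # recurrence intervals between exceedances
print(f"threshold q = {q:.1f}, intervals = {tau.size}, mean interval = {tau.mean():.1f}")

# Empirical survival function of tau; a power-law tail would look roughly linear
# on log-log axes (the formal KS and Cramer-von Mises tests are omitted here).
tau_sorted = np.sort(tau)
survival = 1.0 - np.arange(1, tau_sorted.size + 1) / tau_sorted.size
```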
NASA Astrophysics Data System (ADS)
Javidnia, Katayoun; Parish, Maryam; Karimi, Sadegh; Hemmateenejad, Bahram
2013-03-01
By using FT-IR spectroscopy, many researchers from different disciplines enrich their experiments to obtain more precise information. Moreover, chemometric techniques have boosted the use of IR instruments. In the present study we aimed to emphasize the power of FT-IR spectroscopy for discriminating between different oil samples (especially animal fat from vegetable oils). Our data were also used to compare the performance of different classification methods. FT-IR transmittance spectra of oil samples (Corn, Canola, Sunflower, Soya, Olive, and Butter) were measured in the wave-number interval of 450-4000 cm-1. Classification analysis was performed using PLS-DA, interval PLS-DA, extended canonical variate analysis (ECVA) and interval ECVA methods. The effect of data preprocessing by extended multiplicative signal correction was investigated. While all employed methods could distinguish butter from the vegetable oils, iECVA gave the best performance for the calibration and external test sets, with 100% sensitivity and specificity.
A Comparison of Methods for Estimating Confidence Intervals for Omega-Squared Effect Size
ERIC Educational Resources Information Center
Finch, W. Holmes; French, Brian F.
2012-01-01
Effect size use has been increasing in the past decade in many research areas. Confidence intervals associated with effect sizes are encouraged to be reported. Prior work has investigated the performance of confidence interval estimation with Cohen's d. This study extends this line of work to the analysis of variance case with more than two…
Wang, Peijie; Zhao, Hui; Sun, Jianguo
2016-12-01
Interval-censored failure time data occur in many fields such as demography, economics, medical research, and reliability, and many inference procedures for them have been developed (Sun, 2006; Chen, Sun, and Peace, 2012). However, most of the existing approaches assume that the mechanism that yields interval censoring is independent of the failure time of interest, and it is clear that this may not be true in practice (Zhang et al., 2007; Ma, Hu, and Sun, 2015). In this article, we consider regression analysis of case K interval-censored failure time data when the censoring mechanism may be related to the failure time of interest. For the problem, an estimated sieve maximum-likelihood approach is proposed for data arising from the proportional hazards frailty model, and a two-step procedure is presented for estimation. In addition, the asymptotic properties of the proposed estimators of regression parameters are established, and an extensive simulation study suggests that the method works well. Finally, we apply the method to a set of real interval-censored data that motivated this study. © 2016, The International Biometric Society.
Monitoring molecular interactions using photon arrival-time interval distribution analysis
Laurence, Ted A [Livermore, CA]; Weiss, Shimon [Los Angeles, CA]
2009-10-06
A method for analyzing/monitoring the properties of species that are labeled with fluorophores. A detector is used to detect photons emitted from species that are labeled with one or more fluorophores and located in a confocal detection volume. The arrival time of each of the photons is determined. The interval of time between various photon pairs is then determined to provide photon pair intervals. The number of photons that have arrival times within the photon pair intervals is also determined. The photon pair intervals are then used in combination with the corresponding counts of intervening photons to analyze properties and interactions of the molecules including brightness, concentration, coincidence and transit time. The method can be used for analyzing single photon streams and multiple photon streams.
NASA Astrophysics Data System (ADS)
Ono, T.; Takahashi, T.
2017-12-01
Non-structural mitigation measures, such as flood hazard maps based on estimated inundation areas, have become more important because heavy rains exceeding the design rainfall have occurred frequently in recent years. However, the conventional method may underestimate the inundation area because the assumed dike breach locations in river flood analysis are limited to the reaches where the high-water level is exceeded. The objective of this study is to consider the uncertainty of the estimated inundation area caused by differences in the dike breach location in river flood analysis. This study proposes multiple flood scenarios that automatically set multiple dike breach locations in the river flood analysis. The underlying premise of the method is that the dike breach location cannot be predicted correctly. The proposed method uses the dike breach interval, i.e. the distance between adjacent dike breaches: multiple dike breach locations are set at every breach interval. The 2D shallow water equations were adopted as the governing equations of the river flood analysis, and a leap-frog scheme on a staggered grid was used. The river flood analysis was verified against the 2015 Kinugawa River flooding, and the proposed multiple flood scenarios were applied to the Akutagawa River in Takatsuki city. The computations for the Akutagawa River showed, by comparing the maximum inundation depths computed for adjacent dike breach locations, that the proposed method prevents underestimation of the estimated inundation area. Furthermore, the analyses of the spatial distribution of inundation class and of the maximum inundation depth at each measurement point identified the optimum breach interval, which evaluates the maximum inundation area using the minimum number of assumed dike breach locations. In brief, this study found the optimum dike breach interval for the Akutagawa River, which enables the maximum inundation area to be estimated efficiently and accurately. River flood analysis using the proposed method will contribute to flood disaster mitigation by improving the accuracy of the estimated inundation area.
A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications
Austin, Peter C.
2017-01-01
Summary Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata). PMID:29307954
NASA Astrophysics Data System (ADS)
Yin, Shengwen; Yu, Dejie; Yin, Hui; Lü, Hui; Xia, Baizhan
2017-09-01
Considering the epistemic uncertainties within the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model when it is used for the response analysis of built-up systems in the mid-frequency range, the hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis (ETFE/SEA) model is established by introducing evidence theory. Based on the hybrid ETFE/SEA model and the sub-interval perturbation technique, the hybrid Sub-interval Perturbation and Evidence Theory-based Finite Element/Statistical Energy Analysis (SIP-ETFE/SEA) approach is proposed. In the hybrid ETFE/SEA model, the uncertainty in the SEA subsystem is modeled by a non-parametric ensemble, while the uncertainty in the FE subsystem is described by focal elements and basic probability assignments (BPA) and handled with evidence theory. Within the hybrid SIP-ETFE/SEA approach, the mid-frequency responses of interest, such as the ensemble average of the energy response and the cross-spectrum response, are calculated analytically by using the conventional hybrid FE/SEA method. Inspired by probability theory, the intervals of the mean value, variance and cumulative distribution are used to describe the distribution characteristics of mid-frequency responses of built-up systems with epistemic uncertainties. In order to alleviate the computational burden of the extreme value analysis, the sub-interval perturbation technique based on the first-order Taylor series expansion is used in the ETFE/SEA model to acquire the lower and upper bounds of the mid-frequency responses over each focal element. Three numerical examples are given to illustrate the feasibility and effectiveness of the proposed method.
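The following sketch illustrates only the first-order (sub-)interval perturbation step on a placeholder scalar `response` of two parameters: the gradient at a focal-element midpoint bounds the response, and splitting the element into sub-intervals tightens the first-order bounds. It is not the FE/SEA energy response or the evidence-theory bookkeeping of the paper.

```python
import numpy as np

def response(x):
    """Placeholder scalar response of two uncertain parameters (not an FE/SEA model)."""
    k1, k2 = x
    return 1.0 / (k1 + 0.5 * k2) + 0.01 * k1 * k2

def first_order_bounds(center, half_width):
    """First-order Taylor (interval perturbation) bounds over one (sub-)interval box."""
    center = np.asarray(center, float)
    half_width = np.asarray(half_width, float)
    grad = np.zeros_like(center)
    for j in range(center.size):              # central finite-difference gradient
        e = np.zeros_like(center)
        e[j] = 1e-6 * max(abs(center[j]), 1.0)
        grad[j] = (response(center + e) - response(center - e)) / (2 * e[j])
    y0 = response(center)
    spread = np.abs(grad) @ half_width        # worst-case first-order deviation
    return y0 - spread, y0 + spread

# One focal element [1.8, 2.2] x [0.9, 1.1]; split along the first parameter into
# sub-intervals to reduce the first-order error, then take the envelope of bounds.
edges = np.linspace(1.8, 2.2, 5)
lowers, uppers = [], []
for a, b in zip(edges[:-1], edges[1:]):
    lo, hi = first_order_bounds([(a + b) / 2, 1.0], [(b - a) / 2, 0.1])
    lowers.append(lo)
    uppers.append(hi)
print(f"response bounds over the focal element: [{min(lowers):.4f}, {max(uppers):.4f}]")
```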
Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number
Fragkos, Konstantinos C.; Tsagris, Michail; Frangos, Christos C.
2014-01-01
The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470
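For orientation, the classical fail-safe number itself is simple to compute; the sketch below implements Rosenthal's point estimator from one-tailed study z statistics (hypothetical values). The paper's contribution, the variance estimators and confidence intervals for this quantity, is not reproduced here.

```python
import numpy as np
from scipy import stats

def fail_safe_n(z_values, alpha=0.05):
    """Rosenthal's fail-safe number from the studies' one-tailed z statistics."""
    z = np.asarray(z_values, float)
    k = z.size
    z_alpha = stats.norm.ppf(1 - alpha)        # 1.645 for a one-tailed alpha of 0.05
    return (z.sum() ** 2) / z_alpha ** 2 - k

z_values = [2.1, 1.8, 2.5, 0.9, 1.6]           # hypothetical per-study z statistics
print(f"fail-safe N = {fail_safe_n(z_values):.1f}")
```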
Method of high precision interval measurement in pulse laser ranging system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong
2013-09-01
Laser ranging has the advantages of high measurement precision, fast measurement speed, no need for cooperative targets, and strong resistance to electromagnetic interference, and the ranging measurement is a key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of its time interval measurement. This paper introduces the basic structure of a laser ranging system and establishes a method of high-precision time interval measurement for pulsed laser ranging. Based on an analysis of the factors that affect the range measurement precision, a rising-edge pulse discriminator was adopted to produce the timing marks for start-stop time discrimination, and a TDC-GP2 high-precision interval measurement system based on a TMS320F2812 DSP was designed to improve the measurement precision. Experimental results indicate that the time interval measurement method presented in this paper achieves higher range accuracy. Compared with traditional time interval measurement systems, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies the requirements of low cost and miniaturization.
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
A novel implementation of homodyne time interval analysis method for primary vibration calibration
NASA Astrophysics Data System (ADS)
Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo
2011-12-01
In this paper, the shortcomings of the conventional homodyne time interval analysis (TIA) method, and their causes, are described with respect to its software algorithm and hardware implementation, and on this basis a simplified TIA method is proposed with the help of virtual instrument technology. Equipped with an ordinary Michelson interferometer and a dual-channel synchronous data acquisition card, a primary vibration calibration system using the simplified method can accurately measure the complex sensitivity of accelerometers, meeting the uncertainty requirements laid down in the pertaining ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and comparison experiments, and its performance is analyzed. The simplified method is recommended for national metrology institutes of developing countries and industrial primary vibration calibration laboratories because of its simplified algorithm and low hardware requirements.
NASA Astrophysics Data System (ADS)
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists owing to their potential for accurate prediction of flood flows compared with conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between complex hydrologic variables to arrive at river flow forecast values. Despite a large number of applications, there is still criticism that an ANN's point prediction lacks reliability, since the uncertainty of the predictions is not quantified, and this limits its use in practical applications. A major concern in applying traditional uncertainty analysis techniques to the neural network framework is its parallel computing architecture with a large number of degrees of freedom, which makes the uncertainty assessment a challenging task. Very few studies have considered the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that constructs the prediction interval of an ANN flood forecasting model during calibration itself. The method has two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases, and in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. The second-stage optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) the maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble with an average prediction interval width of 23.03 m3/s, with 97.17% of the measured validation data points lying within the interval. The derived prediction interval for a selected hydrograph in the validation data set shows that most of the observed flows lie within the constructed prediction interval, which therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when the ensemble mean is taken as the forecast, the peak flows are predicted with improved accuracy compared with traditionally trained single-point-forecast ANNs. (Fig. 1: Prediction interval for a selected hydrograph.)
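The two quantities targeted by the second-stage objectives, interval coverage and interval width, can be computed directly from an ensemble of forecasts. The sketch below does so for a synthetic ensemble; it does not include the GA-trained ANN or the multi-objective calibration itself.

```python
import numpy as np

def interval_metrics(y_obs, lower, upper):
    """Coverage (PICP, %) and average width of a prediction interval."""
    y_obs, lower, upper = map(np.asarray, (y_obs, lower, upper))
    inside = (y_obs >= lower) & (y_obs <= upper)
    return 100.0 * inside.mean(), (upper - lower).mean()

# Hypothetical ensemble of flow forecasts (rows = ensemble members).
rng = np.random.default_rng(7)
y_obs = rng.gamma(shape=3.0, scale=30.0, size=200)            # observed flows (m3/s)
ensemble = y_obs + rng.normal(0.0, 15.0, size=(50, 200))      # ensemble predictions
lower, upper = np.percentile(ensemble, [2.5, 97.5], axis=0)   # interval from the ensemble

picp, width = interval_metrics(y_obs, lower, upper)
print(f"PICP = {picp:.1f}%, average interval width = {width:.1f} m3/s")
```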
Nie, Xianghui; Huang, Guo H; Li, Yongping
2009-11-01
This study integrates the concepts of interval numbers and fuzzy sets into optimization analysis by dynamic programming as a means of accounting for system uncertainty. The developed interval fuzzy robust dynamic programming (IFRDP) model improves upon previous interval dynamic programming methods. It allows highly uncertain information to be effectively communicated into the optimization process through introducing the concept of fuzzy boundary interval and providing an interval-parameter fuzzy robust programming method for an embedded linear programming problem. Consequently, robustness of the optimization process and solution can be enhanced. The modeling approach is applied to a hypothetical problem for the planning of waste-flow allocation and treatment/disposal facility expansion within a municipal solid waste (MSW) management system. Interval solutions for capacity expansion of waste management facilities and relevant waste-flow allocation are generated and interpreted to provide useful decision alternatives. The results indicate that robust and useful solutions can be obtained, and the proposed IFRDP approach is applicable to practical problems that are associated with highly complex and uncertain information.
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and conventional frequentist analyses of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for cases in which the source is present only briefly (shorter than the count time), time-interval information is more sensitive for detecting a change than count information, since the source counts are averaged with the background counts over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
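A convenient feature of time-interval data is the conjugate Bayesian update for a Poisson-process count rate: with a Gamma(a0, b0) prior, observing n pulses with total waiting time T gives a Gamma(a0 + n, b0 + T) posterior. The sketch below shows that update with hypothetical rates and priors; the paper's decision thresholds and ARL analysis are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated detector pulses: exponential inter-arrival times at 12 counts/s
# (a hypothetical background of ~5 counts/s plus a weak source).
true_rate = 12.0
intervals = rng.exponential(1.0 / true_rate, size=40)

# Conjugate Gamma(shape a0, rate b0) prior on the count rate, centred on background.
a0, b0 = 5.0, 1.0                          # prior mean a0 / b0 = 5 counts/s
a_post = a0 + intervals.size               # each recorded interval ends with one pulse
b_post = b0 + intervals.sum()              # total observation time

posterior = stats.gamma(a=a_post, scale=1.0 / b_post)
background = 5.0
print(f"posterior mean rate = {posterior.mean():.2f} counts/s")
print(f"P(rate > background) = {1.0 - posterior.cdf(background):.3f}")
```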
Fuzzy rationality and parameter elicitation in decision analysis
NASA Astrophysics Data System (ADS)
Nikolova, Natalia D.; Tenekedjiev, Kiril I.
2010-07-01
It is widely recognised by decision analysts that real decision-makers always make estimates in an interval form. An overview of techniques for finding an optimal alternative among alternatives with imprecise and interval probabilities is presented. Scalarisation methods are outlined as the most appropriate. A proper continuation of such techniques is fuzzy rational (FR) decision analysis. A detailed representation of the elicitation process as influenced by fuzzy rationality is given. The interval character of probabilities leads to the introduction of ribbon functions, whose general form and special cases are compared with p-boxes. As demonstrated, the approximation of utilities in FR decision analysis does not depend on the probabilities, but the approximation of probabilities is dependent on preferences.
Power in Bayesian Mediation Analysis for Small Sample Research
Miočević, Milica; MacKinnon, David P.; Levy, Roy
2018-01-01
It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N≤ 200. Bayesian methods with diffuse priors have power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N≥ 100 and the effects were small, N < 60 and the effects were large, and N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results. PMID:29662296
Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.
Bishara, Anthony J; Li, Jiexiang; Nash, Thomas
2018-02-01
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the (Vale & Maurelli, 1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
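For context, the default interval that the paper sets out to improve is the Fisher z' interval, which assumes bivariate normality. A minimal sketch of that baseline is given below; the skewness/kurtosis-based adjustment proposed in the paper is not reproduced.

```python
import numpy as np
from scipy import stats

def fisher_z_ci(r, n, conf=0.95):
    """Default Fisher z' confidence interval for a Pearson correlation."""
    z = np.arctanh(r)                      # Fisher transformation
    se = 1.0 / np.sqrt(n - 3)              # standard error under bivariate normality
    zcrit = stats.norm.ppf(0.5 + conf / 2)
    return np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)

print(fisher_z_ci(r=0.45, n=80))           # approximately (0.26, 0.61)
```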
Marques Junior, Jucelino Medeiros; Muller, Aline Lima Hermes; Foletto, Edson Luiz; da Costa, Adilson Ben; Bizzi, Cezar Augusto; Irineu Muller, Edson
2015-01-01
A method for the determination of propranolol hydrochloride in pharmaceutical preparations using near-infrared spectrometry with a fiber optic probe (FTNIR/PROBE) combined with chemometric methods was developed. Calibration models were built using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Treatments based on mean-centered data and multiplicative scatter correction (MSC) were selected for model construction. A root mean square error of prediction (RMSEP) of 8.2 mg g(-1) was achieved using the siPLS (s2i20PLS) algorithm, with the spectra divided into 20 intervals and a combination of 2 intervals (8501 to 8801 and 5201 to 5501 cm(-1)). Results obtained by the proposed method were compared with those of the pharmacopoeia reference method, and no significant difference was observed. The proposed method therefore allows a fast, precise, and accurate determination of propranolol hydrochloride in pharmaceutical preparations. Furthermore, it makes on-line analysis of this active principle in pharmaceutical formulations possible with the use of a fiber optic probe.
Jackson, Dan; Bowden, Jack
2016-09-07
Confidence intervals for the between study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95 % confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5 %. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95 % confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95 % confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4 % split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
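A hedged sketch of the underlying idea follows: invert the generalised (Q-profile) heterogeneity statistic using unequal chi-square tail probabilities, e.g. the '1-4% split' with the larger tail probability on the upper bound. The study data are hypothetical, and truncation at zero is handled crudely; this is not the authors' exact procedure.

```python
import numpy as np
from scipy import stats, optimize

def generalised_q(tau2, y, v):
    """Generalised heterogeneity statistic at a candidate between-study variance."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2)

def tau2_ci(y, v, lower_tail=0.01, upper_tail=0.04):
    """Q-profile interval for tau^2 with unequal tail probabilities ('1-4% split')."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    df = y.size - 1
    hi_crit = stats.chi2.ppf(1 - lower_tail, df)   # Q decreases in tau2, so the larger
    lo_crit = stats.chi2.ppf(upper_tail, df)       # critical value yields the lower bound
    def bound(crit):
        if generalised_q(0.0, y, v) <= crit:       # bound truncated at zero
            return 0.0
        return optimize.brentq(lambda t2: generalised_q(t2, y, v) - crit, 0.0, 1e4)
    return bound(hi_crit), bound(lo_crit)

# Hypothetical meta-analysis: study effect estimates and within-study variances.
y = np.array([0.30, 0.10, 0.55, -0.05, 0.40, 0.25])
v = np.array([0.04, 0.03, 0.06, 0.05, 0.04, 0.07])
print("95% CI for tau^2 with the 1-4% split:", tau2_ci(y, v))
```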
An analysis of USSPACECOM's space surveillance network sensor tasking methodology
NASA Astrophysics Data System (ADS)
Berger, Jeff M.; Moles, Joseph B.; Wilsey, David G.
1992-12-01
This study provides the basis for the development of a cost/benefit assessment model to determine the effects of alterations to the Space Surveillance Network (SSN) on orbital element (OE) set accuracy. It provides a review of current methods used by NORAD and the SSN to gather and process observations, an alternative to the current Gabbard classification method, and the development of a model to determine the effects of observation rate and correction interval on OE set accuracy. The proposed classification scheme is based on satellite J2 perturbations. Specifically, classes were established based on mean motion, eccentricity, and inclination since J2 perturbation effects are functions of only these elements. Model development began by creating representative sensor observations using a highly accurate orbital propagation model. These observations were compared to predicted observations generated using the NORAD Simplified General Perturbation (SGP4) model and differentially corrected using a Bayes, sequential estimation, algorithm. A 10-run Monte Carlo analysis was performed using this model on 12 satellites using 16 different observation rate/correction interval combinations. An ANOVA and confidence interval analysis of the results show that this model does demonstrate the differences in steady state position error based on varying observation rate and correction interval.
Magnetic Resonance Fingerprinting with short relaxation intervals.
Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter
2017-09-01
The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially resolved MRF. Copyright © 2017 Elsevier Inc. All rights reserved.
On Latent Change Model Choice in Longitudinal Studies
ERIC Educational Resources Information Center
Raykov, Tenko; Zajacova, Anna
2012-01-01
An interval estimation procedure for proportion of explained observed variance in latent curve analysis is discussed, which can be used as an aid in the process of choosing between linear and nonlinear models. The method allows obtaining confidence intervals for the R[squared] indexes associated with repeatedly followed measures in longitudinal…
Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China
Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li’an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling
2016-01-01
Abstract A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as a part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area in China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. The need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box–Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were observed in 12 and 12 analytes, respectively. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in the results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established by use of the latest methodology. Reference intervals of 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas others can be used as the assay system-specific reference intervals in China. PMID:26945390
Interval analysis of interictal EEG: pathology of the alpha rhythm in focal epilepsy
NASA Astrophysics Data System (ADS)
Pyrzowski, Jan; Siemiński, Mariusz; Sarnowska, Anna; Jedrzejczak, Joanna; Nyka, Walenty M.
2015-11-01
The contemporary use of interictal scalp electroencephalography (EEG) in the context of focal epilepsy workup relies on the visual identification of interictal epileptiform discharges. The high-specificity performance of this marker comes, however, at a cost of only moderate sensitivity. Zero-crossing interval analysis is an alternative to Fourier analysis for the assessment of the rhythmic component of EEG signals. We applied this method to standard EEG recordings of 78 patients divided into 4 subgroups: temporal lobe epilepsy (TLE), frontal lobe epilepsy (FLE), psychogenic nonepileptic seizures (PNES) and nonepileptic patients with headache. Interval-analysis based markers were capable of effectively discriminating patients with epilepsy from those in control subgroups (AUC~0.8) with diagnostic sensitivity potentially exceeding that of visual analysis. The identified putative epilepsy-specific markers were sensitive to the properties of the alpha rhythm and displayed weak or non-significant dependences on the number of antiepileptic drugs (AEDs) taken by the patients. Significant AED-related effects were concentrated in the theta interval range and an associated marker allowed for identification of patients on AED polytherapy (AUC~0.9). Interval analysis may thus, in perspective, increase the diagnostic yield of interictal scalp EEG. Our findings point to the possible existence of alpha rhythm abnormalities in patients with epilepsy.
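Zero-crossing interval analysis itself is straightforward to compute; the sketch below extracts half-wave intervals from a synthetic 10 Hz signal and reports the fraction falling in the alpha band. The clinical markers and statistics used in the study are not reproduced.

```python
import numpy as np

fs = 200.0                                          # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(5)
# Synthetic single-channel signal: a 10 Hz alpha-like component plus noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Zero-crossing times (sign changes between consecutive samples).
crossings = np.flatnonzero(np.diff(np.sign(eeg)) != 0) / fs
half_wave_intervals = np.diff(crossings)            # seconds between crossings

# Equivalent frequency of each half-wave: f = 1 / (2 * interval).
freqs = 1.0 / (2.0 * half_wave_intervals)
alpha_fraction = np.mean((freqs >= 8) & (freqs <= 13))
print(f"fraction of half-waves in the alpha band: {alpha_fraction:.2f}")
```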
1985-03-01
distribution. Samples of suspended particulates will also be collected for later image and elemental analysis. Method of analysis for particle...will be flow injection analysis. This method will allow rapid, continuous analysis of seawater nutrients. Measurements will be made at one minute...5 m intervals) as well as from the underway pumping system. Method of pigment analysis for porphyrin and carotenoid pigments will be separation by
A method for meta-analysis of epidemiological studies.
Einarson, T R; Leeder, J S; Koren, G
1988-10-01
This article presents a stepwise approach for conducting a meta-analysis of epidemiological studies based on proposed guidelines. This systematic method is recommended for practitioners evaluating epidemiological studies in the literature to arrive at an overall quantitative estimate of the impact of a treatment. Bendectin is used as an illustrative example. Meta-analysts should establish a priori the purpose of the analysis and a complete protocol. This protocol should be adhered to, and all steps performed should be recorded in detail. To aid in developing such a protocol, we present methods the researcher can use to perform each of 22 steps in six major areas. The illustrative meta-analysis confirmed previous traditional narrative literature reviews that Bendectin is not related to teratogenic outcomes in humans. The overall summary odds ratio was 1.01 (chi-square = 0.05, p = 0.815) with a 95 percent confidence interval of 0.66-1.55. When the studies were separated according to study type, the summary odds ratio for cohort studies was 0.95 with a 95 percent confidence interval of 0.62-1.45. For case-control studies, the summary odds ratio was 1.27 with a 95 percent confidence interval of 0.83-1.94. The corresponding chi-square values were not statistically significant at the p = 0.05 level.
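As a small illustration of the pooling step that produces a summary odds ratio with a confidence interval, the sketch below applies generic inverse-variance pooling of log odds ratios to hypothetical study results; the article's full 22-step protocol is not reproduced, and its actual pooling estimator may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study odds ratios with their 95% confidence limits.
or_est = np.array([0.90, 1.10, 1.30, 0.75, 1.05])
ci_low = np.array([0.60, 0.70, 0.80, 0.40, 0.65])
ci_high = np.array([1.35, 1.73, 2.11, 1.41, 1.70])

log_or = np.log(or_est)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)    # back out SEs from the CIs
w = 1.0 / se ** 2                                       # inverse-variance weights

pooled_log = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
zcrit = stats.norm.ppf(0.975)
ci = np.exp([pooled_log - zcrit * pooled_se, pooled_log + zcrit * pooled_se])

chi2_null = (pooled_log / pooled_se) ** 2               # test of the summary OR against 1
p_null = 1 - stats.chi2.cdf(chi2_null, df=1)
print(f"summary OR = {np.exp(pooled_log):.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), "
      f"chi-square = {chi2_null:.2f}, p = {p_null:.3f}")
```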
NASA Astrophysics Data System (ADS)
Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang
2017-12-01
Research in various fields generally investigates systems and involves latent variables. One method for analyzing a model representing such a system is path analysis. Latent variables measured using questionnaires with an attitude-scale model yield data in the form of scores, which should be transformed into scale data before analysis. Path coefficients, the parameter estimators, are calculated from scale data obtained using the method of successive intervals (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better. Path coefficients with smaller variances are said to be more efficient, so the transformation method that produces scaled data leading to path coefficients (parameter estimators) with smaller variances is said to be better. The analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that analyses using the MSI and SRS transformations are equally efficient. On the other hand, for simulated data with high correlation between items (0.7-0.9), the MSI method is 1.3 times more efficient than the SRS method.
Zhang, Ji; Li, Bing; Wang, Qi; Wei, Xin; Feng, Weibo; Chen, Yijiu; Huang, Ping; Wang, Zhenyuan
2017-12-21
Postmortem interval (PMI) evaluation remains a challenge in the forensic community due to the lack of efficient methods. Studies have focused on chemical analysis of biofluids for PMI estimation; however, no reports using spectroscopic methods in pericardial fluid (PF) are available. In this study, Fourier transform infrared (FTIR) spectroscopy with attenuated total reflectance (ATR) accessory was applied to collect comprehensive biochemical information from rabbit PF at different PMIs. The PMI-dependent spectral signature was determined by two-dimensional (2D) correlation analysis. The partial least square (PLS) and nu-support vector machine (nu-SVM) models were then established based on the acquired spectral dataset. Spectral variables associated with amide I, amide II, COO - , C-H bending, and C-O or C-OH vibrations arising from proteins, polypeptides, amino acids and carbohydrates, respectively, were susceptible to PMI in 2D correlation analysis. Moreover, the nu-SVM model appeared to achieve a more satisfactory prediction than the PLS model in calibration; the reliability of both models was determined in an external validation set. The study shows the possibility of application of ATR-FTIR methods in postmortem interval estimation using PF samples.
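A hedged sketch of the regression step follows: sklearn's NuSVR applied to synthetic stand-in spectra whose band intensities drift with PMI. The variable names, band positions, and data are invented for illustration; the authors' calibrated models and the 2D correlation analysis are not reproduced.

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)

# Synthetic stand-in for ATR-FTIR spectra of pericardial fluid: 60 samples,
# 200 wavenumber points, with two bands drifting linearly with PMI (invented).
n_samples, n_points = 60, 200
pmi_hours = rng.uniform(0, 48, size=n_samples)
spectra = rng.normal(0, 0.02, size=(n_samples, n_points))
spectra[:, 50:60] += 0.01 * pmi_hours[:, None]       # band increasing with PMI
spectra[:, 120:130] -= 0.005 * pmi_hours[:, None]    # band decreasing with PMI

model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
scores = cross_val_score(model, spectra, pmi_hours, cv=5,
                         scoring="neg_root_mean_squared_error")
print(f"cross-validated RMSE: {-scores.mean():.1f} h")
```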
Dynamic response analysis of structure under time-variant interval process model
NASA Astrophysics Data System (ADS)
Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao
2016-10-01
Due to aggressive environmental factors, variation of the dynamic load, degradation of material properties and wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model which can be effectively used to deal with time-variant uncertainties when only limited information is available. Two methods are then presented for the dynamic response analysis of structures under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is the Monte Carlo method based on the Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be calculated efficiently, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To address the dependency phenomenon of interval operations, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples, a spring-mass-damper system and a shell structure.
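The sketch below illustrates only the surrogate idea behind MCM-CPE in a static, one-parameter setting: fit a low-order Chebyshev expansion of a placeholder dynamic response over the parameter interval, then Monte Carlo sample the cheap surrogate to estimate the response bounds. The time-variant interval process and the affine-arithmetic treatment of dependency are not reproduced.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Placeholder dynamic response: peak FRF magnitude of a damped oscillator as a
# function of an uncertain stiffness k restricted to the interval [k_lo, k_hi].
def peak_response(k, m=1.0, c=0.3, f0=1.0):
    zeta = c / (2.0 * np.sqrt(k * m))
    return f0 / (2.0 * k * zeta * np.sqrt(1.0 - zeta ** 2))

k_lo, k_hi = 800.0, 1200.0
nodes = np.linspace(k_lo, k_hi, 9)

# Cheap surrogate: low-order Chebyshev expansion fitted on the interval, then
# Monte Carlo sampling of the surrogate to estimate the response bounds.
surrogate = C.Chebyshev.fit(nodes, peak_response(nodes), deg=4, domain=[k_lo, k_hi])
samples = np.random.default_rng(2).uniform(k_lo, k_hi, size=20000)
approx = surrogate(samples)
direct = peak_response(samples)        # direct sampling, for comparison only

print(f"surrogate bounds:       [{approx.min():.5f}, {approx.max():.5f}]")
print(f"direct-sampling bounds: [{direct.min():.5f}, {direct.max():.5f}]")
```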
Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan
2016-04-01
Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
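A minimal sketch of the sub-clustering idea follows: each large cluster is randomly split into sub-clusters of five, the one-way ANOVA estimator of the intraclass correlation coefficient is computed on the sub-clustered binary data, and a Smith-type large-sample standard error is attached. The data are simulated, and the exact standard-error formula and interval construction used in the paper may differ from the approximation assumed here.

import numpy as np
from scipy.stats import norm

def anova_icc(values, cluster_ids):
    # One-way ANOVA estimator of the intraclass correlation coefficient
    values = np.asarray(values, dtype=float)
    groups = [values[cluster_ids == c] for c in np.unique(cluster_ids)]
    k, n = len(groups), np.array([len(g) for g in groups])
    N, grand = n.sum(), values.mean()
    msb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)
    n0 = (N - (n ** 2).sum() / N) / (k - 1)
    return (msb - msw) / (msb + (n0 - 1) * msw), n0, k

rng = np.random.default_rng(2)
k_big, m_big = 8, 400                                    # a few large clusters
p = np.clip(rng.normal(0.30, 0.05, k_big), 0.05, 0.95)   # cluster-level event probabilities
y = np.concatenate([rng.binomial(1, pi, m_big) for pi in p])
big_ids = np.repeat(np.arange(k_big), m_big)

# Randomly split each large cluster into sub-clusters of 5 and re-estimate the ICC
order = np.concatenate([rng.permutation(np.where(big_ids == c)[0]) for c in range(k_big)])
sub_ids = np.repeat(np.arange(y.size // 5), 5)
icc, m, k = anova_icc(y[order], sub_ids)

# Smith-type large-sample standard error for equal sub-cluster size (formula assumed here)
se = np.sqrt(2 * (1 - icc) ** 2 * (1 + (m - 1) * icc) ** 2 / (m * (m - 1) * (k - 1)))
z = norm.ppf(0.975)
print("ICC = %.4f, approximate 95%% CI = (%.4f, %.4f)" % (icc, icc - z * se, icc + z * se))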
Likelihood ratio meta-analysis: New motivation and approach for an old method.
Dormuth, Colin R; Filion, Kristian B; Platt, Robert W
2016-03-01
A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for whether the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies and quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is used to retrieve the total effect estimate and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher-potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed-effect and random-effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. Copyright © 2016 Elsevier Inc. All rights reserved.
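The core of the pooling step can be sketched as follows: approximate each study's log-likelihood for the effect (here a log odds ratio) as a quadratic built from its estimate and standard error, sum the curves, take the maximizing value as the combined estimate, and report the interval of effects whose likelihood ratio against the maximum exceeds a chosen support level. The study estimates and the 1/8 support threshold are illustrative assumptions and need not match those of the paper.

import numpy as np

b  = np.array([-0.12, 0.05, -0.30, -0.08])   # per-study log odds ratios (illustrative)
se = np.array([ 0.10, 0.15,  0.20,  0.12])   # their standard errors

theta = np.linspace(-1.0, 1.0, 20001)
# Normal-approximation log-likelihood of each study, summed into a combined function
total = (-0.5 * ((theta[None, :] - b[:, None]) / se[:, None]) ** 2).sum(axis=0)

pooled = theta[np.argmax(total)]
# Interval of effects supported within a factor of 8 of the best-supported value
keep = total >= total.max() + np.log(1.0 / 8.0)
print("pooled log-OR = %.3f, 1/8 support interval = (%.3f, %.3f)"
      % (pooled, theta[keep][0], theta[keep][-1]))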
Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.
Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L
2012-12-01
Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.
An analysis of general chain systems
NASA Technical Reports Server (NTRS)
Passerello, C. E.; Huston, R. L.
1972-01-01
A general analysis of dynamic systems consisting of connected rigid bodies is presented. The number of bodies and their manner of connection is arbitrary so long as no closed loops are formed. The analysis represents a dynamic finite element method, which is computer-oriented and designed so that nonworking internal constraint forces are automatically eliminated. The method is based upon Lagrange's form of d'Alembert's principle. Shifter matrix transformations are used with the geometrical aspects of the analysis. The method is illustrated with a space manipulator.
NASA Technical Reports Server (NTRS)
Huikuri, H. V.; Makikallio, T. H.; Peng, C. K.; Goldberger, A. L.; Hintze, U.; Moller, M.
2000-01-01
BACKGROUND: Preliminary data suggest that the analysis of R-R interval variability by fractal analysis methods may provide clinically useful information on patients with heart failure. The purpose of this study was to compare the prognostic power of new fractal and traditional measures of R-R interval variability as predictors of death after acute myocardial infarction. METHODS AND RESULTS: Time and frequency domain heart rate (HR) variability measures, along with short- and long-term correlation (fractal) properties of R-R intervals (exponents alpha(1) and alpha(2)) and power-law scaling of the power spectra (exponent beta), were assessed from 24-hour Holter recordings in 446 survivors of acute myocardial infarction with a depressed left ventricular function (ejection fraction ≤35%). During a mean ± SD follow-up period of 685 ± 360 days, 114 patients died (25.6%), with 75 deaths classified as arrhythmic (17.0%) and 28 as nonarrhythmic (6.3%) cardiac deaths. Several traditional and fractal measures of R-R interval variability were significant univariate predictors of all-cause mortality. Reduced short-term scaling exponent alpha(1) was the most powerful R-R interval variability measure as a predictor of all-cause mortality (alpha(1) <0.75, relative risk 3.0, 95% confidence interval 2.5 to 4.2, P<0.001). It remained an independent predictor of death (P<0.001) after adjustment for other postinfarction risk markers, such as age, ejection fraction, NYHA class, and medication. Reduced alpha(1) predicted both arrhythmic death (P<0.001) and nonarrhythmic cardiac death (P<0.001). CONCLUSIONS: Analysis of the fractal characteristics of short-term R-R interval dynamics yields more powerful prognostic information than the traditional measures of HR variability among patients with depressed left ventricular function after an acute myocardial infarction.
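A minimal sketch of detrended fluctuation analysis (DFA), the method behind the short-term scaling exponent alpha(1), is given below for a synthetic R-R interval series. The box-size range of 4 to 11 beats follows common practice for alpha(1) and is an assumption here, as is the synthetic data.

import numpy as np

def dfa_exponent(rr, scales):
    # Detrended fluctuation analysis: slope of log F(n) versus log n
    x = np.cumsum(rr - np.mean(rr))                 # integrated, mean-centered series
    F = []
    for n in scales:
        segs = x[: (len(x) // n) * n].reshape(-1, n)
        t = np.arange(n)
        rms = [np.sqrt(np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)) for s in segs]
        F.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(3)
rr = 0.8 + 0.05 * rng.standard_normal(5000)          # synthetic R-R intervals (s)
alpha1 = dfa_exponent(rr, scales=np.arange(4, 12))   # short-term exponent (boxes of 4-11 beats)
print("alpha1 = %.2f" % alpha1)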
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management have been conducted under coexisting fuzzy, stochastic, and interval uncertainties, solving the resulting linear programming problems by integrating the fuzzy method with the other two has been inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming to support municipal solid waste management. Nguyen's method is used to convert fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as the superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, requiring fewer constraints and significantly less computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between changes in system cost and the uncertainties, which could support further analysis of tradeoffs between waste management cost and system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.
Hamilton, Lindsay; Franklin, Robin J M; Jeffery, Nick D
2007-09-18
Clinical spinal cord injury in domestic dogs provides a model population in which to test the efficacy of putative therapeutic interventions for human spinal cord injury. To achieve this potential a robust method of functional analysis is required so that statistical comparison of numerical data derived from treated and control animals can be achieved. In this study we describe the use of digital motion capture equipment combined with mathematical analysis to derive a simple quantitative parameter - 'the mean diagonal coupling interval' - to describe coordination between forelimb and hindlimb movement. In normal dogs this parameter is independent of size, conformation, speed of walking or gait pattern. We show here that mean diagonal coupling interval is highly sensitive to alterations in forelimb-hindlimb coordination in dogs that have suffered spinal cord injury, and can be accurately quantified, but is unaffected by orthopaedic perturbations of gait. Mean diagonal coupling interval is an easily derived, highly robust measurement that provides an ideal method to compare the functional effect of therapeutic interventions after spinal cord injury in quadrupeds.
Estimating clinical chemistry reference values based on an existing data set of unselected animals.
Dimauro, Corrado; Bonelli, Piero; Nicolussi, Paola; Rassu, Salvatore P G; Cappio-Borlino, Aldo; Pulina, Giuseppe
2008-11-01
In an attempt to standardise the determination of biological reference values, the International Federation of Clinical Chemistry (IFCC) has published a series of recommendations on developing reference intervals. The IFCC recommends the use of an a priori sampling of at least 120 healthy individuals. However, such a high number of samples and laboratory analysis is expensive, time-consuming and not always feasible, especially in veterinary medicine. In this paper, an alternative (a posteriori) method is described and is used to determine reference intervals for biochemical parameters of farm animals using an existing laboratory data set. The method used was based on the detection and removal of outliers to obtain a large sample of animals likely to be healthy from the existing data set. This allowed the estimation of reliable reference intervals for biochemical parameters in Sarda dairy sheep. This method may also be useful for the determination of reference intervals for different species, ages and gender.
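A minimal sketch of the a posteriori approach follows: iteratively remove outliers from an existing, unselected laboratory data set (Tukey's fences are used here as a stand-in for the paper's specific outlier rule) and take the central 95% of the retained values as the reference interval. The analyte, units, and data are illustrative.

import numpy as np

def a_posteriori_reference_interval(values, k=1.5, max_iter=10):
    # Iterative Tukey-fence outlier removal, then a nonparametric 2.5-97.5 percentile interval
    x = np.asarray(values, dtype=float)
    for _ in range(max_iter):
        q1, q3 = np.percentile(x, [25, 75])
        iqr = q3 - q1
        keep = (x >= q1 - k * iqr) & (x <= q3 + k * iqr)
        if keep.all():
            break
        x = x[keep]
    return np.percentile(x, [2.5, 97.5]), x.size

# Illustrative data set of unselected animals: mostly healthy values plus a diseased tail
rng = np.random.default_rng(4)
glucose = np.concatenate([rng.normal(60, 8, 900), rng.normal(110, 25, 100)])  # mg/dL
(lo, hi), n_used = a_posteriori_reference_interval(glucose)
print("reference interval: %.1f-%.1f mg/dL (from %d retained values)" % (lo, hi, n_used))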
Tsukerman, B M; Finkel'shteĭn, I E
1987-07-01
A statistical analysis of prolonged ECG records has been carried out in patients with various heart rhythm and conductivity disorders. The distribution of absolute R-R duration values and relationships between adjacent intervals have been examined. A two-step algorithm has been constructed that excludes anomalous and "suspicious" intervals from a sample of consecutively recorded R-R intervals, until only the intervals between contractions of veritably sinus origin remain in the sample. The algorithm has been developed into a programme for microcomputer Electronica NC-80. It operates reliably even in cases of complex combined rhythm and conductivity disorders.
Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.
Kis, Maria
2005-01-01
In this paper we demonstrate the application of time series models in medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. The ARIMA approach is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancers of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between the time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, estimation based on White's theory, and the continuous-time estimation. Comparing the confidence intervals of the first-order autoregressive parameters, we conclude that the continuous-time estimation model produced much narrower confidence intervals than the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia by decomposing the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons, which supports a seasonal occurrence of childhood leukaemia in Hungary.
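A brief sketch of the two analyses, assuming the statsmodels package is available, is shown below: an ARIMA fit to a simulated monthly mortality-rate series with confidence intervals for the autoregressive parameter, followed by a seasonal decomposition of the kind used for the monthly leukaemia diagnoses. The model order, simulated series, and frequency are illustrative.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(5)
n = 240
t = np.arange(n)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.standard_normal()   # AR(1) noise
y = pd.Series(50 - 0.03 * t + 3 * np.sin(2 * np.pi * t / 12) + noise,
              index=pd.date_range("1995-01-01", periods=n, freq="MS"))

res = ARIMA(y, order=(1, 1, 0)).fit()      # ARIMA(1,1,0) fit
print(res.params)
print(res.conf_int())                      # normal-approximation confidence intervals

decomp = seasonal_decompose(y, model="additive", period=12)
print(decomp.seasonal.head(12))            # estimated monthly seasonal component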
A comparison of hand-wrist bone and cervical vertebral analyses in measuring skeletal maturation.
Gandini, Paola; Mancini, Marta; Andreani, Federico
2006-11-01
To compare skeletal maturation as measured by hand-wrist bone analysis and by cervical vertebral analysis. A radiographic hand-wrist bone analysis and cephalometric cervical vertebral analysis of 30 patients (14 males and 16 females; 7-18 years of age) were examined. The hand-wrist bone analysis was evaluated by the Bjork index, whereas the cervical vertebral analysis was assessed by the cervical vertebral maturation stage (CVMS) method. To define vertebral stages, the analysis consisted of both cephalometric (13 points) and morphologic evaluation of three cervical vertebrae (concavity of second, third, and fourth vertebrae and shape of third and fourth vertebrae). These measurements were then compared with the hand-wrist bone analysis, and the results were statistically analyzed by the Cohen kappa concordance index. The same procedure was repeated after 6 months and showed identical results. The Cohen kappa index obtained (mean +/- SD) was 0.783 +/- 0.098, which is in the significant range. The results show a concordance of 83.3%, considering that the estimated percentage for each case is 23.3%. The results also show a correlation of CVMS I with Bjork stages 1-3 (interval A), CVMS II with Bjork stage 4 (interval B), CVMS III with Bjork stage 5 (interval C), CVMS IV with Bjork stages 6 and 7 (interval D), and CVMS V with Bjork stages 8 and 9 (interval E). Vertebral analysis on a lateral cephalogram is as valid as the hand-wrist bone analysis with the advantage of reducing the radiation exposure of growing subjects.
Optimizing structure of complex technical system by heterogeneous vector criterion in interval form
NASA Astrophysics Data System (ADS)
Lysenko, A. V.; Kochegarov, I. I.; Yurkov, N. K.; Grishko, A. K.
2018-05-01
The article examines methods for the development and multi-criteria choice of the preferred structural variant of a complex technical system at the early stages of its life cycle, when knowledge of the parameters and variables needed to optimize this structure is insufficient. The suggested method takes into consideration the various fuzzy input data connected with the heterogeneous quality criteria of the designed system and the parameters specified by their variation ranges. The approach is based on the combined use of interval analysis, fuzzy set theory, and decision-making theory. As a result, a method for normalizing heterogeneous quality criteria has been developed on the basis of establishing preference relations in interval form. The method of building preference relations in interval form from the vector of heterogeneous quality criteria uses membership functions instead of coefficients weighting the criteria values; the membership functions express the degree of proximity of a realization of the designed system to the efficient, or Pareto optimal, variants. The study analyzes an example of choosing the optimal variant of a complex system using heterogeneous quality criteria.
Pediatric Reference Intervals for Free Thyroxine and Free Triiodothyronine
Jang, Megan; Guo, Tiedong; Soldin, Steven J.
2009-01-01
Background The clinical value of free thyroxine (FT4) and free triiodothyronine (FT3) analysis depends on the reference intervals with which they are compared. We determined age- and sex-specific reference intervals for neonates, infants, and children 0–18 years of age for FT4 and FT3 using tandem mass spectrometry. Methods Reference intervals were calculated for serum FT4 (n = 1426) and FT3 (n = 1107) obtained from healthy children between January 1, 2008, and June 30, 2008, from Children's National Medical Center and Georgetown University Medical Center Bioanalytical Core Laboratory, Washington, DC. Serum samples were analyzed using isotope dilution liquid chromatography tandem mass spectrometry (LC/MS/MS) with deuterium-labeled internal standards. Results FT4 reference intervals were very similar for males and females of all ages and ranged between 1.3 and 2.4 ng/dL for children 1 to 18 years old. FT4 reference intervals for 1- to 12-month-old infants were 1.3–2.8 ng/dL. These 2.5 to 97.5 percentile intervals were much tighter than reference intervals obtained using immunoassay platforms 0.48–2.78 ng/dL for males and 0.85–2.09 ng/dL for females. Similarly, FT3 intervals were consistent and similar for males and females and for all ages, ranging between 1.5 pg/mL and approximately 6.0 pg/mL for children 1 month of age to 18 years old. Conclusions This is the first study to provide pediatric reference intervals of FT4 and FT3 for children from birth to 18 years of age using LC/MS/MS. Analysis using LC/MS/MS provides more specific quantification of thyroid hormones. A comparison of the ultrafiltration tandem mass spectrometric method with equilibrium dialysis showed very good correlation. PMID:19583487
Statistical physics approaches to financial fluctuations
NASA Astrophysics Data System (ADS)
Wang, Fengzhong
2009-12-01
Complex systems attract many researchers from various scientific fields. Financial markets are one of these widely studied complex systems. Statistical physics, which was originally developed to study large systems, provides novel ideas and powerful methods to analyze financial markets. The study of financial fluctuations characterizes market behavior, and helps to better understand the underlying market mechanism. Our study focuses on volatility, a fundamental quantity to characterize financial fluctuations. We examine equity data of the entire U.S. stock market during 2001 and 2002. To analyze the volatility time series, we develop a new approach, called return interval analysis, which examines the time intervals between two successive volatilities exceeding a given value threshold. We find that the return interval distribution displays scaling over a wide range of thresholds. This scaling is valid for a range of time windows, from one minute up to one day. Moreover, our results are similar for commodities, interest rates, currencies, and for stocks of different countries. Further analysis shows some systematic deviations from a scaling law, which we can attribute to nonlinear correlations in the volatility time series. We also find a memory effect in return intervals for different time scales, which is related to the long-term correlations in the volatility. To further characterize the mechanism of price movement, we simulate the volatility time series using two different models, fractionally integrated generalized autoregressive conditional heteroscedasticity (FIGARCH) and fractional Brownian motion (fBm), and test these models with the return interval analysis. We find that both models can mimic time memory but only fBm shows scaling in the return interval distribution. In addition, we examine the volatility of daily opening to closing and of closing to opening. We find that each volatility distribution has a power law tail. Using the detrended fluctuation analysis (DFA) method, we show long-term auto-correlations in these volatility time series. We also analyze return, the actual price changes of stocks, and find that the returns over the two sessions are often anti-correlated.
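The return interval analysis described above can be sketched in a few lines: record the waiting times between successive volatility values exceeding a threshold q and compare the distributions of intervals scaled by their means across thresholds. The GARCH(1,1)-like volatility proxy below is illustrative and is not the equity data used in the study.

import numpy as np

def return_intervals(vol, q):
    # Times between successive volatility values exceeding the threshold q
    return np.diff(np.flatnonzero(vol > q))

rng = np.random.default_rng(6)
n, omega, a, b = 100000, 1e-5, 0.08, 0.9
r, h = np.zeros(n), np.full(n, omega / (1 - a - b))
for i in range(1, n):
    h[i] = omega + a * r[i - 1] ** 2 + b * h[i - 1]
    r[i] = np.sqrt(h[i]) * rng.standard_normal()
vol = np.abs(r)                                       # volatility proxy: absolute returns

# Scaled return-interval distributions for several thresholds collapse if scaling holds
for q in np.quantile(vol, [0.90, 0.95, 0.99]):
    tau = return_intervals(vol, q)
    scaled = tau / tau.mean()
    print("q=%.4f  mean interval=%.1f  P(scaled>2)=%.3f" % (q, tau.mean(), (scaled > 2).mean()))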
van Daalen, Marjolijn A; de Kat, Dorothée S; Oude Grotebevelsborg, Bernice F L; de Leeuwe, Roosje; Warnaar, Jeroen; Oostra, Roelof Jan; M Duijst-Heesters, Wilma L J
2017-03-01
This study aimed to develop an aquatic decomposition scoring (ADS) method and investigated the predictive value of this method in estimating the postmortem submersion interval (PMSI) of bodies recovered from the North Sea. The method, consisting of an ADS item list and a pictorial reference atlas, showed high interobserver agreement (Krippendorff's alpha ≥ 0.93) and hence proved to be valid. The scoring method was applied to data collected from closed cases (cases in which the PMSI was known) concerning bodies recovered from the North Sea from 1990 to 2013. Thirty-eight cases met the inclusion criteria and were scored by quantifying the observed total aquatic decomposition score (TADS). Statistical analysis demonstrated that TADS accurately predicts the PMSI (p < 0.001), confirming that the decomposition process in the North Sea is strongly correlated with time. © 2017 American Academy of Forensic Sciences.
Development of a New Paradigm for Analysis of Disdrometric Data
NASA Astrophysics Data System (ADS)
Larsen, Michael L.; Kostinski, Alexander B.
2017-04-01
A number of disdrometers currently on the market are able to characterize hydrometeors on a drop-by-drop basis with arrival timestamps associated with each arriving hydrometeor. This allows an investigator to parse a time series into disjoint intervals that have equal numbers of drops, instead of the traditional subdivision into equal time intervals. Such a "fixed-N" partitioning of the data can provide several advantages over the traditional equal time binning method, especially within the context of quantifying measurement uncertainty (which typically scales with the number of hydrometeors in each sample). An added bonus is the natural elimination of measurements that are devoid of all drops. This analysis method is investigated by utilizing data from a dense array of disdrometers located near Charleston, South Carolina, USA. Implications for the usefulness of this method in future studies are explored.
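The contrast between equal-time binning and fixed-N partitioning can be sketched directly on a list of arrival timestamps, as below; the simulated, time-varying drop arrival process and the choice N = 500 are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(7)

# Simulated drop arrival timestamps (seconds) over one hour with a time-varying rate (thinning)
t, arrivals, rate_max = 0.0, [], 20.0
while t < 3600.0:
    t += rng.exponential(1.0 / rate_max)
    if rng.random() < 0.5 * (1 + np.sin(2 * np.pi * t / 900.0)):
        arrivals.append(t)
arrivals = np.array(arrivals)

# Traditional equal-time bins: the drop count (and hence the uncertainty) varies per bin
counts, _ = np.histogram(arrivals, bins=np.arange(0, 3601, 60))
print("equal-time bins: counts range %d-%d" % (counts.min(), counts.max()))

# Fixed-N partitioning: every interval contains N drops, and the durations vary instead
N = 500
boundaries = arrivals[::N]
durations = np.diff(boundaries)
print("fixed-N bins: %d intervals, durations %.1f-%.1f s"
      % (durations.size, durations.min(), durations.max()))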
Effects of High Intensity Interval Training on Increasing Explosive Power, Speed, and Agility
NASA Astrophysics Data System (ADS)
Fajrin, F.; Kusnanik, N. W.; Wijono
2018-01-01
High Intensity Interval Training (HIIT) is a type of exercise that alternates high-intensity bouts with low-intensity bouts over defined time intervals. This type of training is effective and efficient for improving physical components of fitness. Improving athletes' achievement is closely related to improving these physical components, so selecting a good training method is very helpful. This study analyzes the effects of HIIT on increasing explosive power, speed, and agility. The research is quantitative, using a quasi-experimental method with a Matching-Only Design, and the data were analyzed with paired-sample t-tests. After six weeks of treatment, the results showed significant increases in explosive power, speed, and agility. The HIIT program in this study used plyometric exercises as the high-intensity component and jogging as the low- to moderate-intensity component. The improvements are attributed to neuromuscular adaptations that increase muscle strength and performance. From the data analysis, the researchers concluded that High Intensity Interval Training significantly increases leg explosive power, speed, and agility.
A PDF-based classification of gait cadence patterns in patients with amyotrophic lateral sclerosis.
Wu, Yunfeng; Ng, Sin Chun
2010-01-01
Amyotrophic lateral sclerosis (ALS) is a neurological disease caused by the degeneration of motor neurons. During the course of such a progressive disease, it becomes difficult for ALS patients to regulate normal locomotion, so gait stability is perturbed. This paper presents a pilot statistical study of the gait cadence (stride interval) in ALS. The probability density functions (PDFs) of the stride interval were first estimated with the nonparametric Parzen-window method. We computed the mean of the left-foot stride interval and the modified Kullback-Leibler divergence (MKLD) from the estimated PDFs. The analysis results suggested that both statistical parameters were significantly altered in ALS, and that a least-squares support vector machine (LS-SVM) can effectively distinguish the stride patterns of ALS patients from those of healthy controls, with an accuracy of 82.8% and an area of 0.87 under the receiver operating characteristic curve.
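A minimal sketch of the first two steps, Parzen-window density estimation and a divergence between the estimated PDFs, is given below using scipy's Gaussian kernel density estimate and a symmetrized Kullback-Leibler divergence; the symmetrization stands in for the paper's modified KLD, whose exact definition is not given in the abstract, and the stride-interval samples are synthetic.

import numpy as np
from scipy.stats import gaussian_kde

def symmetric_kld(sample_p, sample_q, n_grid=512):
    # Symmetrized Kullback-Leibler divergence between two Parzen-window PDF estimates
    lo = min(sample_p.min(), sample_q.min())
    hi = max(sample_p.max(), sample_q.max())
    grid = np.linspace(lo, hi, n_grid)
    p = gaussian_kde(sample_p)(grid) + 1e-12
    q = gaussian_kde(sample_q)(grid) + 1e-12
    p /= np.trapz(p, grid)
    q /= np.trapz(q, grid)
    kl_pq = np.trapz(p * np.log(p / q), grid)
    kl_qp = np.trapz(q * np.log(q / p), grid)
    return 0.5 * (kl_pq + kl_qp)

rng = np.random.default_rng(8)
stride_control = rng.normal(1.10, 0.03, 300)   # stride intervals (s), healthy-like
stride_als     = rng.normal(1.25, 0.08, 300)   # slower and more variable, ALS-like
print("mean(control)=%.3f  mean(ALS)=%.3f  sym-KLD=%.3f"
      % (stride_control.mean(), stride_als.mean(),
         symmetric_kld(stride_control, stride_als)))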
Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond
Wiens, Stefan; Nilsson, Mats E.
2016-01-01
Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful because it can be used to test specific questions of central interest in studies with factorial designs. It weighs several means and combines them into one or two sets that can be tested with t tests. The effect size produced by a contrast analysis is simply the difference between means. The CI of the effect size informs directly about direction, hypothesis exclusion, and the relevance of the effects of interest. However, any interpretation in terms of precision or likelihood requires the use of likelihood intervals or credible intervals (Bayesian). These various intervals and even a Bayesian t test can be obtained easily with free software. This tutorial reviews these methods to guide researchers in answering the following questions: When I analyze mean differences in factorial designs, where can I find the effects of central interest, and what can I learn about their effect sizes? PMID:29805179
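The mechanics of a contrast analysis can be sketched as below for a balanced 2x2 between-subjects design: contrast weights are applied to the cell means, the weighted combination is tested with a t statistic based on the pooled within-cell variance, and the effect size (the difference between the two sets of means) is reported with its confidence interval. The data, weights, and balanced-design assumption are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
cells = {name: rng.normal(mu, 1.0, 30)                      # 2x2 design, n = 30 per cell
         for name, mu in [("A1B1", 0.0), ("A1B2", 0.3), ("A2B1", 0.5), ("A2B2", 1.2)]}
w = {"A1B1": -0.5, "A1B2": -0.5, "A2B1": 0.5, "A2B2": 0.5}  # main-effect contrast for factor A

means = {k: v.mean() for k, v in cells.items()}
n = {k: v.size for k, v in cells.items()}
df = sum(n.values()) - len(cells)
mse = sum(((v - v.mean()) ** 2).sum() for v in cells.values()) / df   # pooled within-cell variance

psi = sum(w[k] * means[k] for k in cells)                   # difference between the two sets of means
se = np.sqrt(mse * sum(w[k] ** 2 / n[k] for k in cells))
t = psi / se
p = 2 * stats.t.sf(abs(t), df)
half = stats.t.ppf(0.975, df) * se
print("contrast = %.3f, t(%d) = %.2f, p = %.4f, 95%% CI = (%.3f, %.3f)"
      % (psi, df, t, p, psi - half, psi + half))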
Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S.; Perera, Subashan
2016-01-01
Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a biomarker of related diseases. A previous study showed that the average wavelet method provides the most accurate estimates of this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of sixteen mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents that cover a range of physiologically conceivable fractal signals. Five candidate mother wavelets were chosen for their good performance on the mean square error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time series and relatively low errors for short signal lengths. It can be considered the most suitable mother function without the burden of considering the signal length. PMID:27960102
Fung, Tak; Keenan, Kevin
2014-01-01
The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
Visibility graph analysis of heart rate time series and bio-marker of congestive heart failure
NASA Astrophysics Data System (ADS)
Bhaduri, Anirban; Bhaduri, Susmita; Ghosh, Dipak
2017-09-01
The RR-interval time series in congestive heart failure has been studied with a variety of methods, including non-linear ones. In this article the cardiac dynamics of the heart beat are explored in the light of complex network analysis, namely the visibility graph method. Heart beat (RR interval) time series data taken from the Physionet database [46, 47], belonging to two groups of subjects, diseased (congestive heart failure, 29 subjects) and normal (54 subjects), are analyzed with this technique. The overall results show that a quantitative parameter can significantly differentiate the diseased subjects from the normal subjects as well as different stages of the disease. Further, when the data are split into periods of around 1 hour each and analyzed separately, the same consistent differences appear. This quantitative parameter obtained using visibility graph analysis can therefore be used as a potential bio-marker as well as an alarm-generation mechanism for predicting the onset of congestive heart failure.
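The construction of a natural visibility graph from an RR-interval series can be sketched as follows: each beat is a node, and two nodes are linked whenever the straight line between them passes above all intermediate samples. The mean node degree printed at the end is only one simple summary of the graph; the specific quantitative parameter used in the paper is not identified in the abstract, and the RR series here is synthetic.

import numpy as np

def visibility_degrees(x):
    # Natural visibility graph: O(n^2) pairwise visibility check, returns node degrees
    x = np.asarray(x, dtype=float)
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            k = np.arange(a + 1, b)
            line = x[b] + (x[a] - x[b]) * (b - k) / (b - a)   # connecting line evaluated at k
            if k.size == 0 or np.all(x[k] < line):
                deg[a] += 1
                deg[b] += 1
    return deg

rng = np.random.default_rng(10)
rr = 0.8 + 0.05 * rng.standard_normal(400)                    # synthetic RR intervals (s)
deg = visibility_degrees(rr)
print("nodes = %d, mean degree = %.2f, max degree = %d" % (len(deg), deg.mean(), deg.max()))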
Kim, Tae Kyung; Kim, Hyung Wook; Kim, Su Jin; Ha, Jong Kun; Jang, Hyung Ha; Hong, Young Mi; Park, Su Bum; Choi, Cheol Woong; Kang, Dae Hwan
2014-01-01
Background/Aims The quality of bowel preparation (QBP) is an important factor in performing a successful colonoscopy. Several factors influencing QBP have been reported; however, some, such as the optimal preparation-to-colonoscopy time interval, remain controversial. This study aimed to determine the factors influencing QBP and the optimal time interval for the full-dose polyethylene glycol (PEG) preparation. Methods A total of 165 patients who underwent colonoscopy from June 2012 to August 2012 were prospectively evaluated. The QBP was assessed using the Ottawa Bowel Preparation Scale (Ottawa) score, and several factors influencing the QBP were analyzed. Results Colonoscopies with a time interval of 5 to 6 hours had the best Ottawa scores in all parts of the colon. Patients with time intervals of 6 hours or less had better QBP than those with time intervals of more than 6 hours (p=0.046). In the multivariate analysis, the time interval (odds ratio, 1.897; 95% confidence interval, 1.006 to 3.577; p=0.048) was the only significant contributor to a satisfactory bowel preparation. Conclusions The optimal time interval was 5 to 6 hours for the full-dose PEG method, and the time interval was the only significant contributor to a satisfactory bowel preparation. PMID:25368750
Reference Intervals of Hematology and Clinical Chemistry Analytes for 1-Year-Old Korean Children
Lee, Hye Ryun; Roh, Eun Youn; Chang, Ju Young
2016-01-01
Background Reference intervals need to be established according to age. We established reference intervals of hematology and chemistry from community-based healthy 1-yr-old children and analyzed their iron status according to the feeding methods during the first six months after birth. Methods A total of 887 children who received a medical check-up between 2010 and 2014 at Boramae Hospital (Seoul, Korea) were enrolled. A total of 534 children (247 boys and 287 girls) were enrolled as reference individuals after the exclusion of data obtained from children with suspected iron deficiency. Hematology and clinical chemistry analytes were measured, and the reference value of each analyte was estimated by using parametric (mean±2 SD) or nonparametric methods (2.5-97.5th percentile). Iron, total iron-binding capacity, and ferritin were measured, and transferrin saturation was calculated. Results As there were no differences in the mean values between boys and girls, we established the reference intervals for 1-yr-old children regardless of sex. The analysis of serum iron status according to feeding methods during the first six months revealed higher iron, ferritin, and transferrin saturation levels in children exclusively or mainly fed formula than in children exclusively or mainly fed breast milk. Conclusions We established reference intervals of hematology and clinical chemistry analytes from community-based healthy children at one year of age. These reference intervals will be useful for interpreting results of medical check-ups at one year of age. PMID:27374715
Sharifi, Maryam; Ghassemi, Amirreza; Bayani, Shahin
2015-01-01
Success of orthodontic miniscrews in providing stable anchorage depends on their stability. The purpose of this study was to assess the effect of insertion method and postinsertion time interval on the removal torque of miniscrews as an indicator of their stability. Seventy-two miniscrews (Jeil Medical) were inserted into the femoral bones of three male German Shepherd dogs and assigned to nine groups of eight miniscrews. Three insertion methods were tested: hand-driven, motor-driven with 5.0-Ncm insertion torque, and motor-driven with 20.0-Ncm insertion torque. Three time intervals of 0, 2, and 6 weeks between miniscrew insertion and removal were tested as well. Removal torque values were measured in newton centimeters by a removal torque tester (IMADA). Data were analyzed by one-way analysis of variance (ANOVA) followed by the Bonferroni post hoc test at a .05 level of significance. A miniscrew survival rate of 93% was observed in this study. The highest mean removal torque among the three postinsertion intervals (2.4 ± 0.59 Ncm) was obtained immediately after miniscrew insertion, with a statistically significant difference from the other two time intervals (P < .001). No significant differences were observed among the insertion methods in this regard (P = .46). The stability of miniscrews was not affected by the insertion method. However, of the postinsertion time intervals, the highest removal torque values were obtained immediately after insertion.
Xie, Bin; Yan, Xianfeng
2017-01-01
Purpose. The aim of this study was to compare the effects of high-intensity interval training (INTERVAL) and moderate-intensity continuous training (CONTINUOUS) on aerobic capacity in cardiac patients. Methods. A meta-analysis identified by searching the PubMed, Cochrane Library, EMBASE, and Web of Science databases from inception through December 2016 compared the effects of INTERVAL and CONTINUOUS among cardiac patients. Results. Twenty-one studies involving 736 participants with cardiac diseases were included. Compared with CONTINUOUS, INTERVAL was associated with greater improvement in peak VO2 (mean difference 1.76 mL/kg/min, 95% confidence interval 1.06 to 2.46 mL/kg/min, p < 0.001) and VO2 at AT (mean difference 0.90 mL/kg/min, 95% confidence interval 0.0 to 1.72 mL/kg/min, p = 0.03). No significant difference between the INTERVAL and CONTINUOUS groups was observed in terms of peak heart rate, peak minute ventilation, VE/VCO2 slope and respiratory exchange ratio, body mass, systolic or diastolic blood pressure, triglyceride or low- or high-density lipoprotein cholesterol level, flow-mediated dilation, or left ventricular ejection fraction. Conclusions. This study showed that INTERVAL improves aerobic capacity more effectively than does CONTINUOUS in cardiac patients. Further studies with larger samples are needed to confirm our observations. PMID:28386556
Wang, Gaopin; Liu, Renguang; Chang, Qinghua; Xu, Zhaolong; Zhang, Yingjie; Pan, Dianzhu
2017-03-15
The micro waveform of the His bundle potential cannot yet be recorded beat-to-beat on the surface electrocardiogram. We have found that the micro-wavelets before the QRS complex may be related to atrioventricular conduction system potentials. This study explores whether the His bundle potential can be noninvasively recorded on the surface electrocardiogram. We randomized 65 patients undergoing radiofrequency catheter ablation of paroxysmal supraventricular tachycardia (excluding overt Wolff-Parkinson-White syndrome) to receive a "conventional electrocardiogram" and a "new electrocardiogram" before the procedure. The His bundle electrogram was collected during the procedure. The PAs, AHs, and HVs intervals (the PA, AH, and HV intervals recorded on the surface "new electrocardiogram") were compared with the PA, AH, and HV intervals recorded on the His bundle electrogram. There was no difference (P > 0.05) between the HVs interval (49.63 ± 6.19 ms) and the HV interval (49.35 ± 6.49 ms). Correlational analysis found that the HVs interval was significantly positively associated with the HV interval (r = 0.929; P < 0.01). His bundle potentials can be noninvasively recorded on the surface electrocardiogram. Noninvasive His bundle potential tracing might represent a new method for locating the site of atrioventricular block and identifying the origin of a wide QRS complex.
Dynamical analysis of the avian-human influenza epidemic model using the semi-analytical method
NASA Astrophysics Data System (ADS)
Jabbari, Azizeh; Kheiri, Hossein; Bekir, Ahmet
2015-03-01
In this work, we present the dynamic behavior of an avian-human influenza epidemic model using an efficient computational algorithm, namely the multistage differential transform method (MsDTM). The MsDTM is used here as an algorithm for approximating the solutions of the avian-human influenza epidemic model in a sequence of time intervals. To show the efficiency of the method, the obtained numerical results are compared with the fourth-order Runge-Kutta method (RK4M) and differential transform method (DTM) solutions. It is shown that the MsDTM has the advantage of giving an analytical form of the solution within each time interval, which is not possible with purely numerical techniques like RK4M.
Kuiper, Gerhardus J A J M; Houben, Rik; Wetzels, Rick J H; Verhezen, Paul W M; Oerle, Rene van; Ten Cate, Hugo; Henskens, Yvonne M C; Lancé, Marcus D
2017-11-01
Low platelet counts and hematocrit levels hinder whole blood point-of-care testing of platelet function. Thus far, no reference ranges for MEA (multiple electrode aggregometry) and PFA-100 (platelet function analyzer 100) devices exist for low ranges. Through dilution methods of volunteer whole blood, platelet function at low ranges of platelet count and hematocrit levels was assessed on MEA for four agonists and for PFA-100 in two cartridges. Using (multiple) regression analysis, 95% reference intervals were computed for these low ranges. Low platelet counts affected MEA in a positive correlation (all agonists showed r² ≥ 0.75) and PFA-100 in an inverse correlation (closure times were prolonged with lower platelet counts). Lowered hematocrit did not affect MEA testing, except for arachidonic acid activation (ASPI), which showed a weak positive correlation (r² = 0.14). Closure time on PFA-100 testing was inversely correlated with hematocrit for both cartridges. Regression analysis revealed different 95% reference intervals in comparison with originally established intervals for both MEA and PFA-100 in low platelet or hematocrit conditions. Multiple regression analysis of ASPI and both tests on the PFA-100 for combined low platelet and hematocrit conditions revealed that only PFA-100 testing should be adjusted for both thrombocytopenia and anemia. 95% reference intervals were calculated using multiple regression analysis. However, coefficients of determination of PFA-100 were poor, and some variance remained unexplained. Thus, in this pilot study using (multiple) regression analysis, we could establish reference intervals of platelet function in anemia and thrombocytopenia conditions on PFA-100 and in thrombocytopenia conditions on MEA.
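The regression step can be sketched as below: fit an ordinary least-squares model of the platelet-function readout on platelet count and report the 95% prediction interval at a given count as the count-dependent reference interval. The synthetic calibration data, single predictor, and OLS prediction interval are simplifying assumptions; the paper's multiple-regression models additionally include hematocrit.

import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Synthetic calibration data: aggregation (arbitrary units) rises with platelet count
platelets = rng.uniform(10, 150, 80)                      # x10^9/L, low range
aggregation = 5 + 0.6 * platelets + rng.normal(0, 8, 80)

# Ordinary least-squares fit
X = np.column_stack([np.ones_like(platelets), platelets])
beta, res_ss, *_ = np.linalg.lstsq(X, aggregation, rcond=None)
dof = X.shape[0] - X.shape[1]
sigma2 = res_ss[0] / dof
XtX_inv = np.linalg.inv(X.T @ X)

def prediction_interval(x_new, alpha=0.05):
    # 95% prediction (reference) interval for a new observation at platelet count x_new
    x = np.array([1.0, x_new])
    fit = x @ beta
    se = np.sqrt(sigma2 * (1.0 + x @ XtX_inv @ x))
    t = stats.t.ppf(1 - alpha / 2, dof)
    return fit - t * se, fit + t * se

for count in (20, 50, 100):
    lo, hi = prediction_interval(count)
    print("platelets %3d: 95%% reference interval %.1f-%.1f" % (count, lo, hi))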
The orbital evolution of NEA 30825 1990 TG1
NASA Astrophysics Data System (ADS)
Timoshkova, E. I.
2008-02-01
The orbital evolution of the near-Earth asteroid (NEA) 30825 1990 TG1 has been studied by numerical integration of the equations of its motion over the 100 000-year time interval with allowance for perturbations from eight major planets and Pluto, and the variations in its osculating orbit over this time interval were determined. The numerical integrations were performed using two methods: the Bulirsch-Stoer method and the Everhart method. The comparative analysis of the two resulting orbital evolutions of motion is presented for the time interval examined. The evolution of the asteroid motion is qualitatively the same for both variants, but the rate of evolution of the orbital elements is different. Our research confirms the known fact that the application of different integrators to the study of the long-term evolution of the NEA orbit may lead to different evolution tracks.
Huffman, Raegan L.
2002-01-01
Ground-water samples were collected in April 1999 at Naval Air Station Whidbey Island, Washington, with passive diffusion samplers and a submersible pump to compare concentrations of volatile organic compounds (VOCs) in water samples collected using the two sampling methods. Single diffusion samplers were installed in wells with 10-foot screened intervals, and multiple diffusion samplers were installed in wells with 20- to 40-foot screened intervals. The diffusion samplers were recovered after 20 days and the wells were then sampled using a submersible pump. VOC concentrations in the 10-foot screened wells in water samples collected with diffusion samplers closely matched concentrations in samples collected with the submersible pump. Analysis of VOC concentrations in samples collected from the 20- to 40-foot screened wells with multiple diffusion samplers indicated vertical concentration variation within the screened interval, whereas the analysis of VOC concentrations in samples collected with the submersible pump indicated mixing during pumping. The results obtained using the two sampling methods indicate that the samples collected with the diffusion samplers were comparable with and can be considerably less expensive than samples collected using a submersible pump.
Guo, Guang-Hui; Wu, Feng-Chang; He, Hong-Ping; Feng, Cheng-Lian; Zhang, Rui-Qing; Li, Hui-Xian
2012-04-01
Probabilistic approaches, such as Monte Carlo Sampling (MCS) and Latin Hypercube Sampling (LHS), and non-probabilistic approaches, such as interval analysis, fuzzy set theory and variance propagation, were used to characterize uncertainties associated with the risk assessment of ΣPAH8 in surface water of Taihu Lake. The results from MCS and LHS were represented by probability distributions of hazard quotients of ΣPAH8 in surface waters of Taihu Lake. The probability distributions of the hazard quotient obtained from MCS and LHS indicated that the confidence intervals of the hazard quotient at the 90% confidence level were in the ranges of 0.00018-0.89 and 0.00017-0.92, with means of 0.37 and 0.35, respectively. In addition, the probabilities that the hazard quotients from MCS and LHS exceed the threshold of 1 were 9.71% and 9.68%, respectively. The sensitivity analysis suggested the toxicity data contributed the most to the resulting distribution of quotients. The hazard quotient of ΣPAH8 to aquatic organisms ranged from 0.00017 to 0.99 using interval analysis. The confidence interval was (0.0015, 0.0163) at the 90% confidence level calculated using fuzzy set theory, and (0.00016, 0.88) at the 90% confidence level based on variance propagation. These results indicated that the ecological risk of ΣPAH8 to aquatic organisms is low. Each method has its own advantages and limitations, since each is based on a different theory; therefore, the appropriate method should be selected case by case to quantify the effects of uncertainties on the ecological risk assessment. The approach based on probabilistic theory was selected as the most appropriate method to assess the risk of ΣPAH8 in surface water of Taihu Lake, providing an important scientific foundation for risk management and control of organic pollutants in water.
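The probabilistic part of the comparison can be sketched as follows: draw the hazard quotient HQ = exposure concentration / no-effect concentration either by plain Monte Carlo sampling or by Latin hypercube sampling (stratified uniforms pushed through the inverse CDFs), then summarize the 90% interval and the exceedance probability P(HQ > 1). The lognormal distributions and their parameters are illustrative, not the Taihu Lake data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
n = 10000

def latin_hypercube_uniform(n, rng):
    # One-dimensional Latin hypercube sample of U(0,1): one point per equal-probability stratum
    return rng.permutation((np.arange(n) + rng.random(n)) / n)

def summarize(hq, label):
    lo, hi = np.percentile(hq, [5, 95])
    print("%s: mean=%.3f  90%% interval=(%.4f, %.4f)  P(HQ>1)=%.2f%%"
          % (label, hq.mean(), lo, hi, 100 * (hq > 1).mean()))

exp_mu, exp_sigma = np.log(0.05), 1.0       # exposure concentration (ug/L), lognormal
tox_mu, tox_sigma = np.log(0.50), 0.8       # predicted no-effect concentration (ug/L), lognormal

# Plain Monte Carlo sampling
hq_mcs = rng.lognormal(exp_mu, exp_sigma, n) / rng.lognormal(tox_mu, tox_sigma, n)
summarize(hq_mcs, "MCS")

# Latin hypercube sampling: stratified uniforms pushed through the inverse CDFs
u1, u2 = latin_hypercube_uniform(n, rng), latin_hypercube_uniform(n, rng)
hq_lhs = stats.lognorm.ppf(u1, s=exp_sigma, scale=np.exp(exp_mu)) \
       / stats.lognorm.ppf(u2, s=tox_sigma, scale=np.exp(tox_mu))
summarize(hq_lhs, "LHS")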
Chu, Catherine. J.; Chan, Arthur; Song, Dan; Staley, Kevin J.; Stufflebeam, Steven M.; Kramer, Mark A.
2017-01-01
Background High frequency oscillations are emerging as a clinically important indicator of epileptic networks. However, manual detection of these high frequency oscillations is difficult, time consuming, and subjective, especially in the scalp EEG, thus hindering further clinical exploration and application. Semi-automated detection methods augment manual detection by reducing inspection to a subset of time intervals. We propose a new method to detect high frequency oscillations that co-occur with interictal epileptiform discharges. New Method The new method proceeds in two steps. The first step identifies candidate time intervals during which high frequency activity is increased. The second step computes a set of seven features for each candidate interval. These features require that the candidate event contain a high frequency oscillation approximately sinusoidal in shape, with at least three cycles, that co-occurs with a large amplitude discharge. Candidate events that satisfy these features are stored for validation through visual analysis. Results We evaluate the detector performance in simulation and on ten examples of scalp EEG data, and show that the proposed method successfully detects spike-ripple events, with high positive predictive value, low false positive rate, and high intra-rater reliability. Comparison with Existing Method The proposed method is less sensitive than the existing method of visual inspection, but much faster and much more reliable. Conclusions Accurate and rapid detection of high frequency activity increases the clinical viability of this rhythmic biomarker of epilepsy. The proposed spike-ripple detector rapidly identifies candidate spike-ripple events, thus making clinical analysis of prolonged, multielectrode scalp EEG recordings tractable. PMID:27988323
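The two-step structure described above can be sketched as follows: band-pass the signal in a high-frequency band, flag intervals where the analytic-signal envelope exceeds a threshold, and keep only candidates that contain at least three oscillation cycles and coincide with a large-amplitude discharge in the broadband trace. The band edges, thresholds, and the reduced feature set are simplifying assumptions and do not reproduce the seven features of the proposed detector; the EEG segment is synthetic.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def spike_ripple_candidates(eeg, fs, band=(80.0, 200.0), env_k=3.0, spike_k=4.0):
    b, a = butter(4, band, btype="bandpass", fs=fs)
    env = np.abs(hilbert(filtfilt(b, a, eeg)))               # high-frequency envelope
    above = env > env.mean() + env_k * env.std()             # step 1: candidate intervals
    d = np.diff(above.astype(int))
    starts = np.flatnonzero(d == 1) + 1
    stops = np.flatnonzero(d == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        stops = np.r_[stops, above.size]
    events = []
    for s, e in zip(starts, stops):                          # step 2: simple feature checks
        cycles = (e - s) / fs * np.mean(band)                # rough cycle count in the band
        big_discharge = np.max(np.abs(eeg[s:e])) > spike_k * eeg.std()
        if cycles >= 3 and big_discharge:
            events.append((s / fs, e / fs))
    return events

fs = 1000
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(13)
eeg = 20 * rng.standard_normal(t.size)                       # background activity
m = (t > 5.0) & (t < 5.05)
eeg[m] += 150 * np.exp(-((t[m] - 5.02) / 0.01) ** 2)         # sharp interictal discharge
eeg[m] += 40 * np.sin(2 * np.pi * 120 * (t[m] - 5.0))        # co-occurring 120 Hz ripple
print(spike_ripple_candidates(eeg, fs))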
Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W
2016-05-01
In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management through coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs in water quality model as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management for the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. Results obtained demonstrate compromises between the system benefit and the system failure risk due to inherent uncertainties that exist in various system components. Moreover, information about pollutant emission is accomplished, which would help managers to adjust production patterns of regional industry and local policies considering interactions of water quality requirement, economic benefit, and industry structure.
Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks
2016-01-01
Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial are the input for the method. This marginal likelihood based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330
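A stripped-down sketch of the pooling-and-product step is given below: the path-a and path-b coefficients are pooled across trials with inverse-variance weights (a fixed-effect simplification of the bivariate random-effects model in the paper), and a Monte Carlo confidence interval for the mediated effect a*b is obtained by sampling the pooled coefficients from normal distributions. The trial estimates are illustrative.

import numpy as np

rng = np.random.default_rng(14)

# Per-trial estimates of path a (X -> M) and path b (M -> Y) with their standard errors
a,  se_a = np.array([0.42, 0.35, 0.50]), np.array([0.10, 0.12, 0.15])
b,  se_b = np.array([0.30, 0.22, 0.28]), np.array([0.08, 0.09, 0.11])

def pool(est, se):
    # Inverse-variance (fixed-effect) pooling; the paper's random-effects model is more general
    w = 1.0 / se ** 2
    return np.sum(w * est) / np.sum(w), np.sqrt(1.0 / np.sum(w))

a_bar, se_a_bar = pool(a, se_a)
b_bar, se_b_bar = pool(b, se_b)

# Monte Carlo confidence interval for the mediated effect a*b
draws = rng.normal(a_bar, se_a_bar, 100000) * rng.normal(b_bar, se_b_bar, 100000)
lo, hi = np.percentile(draws, [2.5, 97.5])
print("pooled mediated effect = %.3f, 95%% Monte Carlo CI = (%.3f, %.3f)" % (a_bar * b_bar, lo, hi))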
Asymmetric multiscale detrended fluctuation analysis of California electricity spot price
NASA Astrophysics Data System (ADS)
Fan, Qingju
2016-01-01
In this paper, we develop a new method called asymmetric multiscale detrended fluctuation analysis, which is an extension of asymmetric detrended fluctuation analysis (A-DFA) and can assess the asymmetry correlation properties of series with a variable scale range. We investigate the asymmetric correlations in California 1999-2000 power market after filtering some periodic trends by empirical mode decomposition (EMD). Our findings show the coexistence of symmetric and asymmetric correlations in the price series of 1999 and strong asymmetric correlations in 2000. What is more, we detect subtle correlation properties of the upward and downward price series for most larger scale intervals in 2000. Meanwhile, the fluctuations of Δα(s) (asymmetry) and | Δα(s) | (absolute asymmetry) are more significant in 2000 than that in 1999 for larger scale intervals, and they have similar characteristics for smaller scale intervals. We conclude that the strong asymmetry property and different correlation properties of upward and downward price series for larger scale intervals in 2000 have important implications on the collapse of California power market, and our findings shed a new light on the underlying mechanisms of power price.
Howard, Elizabeth J; Harville, Emily; Kissinger, Patricia; Xiong, Xu
2013-07-01
There is growing interest in the application of propensity scores (PS) in epidemiologic studies, especially within the field of reproductive epidemiology. This retrospective cohort study assesses the impact of a short interpregnancy interval (IPI) on preterm birth and compares the results of the conventional logistic regression analysis with analyses utilizing a PS. The study included 96,378 singleton infants from Louisiana birth certificate data (1995-2007). Five regression models designed for methods comparison are presented. Ten percent (10.17 %) of all births were preterm; 26.83 % of births were from a short IPI. The PS-adjusted model produced a more conservative estimate of the exposure variable compared to the conventional logistic regression method (β-coefficient: 0.21 vs. 0.43), as well as a smaller standard error (0.024 vs. 0.028), odds ratio and 95 % confidence intervals [1.15 (1.09, 1.20) vs. 1.23 (1.17, 1.30)]. The inclusion of more covariate and interaction terms in the PS did not change the estimates of the exposure variable. This analysis indicates that PS-adjusted regression may be appropriate for validation of conventional methods in a large dataset with a fairly common outcome. PS's may be beneficial in producing more precise estimates, especially for models with many confounders and effect modifiers and where conventional adjustment with logistic regression is unsatisfactory. Short intervals between pregnancies are associated with preterm birth in this population, according to either technique. Birth spacing is an issue that women have some control over. Educational interventions, including birth control, should be applied during prenatal visits and following delivery.
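A minimal sketch of a PS-adjusted analysis, assuming the statsmodels package and simulated data, is given below: a logistic regression of the exposure (short interpregnancy interval) on the confounders yields the propensity score, which is then entered as a covariate in the logistic outcome model for preterm birth. Covariate adjustment on the PS is only one of several ways a propensity score can be used, and the variable names and coefficients are illustrative.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(15)
n = 5000

# Simulated confounders, exposure (short interpregnancy interval) and outcome (preterm birth)
age = rng.normal(27, 6, n)
smoker = rng.binomial(1, 0.2, n)
X = np.column_stack([age, smoker])
p_short = 1 / (1 + np.exp(-(-0.5 - 0.03 * (age - 27) + 0.8 * smoker)))
short_ipi = rng.binomial(1, p_short)
p_preterm = 1 / (1 + np.exp(-(-2.3 + 0.2 * short_ipi + 0.5 * smoker - 0.01 * (age - 27))))
preterm = rng.binomial(1, p_preterm)

# Step 1: propensity score model (exposure ~ confounders)
ps_model = sm.Logit(short_ipi, sm.add_constant(X)).fit(disp=0)
ps = ps_model.predict(sm.add_constant(X))

# Step 2: outcome model adjusted for the propensity score instead of the raw confounders
out_model = sm.Logit(preterm, sm.add_constant(np.column_stack([short_ipi, ps]))).fit(disp=0)
coef = out_model.params[1]
lo, hi = out_model.conf_int()[1]
print("short IPI: OR = %.2f, 95%% CI = (%.2f, %.2f)" % (np.exp(coef), np.exp(lo), np.exp(hi)))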
Influence analysis in quantitative trait loci detection.
Dou, Xiaoling; Kuriki, Satoshi; Maeno, Akiteru; Takada, Toyoyuki; Shiroishi, Toshihiko
2014-07-01
This paper presents systematic methods for the detection of influential individuals that affect the log odds (LOD) score curve. We derive general formulas of influence functions for profile likelihoods and introduce them into two standard quantitative trait locus detection methods: the interval mapping method and single marker analysis. Besides influence analysis on specific LOD scores, we also develop influence analysis methods on the shape of the LOD score curves. A simulation-based method is proposed to assess the significance of the influence of the individuals. These methods are shown to be useful in the influence analysis of a real dataset of an experimental population from an F2 mouse cross. By receiver operating characteristic analysis, we confirm that the proposed methods show better performance than existing diagnostics. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Alpermann, Anke; Huber, Walter; Natke, Ulrich; Willmes, Klaus
2010-09-01
Improved fluency after stuttering therapy is usually measured by the percentage of stuttered syllables. However, outcome studies rarely evaluate the use of trained speech patterns that speakers use to manage stuttering. This study investigated whether the modified time interval analysis can distinguish between trained speech patterns, fluent speech, and stuttered speech. Seventeen German experts on stuttering judged a speech sample on two occasions. Speakers of the sample were stuttering adults, who were not undergoing therapy, as well as participants in a fluency shaping and a stuttering modification therapy. Results showed satisfactory inter-judge and intra-judge agreement above 80%. Intervals with trained speech patterns were identified as consistently as stuttered and fluent intervals. We discuss limitations of the study, as well as implications of our findings for the development of training for identification of trained speech patterns and future outcome studies. The reader will be able to (a) explain different methods to measure the use of trained speech patterns, (b) evaluate whether German experts are able to discriminate intervals with trained speech patterns reliably from fluent and stuttered intervals and (c) describe how the measurement of trained speech patterns can contribute to outcome studies.
Extended time-interval analysis
NASA Astrophysics Data System (ADS)
Fynbo, H. O. U.; Riisager, K.
2014-01-01
Several extensions of the half-life analysis method recently suggested by Horvat and Hardy are put forward. Goodness-of-fit testing is included, and the method is extended to cases where more information is available for each decay event, which allows applications also to, e.g., γ-decay data. The results are tested with Monte Carlo simulations and are applied to the decays of 64Cu and 56Mn.
Item Factor Analysis: Current Approaches and Future Directions
ERIC Educational Resources Information Center
Wirth, R. J.; Edwards, Michael C.
2007-01-01
The rationale underlying factor analysis applies to continuous and categorical variables alike; however, the models and estimation methods for continuous (i.e., interval or ratio scale) data are not appropriate for item-level data that are categorical in nature. The authors provide a targeted review and synthesis of the item factor analysis (IFA)…
Least Squares Moving-Window Spectral Analysis.
Lee, Young Jong
2017-08-01
Least squares regression is proposed as a moving-window method for analysis of a series of spectra acquired as a function of external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of the Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high frequency noise.
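A minimal sketch of the moving-window least squares idea for nonuniform perturbation spacing, fitting a first-order polynomial in each window and taking its slope as the local derivative; the actual LSMW method may use other polynomial orders, and the window size and test signal below are assumptions.

# Sketch of a least squares moving-window (LSMW) first derivative for spectra
# acquired at nonuniformly spaced perturbation values. Within each window a
# straight line is fitted by least squares and its slope is taken as the
# derivative at the window centre (variable names are illustrative only).
import numpy as np

def lsmw_slope(perturbation, intensity, half_window=3):
    x = np.asarray(perturbation, float)
    y = np.asarray(intensity, float)   # intensity at one wavenumber vs. perturbation
    slopes = np.full_like(y, np.nan)
    for i in range(half_window, len(x) - half_window):
        xs = x[i - half_window:i + half_window + 1]
        ys = y[i - half_window:i + half_window + 1]
        # slope of the least-squares line through (xs, ys)
        slopes[i] = np.polyfit(xs, ys, 1)[0]
    return slopes

# usage: nonuniform perturbation steps, noisy sigmoidal response
x = np.sort(np.random.default_rng(1).uniform(0, 10, 60))
y = 1 / (1 + np.exp(-(x - 5))) + np.random.default_rng(2).normal(0, 0.01, 60)
dy = lsmw_slope(x, y, half_window=4)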
Analysis of noise-induced temporal correlations in neuronal spike sequences
NASA Astrophysics Data System (ADS)
Reinoso, José A.; Torrent, M. C.; Masoller, Cristina
2016-11-01
We investigate temporal correlations in sequences of noise-induced neuronal spikes, using a symbolic method of time-series analysis. We focus on the sequence of time-intervals between consecutive spikes (inter-spike-intervals, ISIs). The analysis method, known as ordinal analysis, transforms the ISI sequence into a sequence of ordinal patterns (OPs), which are defined in terms of the relative ordering of consecutive ISIs. The ISI sequences are obtained from extensive simulations of two neuron models (FitzHugh-Nagumo, FHN, and integrate-and-fire, IF), with correlated noise. We find that, as the noise strength increases, temporal order gradually emerges, revealed by the existence of more frequent ordinal patterns in the ISI sequence. While in the FHN model the most frequent OP depends on the noise strength, in the IF model it is independent of the noise strength. In both models, the correlation time of the noise affects the OP probabilities but does not modify the most probable pattern.
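A minimal sketch of ordinal analysis applied to an inter-spike-interval sequence, mapping each group of D consecutive intervals to its ordinal pattern and estimating the pattern probabilities; the pattern order and the surrogate data are illustrative choices, not those of the study.

# Minimal sketch of ordinal analysis of an inter-spike-interval (ISI) sequence:
# each group of D consecutive ISIs is mapped to the permutation that sorts it,
# and the relative frequencies of the D! ordinal patterns are estimated.
import numpy as np
from collections import Counter

def ordinal_pattern_probs(isi, order=3):
    isi = np.asarray(isi, float)
    patterns = [tuple(np.argsort(isi[i:i + order]))      # rank order of D consecutive ISIs
                for i in range(len(isi) - order + 1)]
    counts = Counter(patterns)
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

# usage: a surrogate ISI sequence; for uncorrelated data all 3! = 6 patterns
# should appear with probability close to 1/6
rng = np.random.default_rng(0)
probs = ordinal_pattern_probs(rng.exponential(1.0, 5000), order=3)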
Fagerland, Morten W; Sandvik, Leiv; Mowinckel, Petter
2011-04-13
The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for analysis and presentation of results for discrete numerical variables. Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
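A small sketch of the recommended analysis, Welch's unequal-variance t test together with its confidence interval for the difference between the means; the example counts are invented.

# Sketch of the comparison recommended in the abstract: Welch's unequal-variance
# t test plus a confidence interval for the difference between the means of two
# independent discrete numerical variables (e.g., event counts 0-5 per patient).
import numpy as np
from scipy import stats

def welch_test_and_ci(x, y, alpha=0.05):
    x, y = np.asarray(x, float), np.asarray(y, float)
    t, p = stats.ttest_ind(x, y, equal_var=False)          # Welch U test
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((x.var(ddof=1) / len(x))**2 / (len(x) - 1) +
                  (y.var(ddof=1) / len(y))**2 / (len(y) - 1))
    d = x.mean() - y.mean()
    half = stats.t.ppf(1 - alpha / 2, df) * se
    return d, (d - half, d + half), p

# usage with illustrative event counts per patient in two groups
diff, ci, p = welch_test_and_ci([0, 1, 1, 2, 0, 3, 2, 1], [1, 2, 2, 3, 4, 2, 3, 3])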
Sheela, Shekaraiah; Aithal, Venkataraja U; Rajashekhar, Bellur; Lewis, Melissa Glenda
2016-01-01
Tracheoesophageal (TE) prosthetic voice is one of the voice restoration options for individuals who have undergone a total laryngectomy. Aerodynamic analysis of the TE voice provides insight into the physiological changes that occur at the level of the neoglottis with voice prosthesis in situ. The present study is a systematic review and meta-analysis of sub-neoglottic pressure (SNP) measurement in TE speakers by direct and indirect methods. The screening of abstracts and titles was carried out for inclusion of articles using 10 electronic databases spanning the period from 1979 to 2016. Ten articles which met the inclusion criteria were considered for meta-analysis with a pooled age range of 40-83 years. The pooled mean SNP obtained from the direct measurement method was 53.80 cm H2O with a 95% confidence interval of 21.14-86.46 cm H2O, while for the indirect measurement method, the mean SNP was 23.55 cm H2O with a 95% confidence interval of 19.23-27.87 cm H2O. Based on the literature review, the various procedures followed for direct and indirect measurements of SNP contributed to a range of differences in outcome measures. The meta-analysis revealed that the "interpolation method" for indirect estimation of SNP was the most acceptable and valid method in TE speakers. © 2017 S. Karger AG, Basel.
Barlow, Paul M.; Cunningham, William L.; Zhai, Tong; Gray, Mark
2015-01-01
This report is a user guide for the streamflow-hydrograph analysis methods provided with version 1.0 of the U.S. Geological Survey (USGS) Groundwater Toolbox computer program. These include six hydrograph-separation methods to determine the groundwater-discharge (base-flow) and surface-runoff components of streamflow—the Base-Flow Index (BFI; Standard and Modified), HYSEP (Fixed Interval, Sliding Interval, and Local Minimum), and PART methods—and the RORA recession-curve displacement method and associated RECESS program to estimate groundwater recharge from streamflow data. The Groundwater Toolbox is a customized interface built on the nonproprietary, open source MapWindow geographic information system software. The program provides graphing, mapping, and analysis capabilities in a Microsoft Windows computing environment. In addition to these hydrograph-analysis methods, the Groundwater Toolbox allows for the retrieval of hydrologic time-series data (streamflow, groundwater levels, and precipitation) from the USGS National Water Information System, downloading of a suite of preprocessed geographic information system coverages and meteorological data from the National Oceanic and Atmospheric Administration National Climatic Data Center, and analysis of data with several preprocessing and postprocessing utilities. With its data retrieval and analysis tools, the Groundwater Toolbox provides methods to estimate many of the components of the water budget for a hydrologic basin, including precipitation; streamflow; base flow; runoff; groundwater recharge; and total, groundwater, and near-surface evapotranspiration.
Yin, Kedong; Wang, Pengyu; Li, Xuemei
2017-12-13
With respect to multi-attribute group decision-making (MAGDM) problems, where attribute values take the form of interval grey trapezoid fuzzy linguistic variables (IGTFLVs) and the weights (including expert and attribute weight) are unknown, improved grey relational MAGDM methods are proposed. First, the concept of IGTFLV, the operational rules, the distance between IGTFLVs, and the projection formula between the two IGTFLV vectors are defined. Second, the expert weights are determined by using the maximum proximity method based on the projection values between the IGTFLV vectors. The attribute weights are determined by the maximum deviation method and the priorities of alternatives are determined by improved grey relational analysis. Finally, an example is given to prove the effectiveness of the proposed method and the flexibility of IGTFLV.
Krstacic, Goran; Krstacic, Antonija; Smalcelj, Anton; Milicic, Davor; Jembrek-Gostovic, Mirjana
2007-04-01
Dynamic analysis techniques may quantify abnormalities in heart rate variability (HRV) based on nonlinear and fractal analysis (chaos theory). The article emphasizes the clinical and prognostic significance of dynamic changes in short-time series applied to patients with coronary heart disease (CHD) during the exercise electrocardiograph (ECG) test. Subjects were included in the series after complete cardiovascular diagnostic data were obtained. Series of R-R and ST-T intervals were obtained from digitally sampled exercise ECG data. The rescaled range analysis method was used to determine the fractal dimension of the intervals. To quantify the fractal long-range correlation properties of heart rate variability, the detrended fluctuation analysis technique was used. Approximate entropy (ApEn) was applied to quantify the regularity and complexity of the time series, as well as the unpredictability of fluctuations in the time series. It was found that the short-term fractal scaling exponent (alpha(1)) is significantly lower in patients with CHD (0.93 +/- 0.07 vs 1.09 +/- 0.04; P < 0.001). The patients with CHD had a higher fractal dimension in each exercise test program separately, as well as in the exercise program overall. ApEn was significantly lower in the CHD group in both R-R and ST-T ECG intervals (P < 0.001). The nonlinear dynamic methods could also have clinical and prognostic applicability in short-time ECG series. Dynamic analysis based on chaos theory during the exercise ECG test points to multifractal time series in CHD patients, who lose normal fractal characteristics and regularity in HRV. Nonlinear analysis techniques may complement traditional ECG analysis.
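A compact sketch of detrended fluctuation analysis of the kind used to obtain a short-term scaling exponent such as alpha1; the scale range, first-order detrending, and simulated RR series are common but assumed choices, not those of the study.

# Sketch of detrended fluctuation analysis (DFA) for an RR-interval series, of
# the kind used to obtain the short-term scaling exponent alpha1; window range
# and detrending order here are common choices, not those of the study.
import numpy as np

def dfa_alpha(rr, scales=range(4, 17)):
    rr = np.asarray(rr, float)
    y = np.cumsum(rr - rr.mean())                 # integrated, mean-centred series
    log_n, log_f = [], []
    for n in scales:
        n_win = len(y) // n
        if n_win < 2:
            continue
        segs = y[:n_win * n].reshape(n_win, n)
        x = np.arange(n)
        f2 = []
        for seg in segs:                          # detrend each window with a line
            coef = np.polyfit(x, seg, 1)
            f2.append(np.mean((seg - np.polyval(coef, x))**2))
        log_n.append(np.log(n))
        log_f.append(0.5 * np.log(np.mean(f2)))   # log of RMS fluctuation F(n)
    return np.polyfit(log_n, log_f, 1)[0]         # slope = scaling exponent

# usage: alpha close to 0.5 for uncorrelated RR data, near 1.0 for 1/f-like HRV
alpha1 = dfa_alpha(np.random.default_rng(0).normal(800, 50, 3000))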
Using recurrence plot for determinism analysis of EEG recordings in genetic absence epilepsy rats.
Ouyang, Gaoxiang; Li, Xiaoli; Dang, Chuangyin; Richards, Douglas A
2008-08-01
Understanding the transition of brain activity towards an absence seizure is a challenging task. In this paper, we use recurrence quantification analysis to indicate the deterministic dynamics of EEG series at the seizure-free, pre-seizure and seizure states in genetic absence epilepsy rats. The determinism measure, DET, based on recurrence plot, was applied to analyse these three EEG datasets, each dataset containing 300 single-channel EEG epochs of 5-s duration. Then, statistical analysis of the DET values in each dataset was carried out to determine whether their distributions over the three groups were significantly different. Furthermore, a surrogate technique was applied to calculate the significance level of determinism measures in EEG recordings. The mean (+/-SD) DET of EEG was 0.177+/-0.045 in pre-seizure intervals. The DET values of pre-seizure EEG data are significantly higher than those of seizure-free intervals, 0.123+/-0.023, (P<0.01), but lower than those of seizure intervals, 0.392+/-0.110, (P<0.01). Using surrogate data methods, the significance of determinism in EEG epochs was present in 25 of 300 (8.3%), 181 of 300 (60.3%) and 289 of 300 (96.3%) in seizure-free, pre-seizure and seizure intervals, respectively. Results provide some first indications that EEG epochs during pre-seizure intervals exhibit a higher degree of determinism than seizure-free EEG epochs, but lower than those in seizure EEG epochs in absence epilepsy. The proposed methods have the potential of detecting the transition between normal brain activity and the absence seizure state, thus opening up the possibility of intervention, whether electrical or pharmacological, to prevent the oncoming seizure.
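A minimal sketch of the determinism measure DET from a recurrence plot, counting the fraction of recurrence points that fall on diagonal lines of at least a minimum length; the threshold choice is illustrative and the embedding step used for EEG epochs is omitted.

# Sketch of the determinism measure DET from a recurrence plot: fraction of
# recurrence points that lie on diagonal lines of at least l_min points.
# The embedding step is omitted for brevity; threshold choice is illustrative.
import numpy as np

def recurrence_det(x, eps=None, l_min=2):
    x = np.asarray(x, float)
    d = np.abs(x[:, None] - x[None, :])            # pairwise distance matrix
    if eps is None:
        eps = 0.2 * x.std()                        # illustrative threshold
    r = (d <= eps).astype(int)
    n = len(x)
    recurrent = r.sum() - n                        # exclude the main diagonal
    in_lines = 0
    for k in range(1, n):                          # scan each off-main diagonal
        diag = np.diagonal(r, offset=k)
        # lengths of runs of consecutive ones on this diagonal
        runs = np.diff(np.flatnonzero(np.diff(np.r_[0, diag, 0])))[::2]
        in_lines += 2 * runs[runs >= l_min].sum()  # count both symmetric halves
    return in_lines / recurrent if recurrent else 0.0

# usage: a deterministic sine gives a much higher DET than white noise
det_sine = recurrence_det(np.sin(np.linspace(0, 20 * np.pi, 500)))
det_noise = recurrence_det(np.random.default_rng(0).normal(size=500))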
Krishnan, Sunder Ram; Seelamantula, Chandra Sekhar; Bouwens, Arno; Leutenegger, Marcel; Lasser, Theo
2012-10-01
We address the problem of high-resolution reconstruction in frequency-domain optical-coherence tomography (FDOCT). The traditional method employed uses the inverse discrete Fourier transform, which is limited in resolution due to the Heisenberg uncertainty principle. We propose a reconstruction technique based on zero-crossing (ZC) interval analysis. The motivation for our approach lies in the observation that, for a multilayered specimen, the backscattered signal may be expressed as a sum of sinusoids, and each sinusoid manifests as a peak in the FDOCT reconstruction. The successive ZC intervals of a sinusoid exhibit high consistency, with the intervals being inversely related to the frequency of the sinusoid. The statistics of the ZC intervals are used for detecting the frequencies present in the input signal. The noise robustness of the proposed technique is improved by using a cosine-modulated filter bank for separating the input into different frequency bands, and the ZC analysis is carried out on each band separately. The design of the filter bank requires the design of a prototype, which we accomplish using a Kaiser window approach. We show that the proposed method gives good results on synthesized and experimental data. The resolution is enhanced, and noise robustness is higher compared with the standard Fourier reconstruction.
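A small sketch of the zero-crossing interval idea underlying the reconstruction: for a near-sinusoidal band, consecutive zero-crossing intervals approximate half a period, so their statistics yield the frequency. The cosine-modulated filter bank and the full FDOCT reconstruction are not shown; the test tone is illustrative.

# Sketch of zero-crossing (ZC) interval analysis for a single band-limited
# component: crossing times are refined by linear interpolation and the median
# ZC interval (about half a period) gives the frequency estimate.
import numpy as np

def zc_frequency(signal, fs):
    s = np.asarray(signal, float)
    zc = np.flatnonzero(np.signbit(s[:-1]) != np.signbit(s[1:]))   # crossing indices
    # refine each crossing by linear interpolation between the two samples
    t = (zc - s[zc] / (s[zc + 1] - s[zc])) / fs
    intervals = np.diff(t)                 # successive ZC intervals ~ T/2
    return 1.0 / (2.0 * np.median(intervals)), intervals.std()

# usage: a 37 Hz tone sampled at 1 kHz with a little noise
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 37 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
f_est, spread = zc_frequency(x, fs)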
Wu, Zheng; Zeng, Li-bo; Wu, Qiong-shui
2016-02-01
Conventional cervical cancer screening methods mainly include the TBS (The Bethesda System) classification method and quantitative cellular DNA analysis. However, the use of a multiple staining method on a single cell slide, in which the cytoplasm is stained with Papanicolaou reagent and the nucleus with Feulgen reagent, to perform both screening methods at the same time has not previously been studied. The difficulty of this multiple staining method is that the absorbance of non-DNA material may interfere with the absorbance of DNA. We therefore set up a multispectral imaging system and established an absorbance unmixing model, using multiple linear regression based on the linear superposition of absorbances, to strip out the DNA absorbance and run the quantitative DNA analysis, thereby combining the two conventional screening methods. A series of experiments showed no statistically significant difference, at a test level of 1%, between the DNA absorbance calculated by the unmixing model and the measured DNA absorbance. In practical application, the 99% confidence interval of the DNA index of tetraploid cells screened by this method did not intersect the DNA-index interval used to identify cancer cells. These results verify the accuracy and feasibility of quantitative DNA analysis with the multiple staining method, which therefore has broad application prospects and considerable market potential in the early diagnosis of cervical cancer and other cancers.
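The unmixing step described above amounts to a linear least squares problem. The sketch below illustrates the general idea under the stated linear-superposition assumption; the reference spectra, band positions, and weights are illustrative placeholders, not values from the study.

# Sketch of absorbance unmixing by multiple linear regression: assuming the
# measured absorbance at each wavelength band is a linear superposition of the
# reference spectra of the stains, ordinary least squares recovers the
# contribution of each stain, from which the DNA (Feulgen) absorbance follows.
import numpy as np

def unmix_absorbance(measured, reference_spectra):
    """measured: (n_bands,) absorbance; reference_spectra: (n_bands, n_stains)."""
    coef, *_ = np.linalg.lstsq(reference_spectra, measured, rcond=None)
    return coef                              # per-stain weights

# usage with two hypothetical stains (Feulgen ~ DNA, Papanicolaou ~ cytoplasm)
bands = np.linspace(450, 650, 8)             # nm, multispectral bands (illustrative)
ref = np.column_stack([np.exp(-((bands - 560) / 40) ** 2),   # Feulgen-like
                       np.exp(-((bands - 500) / 60) ** 2)])  # Papanicolaou-like
true_w = np.array([0.7, 0.3])
measured = ref @ true_w + 0.01 * np.random.default_rng(0).normal(size=bands.size)
weights = unmix_absorbance(measured, ref)
dna_absorbance = weights[0] * ref[:, 0]      # reconstructed DNA contribution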
Chosen interval methods for solving linear interval systems with special type of matrix
NASA Astrophysics Data System (ADS)
Szyszka, Barbara
2013-10-01
The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur when solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) using the second-order central difference interval method. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore, the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they introduce no method error. All calculations were performed in floating-point interval arithmetic.
Interpretable functional principal component analysis.
Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo
2016-09-01
Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.
Chu, Catherine J; Chan, Arthur; Song, Dan; Staley, Kevin J; Stufflebeam, Steven M; Kramer, Mark A
2017-02-01
High frequency oscillations are emerging as a clinically important indicator of epileptic networks. However, manual detection of these high frequency oscillations is difficult, time consuming, and subjective, especially in the scalp EEG, thus hindering further clinical exploration and application. Semi-automated detection methods augment manual detection by reducing inspection to a subset of time intervals. We propose a new method to detect high frequency oscillations that co-occur with interictal epileptiform discharges. The new method proceeds in two steps. The first step identifies candidate time intervals during which high frequency activity is increased. The second step computes a set of seven features for each candidate interval. These features require that the candidate event contain a high frequency oscillation approximately sinusoidal in shape, with at least three cycles, that co-occurs with a large amplitude discharge. Candidate events that satisfy these features are stored for validation through visual analysis. We evaluate the detector performance in simulation and on ten examples of scalp EEG data, and show that the proposed method successfully detects spike-ripple events, with high positive predictive value, low false positive rate, and high intra-rater reliability. The proposed method is less sensitive than the existing method of visual inspection, but much faster and much more reliable. Accurate and rapid detection of high frequency activity increases the clinical viability of this rhythmic biomarker of epilepsy. The proposed spike-ripple detector rapidly identifies candidate spike-ripple events, thus making clinical analysis of prolonged, multielectrode scalp EEG recordings tractable. Copyright © 2016 Elsevier B.V. All rights reserved.
Nonparametric methods in actigraphy: An update
Gonçalves, Bruno S.B.; Cavalcanti, Paula R.A.; Tavares, Gracilene R.; Campos, Tania F.; Araujo, John F.
2014-01-01
Circadian rhythmicity in humans has been well studied using actigraphy, a method of measuring gross motor movement. As actigraphic technology continues to evolve, it is important for data analysis to keep pace with new variables and features. Our objective is to study the behavior of two variables, interdaily stability (IS) and intradaily variability (IV), used to describe the rest-activity rhythm. Simulated data and actigraphy data of humans, rats, and marmosets were used in this study. We modified the calculation of IV and IS by modifying the time intervals of analysis. For each variable, we calculated the average value (IVm and ISm) across time intervals. Simulated data showed that (1) synchronization analysis depends on sample size, and (2) fragmentation is independent of the amplitude of the generated noise. We were able to obtain a significant difference in the fragmentation patterns of stroke patients using the IVm variable, whereas no difference was detected using the IV60 variable. Rhythmic synchronization of activity and rest was significantly higher in young subjects than in adults with Parkinson's disease when using the ISm variable; however, this difference was not seen using IS60. We propose an updated format to calculate rhythmic fragmentation, including two additional optional variables. These alternative methods of nonparametric analysis aim to more precisely detect sleep–wake cycle fragmentation and synchronization. PMID:26483921
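For readers unfamiliar with the two variables, the sketch below computes the classical interdaily stability and intradaily variability from an equally spaced activity series; the hourly binning and the simulated rhythm are illustrative assumptions, and the modified interval-dependent variants (IVm, ISm) proposed in the abstract are not reproduced here.

# Sketch of the classical nonparametric actigraphy variables: interdaily
# stability (IS) and intradaily variability (IV) computed from an equally
# spaced activity series; hourly binning (24 samples per day) is assumed.
import numpy as np

def is_iv(activity, samples_per_day=24):
    x = np.asarray(activity, float)
    n = len(x)
    xbar = x.mean()
    # IS: variance of the average 24-h profile relative to total variance
    n_days = n // samples_per_day
    profile = x[:n_days * samples_per_day].reshape(n_days, samples_per_day).mean(axis=0)
    is_val = (n * np.sum((profile - xbar) ** 2)) / (samples_per_day * np.sum((x - xbar) ** 2))
    # IV: mean squared first difference relative to total variance
    iv_val = (n * np.sum(np.diff(x) ** 2)) / ((n - 1) * np.sum((x - xbar) ** 2))
    return is_val, iv_val

# usage: a noisy 24-h rhythm over 10 days sampled hourly
t = np.arange(24 * 10)
activity = 50 + 40 * np.sin(2 * np.pi * t / 24) + np.random.default_rng(0).normal(0, 10, t.size)
IS, IV = is_iv(activity, samples_per_day=24)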
Methods of measurement signal acquisition from the rotational flow meter for frequency analysis
NASA Astrophysics Data System (ADS)
Świsulski, Dariusz; Hanus, Robert; Zych, Marcin; Petryka, Leszek
One of the simplest and most commonly used instruments for measuring the flow of homogeneous substances is the rotational flow meter. The main part of such a device is a rotor (vane or screw) rotating at a speed that is a function of the fluid or gas flow rate. A pulse signal with a frequency proportional to the speed of the rotor is obtained at the sensor output. For measurements in dynamic conditions, the variable interval between pulses prevents direct analysis of the measurement signal. Therefore, the authors of the article developed a method in which the measured value is determined from the last inter-pulse interval preceding the moment designated by the timing generator. For larger changes of the measured value at a predetermined time, the value can be determined by extrapolation of the two adjacent inter-pulse intervals, assuming a linear change in the flow. The proposed methods provide measurements with constant spacing, enabling analysis of the dynamics of changes in the test flow, e.g., using a Fourier transform. To present the advantages of these methods, simulations of flow measurement were carried out with a Kobold DRH-1140 rotor flow meter.
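A minimal sketch of the two reconstruction strategies described above, assuming only the pulse arrival times are available; the timing-generator grid, calibration factor, and flow profile are illustrative.

# Sketch: at every tick of a timing generator the flow is taken either from the
# last complete inter-pulse interval or from a linear extrapolation of the last
# two intervals, producing a uniformly sampled series suitable for an FFT.
import numpy as np

def resample_flow(pulse_times, sample_times, pulses_per_unit=1.0, extrapolate=False):
    pulse_times = np.asarray(pulse_times, float)
    out = np.full(len(sample_times), np.nan)
    for j, t in enumerate(sample_times):
        k = np.searchsorted(pulse_times, t) - 1      # last pulse before t
        if k < 1:
            continue
        f1 = 1.0 / (pulses_per_unit * (pulse_times[k] - pulse_times[k - 1]))
        if extrapolate and k >= 2:
            f2 = 1.0 / (pulses_per_unit * (pulse_times[k - 1] - pulse_times[k - 2]))
            # linear extrapolation of the flow estimated from the two last intervals
            out[j] = f1 + (f1 - f2) * (t - pulse_times[k]) / (pulse_times[k] - pulse_times[k - 1])
        else:
            out[j] = f1
    return out

# usage: pulses of an accelerating rotor resampled at a constant 10 Hz grid
pulses = np.cumsum(np.linspace(0.2, 0.05, 200))
grid = np.arange(pulses[2], pulses[-1], 0.1)
flow_uniform = resample_flow(pulses, grid, extrapolate=True)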
Frequency analysis of uncertain structures using imprecise probability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modares, Mehdi; Bergerson, Joshua
2015-01-01
Two new methods for finite element based frequency analysis of a structure with uncertainty are developed. An imprecise probability formulation based on enveloping p-boxes is used to quantify the uncertainty present in the mechanical characteristics of the structure. For each element, independent variations are considered. Using the two developed methods, P-box Frequency Analysis (PFA) and Interval Monte-Carlo Frequency Analysis (IMFA), sharp bounds on natural circular frequencies at different probability levels are obtained. These methods establish a framework for handling incomplete information in structural dynamics. Numerical example problems are presented that illustrate the capabilities of the new methods along with discussions on their computational efficiency.
Kruschke, John K; Liddell, Torrin M
2018-02-01
In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
Meta-analysis of few small studies in orphan diseases.
Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat
2017-03-01
Meta-analyses in orphan diseases and small populations generally face particular problems, including small numbers of studies, small study sizes and heterogeneity of results. However, the heterogeneity is difficult to estimate if only very few studies are included. Motivated by a systematic review in immunosuppression following liver transplantation in children, we investigate the properties of a range of commonly used frequentist and Bayesian procedures in simulation studies. Furthermore, the consequences for interval estimation of the common treatment effect in random-effects meta-analysis are assessed. The Bayesian credibility intervals using weakly informative priors for the between-trial heterogeneity exhibited coverage probabilities in excess of the nominal level for a range of scenarios considered. However, they tended to be shorter than those obtained by the Knapp-Hartung method, which were also conservative. In contrast, methods based on normal quantiles exhibited coverages well below the nominal levels in many scenarios. With very few studies, the performance of the Bayesian credibility intervals is of course sensitive to the specification of the prior for the between-trial heterogeneity. In conclusion, the use of weakly informative priors as exemplified by half-normal priors (with a scale of 0.5 or 1.0) for log odds ratios is recommended for applications in rare diseases. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
Reference Intervals of Hematology and Clinical Chemistry Analytes for 1-Year-Old Korean Children.
Lee, Hye Ryun; Shin, Sue; Yoon, Jong Hyun; Roh, Eun Youn; Chang, Ju Young
2016-09-01
Reference intervals need to be established according to age. We established reference intervals of hematology and chemistry from community-based healthy 1-yr-old children and analyzed their iron status according to the feeding methods during the first six months after birth. A total of 887 children who received a medical check-up between 2010 and 2014 at Boramae Hospital (Seoul, Korea) were enrolled. A total of 534 children (247 boys and 287 girls) were enrolled as reference individuals after the exclusion of data obtained from children with suspected iron deficiency. Hematology and clinical chemistry analytes were measured, and the reference value of each analyte was estimated by using parametric (mean±2 SD) or nonparametric methods (2.5-97.5th percentile). Iron, total iron-binding capacity, and ferritin were measured, and transferrin saturation was calculated. As there were no differences in the mean values between boys and girls, we established the reference intervals for 1-yr-old children regardless of sex. The analysis of serum iron status according to feeding methods during the first six months revealed higher iron, ferritin, and transferrin saturation levels in children exclusively or mainly fed formula than in children exclusively or mainly fed breast milk. We established reference intervals of hematology and clinical chemistry analytes from community-based healthy children at one year of age. These reference intervals will be useful for interpreting results of medical check-ups at one year of age.
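As a small illustration of the two estimators named above (parametric mean ± 2 SD and nonparametric 2.5th-97.5th percentile), a sketch with simulated values follows; the analyte, units, and distribution are assumptions, not the study's data.

# Sketch of the two reference-interval estimators mentioned in the abstract.
import numpy as np

def reference_interval(values, parametric=True):
    x = np.asarray(values, float)
    if parametric:
        m, s = x.mean(), x.std(ddof=1)
        return m - 2 * s, m + 2 * s                 # mean +/- 2 SD
    return tuple(np.percentile(x, [2.5, 97.5]))     # 2.5th-97.5th percentile

# usage with illustrative hemoglobin values (g/dL) from 534 healthy children
rng = np.random.default_rng(0)
hb = rng.normal(12.0, 0.9, 534)
lo_p, hi_p = reference_interval(hb, parametric=True)
lo_np, hi_np = reference_interval(hb, parametric=False)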
Analysis method for Thomson scattering diagnostics in GAMMA 10/PDX.
Ohta, K; Yoshikawa, M; Yasuhara, R; Chikatsu, M; Shima, Y; Kohagura, J; Sakamoto, M; Nakasima, Y; Imai, T; Ichimura, M; Yamada, I; Funaba, H; Minami, T
2016-11-01
We have developed an analysis method to improve the accuracy of electron temperature measurements by employing a fitting technique for the raw Thomson scattering (TS) signals. Least squares fitting of the raw TS signals enabled reduction of the error in the electron temperature measurement. We applied the analysis method to a multi-pass (MP) TS system. Because the interval between the MPTS signals is very short, it is difficult to analyze each Thomson scattering signal intensity separately from the raw signals. We used the fitting method to recover the original TS scattering signals from the measured raw MPTS signals and to obtain the electron temperatures in each pass.
Structural reliability analysis under evidence theory using the active learning kriging model
NASA Astrophysics Data System (ADS)
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
GIGGLE: a search engine for large-scale integrated genome analysis
Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R
2018-01-01
GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation. PMID:29309061
Au-yeung, Wan-tai M.; Reinhall, Per; Poole, Jeanne E.; Anderson, Jill; Johnson, George; Fletcher, Ross D.; Moore, Hans J.; Mark, Daniel B.; Lee, Kerry L.; Bardy, Gust H.
2015-01-01
Background In the SCD-HeFT a significant fraction of the congestive heart failure (CHF) patients ultimately did not die suddenly from arrhythmic causes. CHF patients will benefit from better tools to identify whether ICD therapy is needed. Objective To identify predictor variables from baseline SCD-HeFT patients’ RR intervals that correlate with arrhythmic sudden cardiac death (SCD) and mortality and to design an ICD therapy screening test. Methods Ten predictor variables were extracted from pre-randomization Holter data from 475 patients enrolled in the SCD-HeFT ICD arm using novel and traditional heart rate variability methods. All variables were correlated with SCD using the Mann-Whitney-Wilcoxon test and receiver operating characteristic analysis. ICD therapy screening tests were designed by minimizing the cost of false classifications. Survival analysis, including the log-rank test and Cox models, was also performed. Results α1 and α2 from detrended fluctuation analysis, the ratio of low to high frequency power, the number of PVCs per hour and heart rate turbulence slope are all statistically significant for predicting the occurrence of SCD (p<0.001) and survival (log-rank p<0.01). The most powerful multivariate predictor using the Cox proportional hazards model was α2, with a hazard ratio of 0.0465 (95% CI: 0.00528 – 0.409, p<0.01). Conclusion Predictor variables from RR intervals correlate with the occurrence of SCD and distinguish survival among SCD-HeFT ICD patients. We believe SCD prediction models should incorporate Holter-based RR interval analysis to refine ICD patient selection, especially in removing patients who are unlikely to benefit from ICD therapy. PMID:26096609
The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution
NASA Astrophysics Data System (ADS)
Shin, H.; Heo, J.; Kim, T.; Jung, Y.
2007-12-01
The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there are few studies on the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.
A critique of the usefulness of inferential statistics in applied behavior analysis
Hopkins, B. L.; Cole, Brian L.; Mason, Tina L.
1998-01-01
Researchers continue to recommend that applied behavior analysts use inferential statistics in making decisions about effects of independent variables on dependent variables. In many other approaches to behavioral science, inferential statistics are the primary means for deciding the importance of effects. Several possible uses of inferential statistics are considered. Rather than being an objective means for making decisions about effects, as is often claimed, inferential statistics are shown to be subjective. It is argued that the use of inferential statistics adds nothing to the complex and admittedly subjective nonstatistical methods that are often employed in applied behavior analysis. Attacks on inferential statistics that are being made, perhaps with increasing frequency, by those who are not behavior analysts, are discussed. These attackers are calling for banning the use of inferential statistics in research publications and commonly recommend that behavioral scientists should switch to using statistics aimed at interval estimation or the method of confidence intervals. Interval estimation is shown to be contrary to the fundamental assumption of behavior analysis that only individuals behave. It is recommended that authors who wish to publish the results of inferential statistics be asked to justify them as a means for helping us to identify any ways in which they may be useful. PMID:22478304
Statistical physics and physiology: monofractal and multifractal approaches
NASA Technical Reports Server (NTRS)
Stanley, H. E.; Amaral, L. A.; Goldberger, A. L.; Havlin, S.; Peng, C. K.
1999-01-01
Even under healthy, basal conditions, physiologic systems show erratic fluctuations resembling those found in dynamical systems driven away from a single equilibrium state. Do such "nonequilibrium" fluctuations simply reflect the fact that physiologic systems are being constantly perturbed by external and intrinsic noise? Or do these fluctuations actually contain useful, "hidden" information about the underlying nonequilibrium control mechanisms? We report some recent attempts to understand the dynamics of complex physiologic fluctuations by adapting and extending concepts and methods developed very recently in statistical physics. Specifically, we focus on interbeat interval variability as an important quantity to help elucidate possibly non-homeostatic physiologic variability because (i) the heart rate is under direct neuroautonomic control, (ii) interbeat interval variability is readily measured by noninvasive means, and (iii) analysis of these heart rate dynamics may provide important practical diagnostic and prognostic information not obtainable with current approaches. The analytic tools we discuss may be used on a wider range of physiologic signals. We first review recent progress using two analysis methods--detrended fluctuation analysis and wavelets--sufficient for quantifying monofractal structures. We then describe recent work that quantifies multifractal features of interbeat interval series, and the discovery that the multifractal structure of healthy subjects is different from that of diseased subjects.
Aguilera Eguía, Raúl Alberto; Russell Guzmán, Javier Antonio; Soto Muñoz, Marcelo Enrique; Villegas González, Bastián Eduardo; Poblete Aro, Carlos Emilio; Ibacache Palma, Alejandro
2015-03-05
Type 2 diabetes mellitus is one of the major non-communicable chronic diseases in the world. Its prevalence in Chile is significant, and complications associated with this disease involve great costs, which is why prevention and treatment of this condition are essential. Physical exercise is an effective means for prevention and treatment of type 2 diabetes mellitus. The emergence of new forms of physical training, such as "high intensity interval training", presents novel therapeutic alternatives for patients and health care professionals. To assess the validity and applicability of the results regarding the effectiveness of high intensity interval training in reducing glycosylated hemoglobin in adult patients with type 2 diabetes mellitus and answer the following question: In subjects with type 2 diabetes, can the method of high intensity interval training compared to moderate intensity exercise decrease glycosylated hemoglobin? We performed a critical analysis of the article "Feasibility and preliminary effectiveness of high intensity interval training in type 2 diabetes". We found no significant differences in the amount of glycosylated hemoglobin between groups of high intensity interval training and moderate-intensity exercise upon completion of the study (p>0.05). In adult patients with type 2 diabetes mellitus, high intensity interval training does not significantly improve glycosylated hemoglobin levels. Despite this, the high intensity interval training method shows as much improvement in body composition and physical condition as the moderate intensity exercise program.
NASA Astrophysics Data System (ADS)
Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen
2017-06-01
This paper presents a Bayesian approach using the Metropolis-Hastings Markov chain Monte Carlo algorithm and applies it to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of the daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. The Bayesian MCMC method may therefore be more favorable for uncertainty analysis and risk management.
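A generic random-walk Metropolis-Hastings sketch of the kind of sampler referred to above; the lognormal flow model, priors, proposal scale, and synthetic data are illustrative assumptions and do not reproduce the authors' model of the Zhujiachuan watershed.

# Minimal random-walk Metropolis-Hastings sampler for (mu, log sigma) of a
# lognormal model of daily flow rates; priors and proposal scale are illustrative.
import numpy as np

def log_post(theta, data):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # weakly informative normal priors (an assumption, not from the paper)
    lp = -0.5 * (mu / 10) ** 2 - 0.5 * (log_sigma / 10) ** 2
    ll = np.sum(-np.log(data * sigma) - 0.5 * ((np.log(data) - mu) / sigma) ** 2)
    return lp + ll

def metropolis_hastings(data, n_iter=20000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([np.log(data).mean(), np.log(np.log(data).std())])
    chain = np.empty((n_iter, 2))
    lp = log_post(theta, data)
    for i in range(n_iter):
        prop = theta + rng.normal(0, step, 2)
        lp_prop = log_post(prop, data)
        if np.log(rng.uniform()) < lp_prop - lp:     # accept/reject step
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# usage: 95% credible interval for the median daily flow exp(mu)
flows = np.random.default_rng(1).lognormal(mean=3.0, sigma=0.6, size=365)
chain = metropolis_hastings(flows)[5000:]            # discard burn-in
ci = np.percentile(np.exp(chain[:, 0]), [2.5, 97.5])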
Systems of fuzzy equations in structural mechanics
NASA Astrophysics Data System (ADS)
Skalna, Iwona; Rama Rao, M. V.; Pownuk, Andrzej
2008-08-01
Systems of linear and nonlinear equations with fuzzy parameters are relevant to many practical problems arising in structural mechanics, electrical engineering, finance, economics and physics. In this paper three methods for solving such equations are discussed: a method for the outer interval solution of systems of linear equations depending linearly on interval parameters, the fuzzy finite element method proposed by Rama Rao, and a sensitivity analysis method. The performance and advantages of the presented methods are described with illustrative examples. An extended version of the present paper can be downloaded from the web page of UTEP [I. Skalna, M.V. Rama Rao, A. Pownuk, Systems of fuzzy equations in structural mechanics, The University of Texas at El Paso, Department of Mathematical Sciences Research Reports Series,
Kishore, Amit; Vail, Andy; Majid, Arshad; Dawson, Jesse; Lees, Kennedy R; Tyrrell, Pippa J; Smith, Craig J
2014-02-01
Atrial fibrillation (AF) confers a high risk of recurrent stroke, although detection methods and definitions of paroxysmal AF during screening vary. We therefore undertook a systematic review and meta-analysis to determine the frequency of newly detected AF using noninvasive or invasive cardiac monitoring after ischemic stroke or transient ischemic attack. Prospective observational studies or randomized controlled trials of patients with ischemic stroke, transient ischemic attack, or both, who underwent any cardiac monitoring for a minimum of 12 hours, were included after electronic searches of multiple databases. The primary outcome was detection of any new AF during the monitoring period. We prespecified subgroup analysis of selected (prescreened or cryptogenic) versus unselected patients and according to duration of monitoring. A total of 32 studies were analyzed. The overall detection rate of any AF was 11.5% (95% confidence interval, 8.9%-14.3%), although the timing, duration, method of monitoring, and reporting of diagnostic criteria used for paroxysmal AF varied. Detection rates were higher in selected (13.4%; 95% confidence interval, 9.0%-18.4%) than in unselected patients (6.2%; 95% confidence interval, 4.4%-8.3%). There was substantial heterogeneity even within specified subgroups. Detection of AF was highly variable, and the review was limited by small sample sizes and marked heterogeneity. Further studies are required to inform patient selection, optimal timing, methods, and duration of monitoring for detection of AF/paroxysmal AF.
McGrath, Trevor A; McInnes, Matthew D F; Korevaar, Daniël A; Bossuyt, Patrick M M
2016-10-01
Purpose To determine whether authors of systematic reviews of diagnostic accuracy studies published in imaging journals used recommended methods for meta-analysis, and to evaluate the effect of traditional methods on summary estimates of sensitivity and specificity. Materials and Methods Medline was searched for published systematic reviews that included meta-analysis of test accuracy data limited to imaging journals published from January 2005 to May 2015. Two reviewers independently extracted study data and classified methods for meta-analysis as traditional (univariate fixed- or random-effects pooling or summary receiver operating characteristic curve) or recommended (bivariate model or hierarchic summary receiver operating characteristic curve). Use of methods was analyzed for variation with time, geographical location, subspecialty, and journal. Results from reviews in which study authors used traditional univariate pooling methods were recalculated with a bivariate model. Results Three hundred reviews met the inclusion criteria, and in 118 (39%) of those, authors used recommended meta-analysis methods. No change in the method used was observed with time (r = 0.54, P = .09); however, there was geographic (χ(2) = 15.7, P = .001), subspecialty (χ(2) = 46.7, P < .001), and journal (χ(2) = 27.6, P < .001) heterogeneity. Fifty-one univariate random-effects meta-analyses were reanalyzed with the bivariate model; the average change in the summary estimate was -1.4% (P < .001) for sensitivity and -2.5% (P < .001) for specificity. The average change in width of the confidence interval was 7.7% (P < .001) for sensitivity and 9.9% (P ≤ .001) for specificity. Conclusion Recommended methods for meta-analysis of diagnostic accuracy in imaging journals are used in a minority of reviews; this has not changed significantly with time. Traditional (univariate) methods allow overestimation of diagnostic accuracy and provide narrower confidence intervals than do recommended (bivariate) methods. (©) RSNA, 2016 Online supplemental material is available for this article.
Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete
2015-01-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst≥850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst≥880 nT (greater than the Carrington event), with a wide 95% confidence interval of [490, 1187] nT.
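The general approach (maximum-likelihood log-normal fit plus bootstrap confidence limits on an exceedance rate) can be sketched as follows; the storm catalogue, threshold handling, and rate definition here are simplified assumptions, not a reconstruction of the study's data or results.

# Sketch: maximum-likelihood log-normal fit to storm-maximum -Dst values observed
# over T years, an estimated rate of events exceeding a threshold, and a
# bootstrap confidence interval. The data below are synthetic.
import numpy as np
from scipy import stats

def exceedance_rate_per_century(maxima, years, threshold, n_boot=2000, seed=0):
    maxima = np.asarray(maxima, float)
    rng = np.random.default_rng(seed)

    def rate(sample):
        mu, sigma = np.log(sample).mean(), np.log(sample).std(ddof=0)  # lognormal MLE
        p_exceed = 1.0 - stats.norm.cdf((np.log(threshold) - mu) / sigma)
        return 100.0 * len(sample) / years * p_exceed

    boot = [rate(rng.choice(maxima, size=len(maxima), replace=True)) for _ in range(n_boot)]
    return rate(maxima), np.percentile(boot, [2.5, 97.5])

# usage with a synthetic catalogue of storm maxima (nT) over 56 years
storms = np.random.default_rng(1).lognormal(mean=np.log(120), sigma=0.7, size=400)
r, ci = exceedance_rate_per_century(storms, years=56, threshold=850)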
Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín
2010-01-01
The objective of this study was to develop a methodology for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG), including moisture, volatile matter, fixed carbon and ash content. The sampling procedure of the TG analysis was of particular interest and was conducted with care. The results of the present study were compared to those of a prompt analysis, and a correlation between the mean values and maximum sampling errors of the methods was not observed. In general, low and acceptable levels of uncertainty and error were obtained, demonstrating that the properties evaluated by TG analysis were representative of the overall fuel composition. The accurate determination of the thermal properties of biomass with precise confidence intervals is of particular interest in energetic biomass applications. PMID:20717532
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
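A minimal rejection-sampling sketch of the ABC idea for one common case (median, minimum, and maximum reported, normal data model); the priors, distance measure, and acceptance rule are illustrative choices rather than the authors' algorithm.

# Sketch of Approximate Bayesian Computation (ABC) for recovering a study's mean
# and SD when only the median, minimum and maximum are reported: draw candidate
# (mu, sigma), simulate a sample of the reported size, keep the draws whose
# simulated summaries are closest to the reported ones.
import numpy as np

def abc_mean_sd(median, minimum, maximum, n, n_sim=50000, keep=500, seed=0):
    rng = np.random.default_rng(seed)
    centre, spread = median, (maximum - minimum) / 4.0         # rough prior centres
    mu = rng.normal(centre, spread, n_sim)
    sigma = np.abs(rng.normal(spread, spread, n_sim)) + 1e-9
    dist = np.empty(n_sim)
    target = np.array([median, minimum, maximum])
    scale = np.maximum(np.abs(target), 1e-9)
    for i in range(n_sim):
        sim = rng.normal(mu[i], sigma[i], n)                   # normal data model
        summ = np.array([np.median(sim), sim.min(), sim.max()])
        dist[i] = np.sum(((summ - target) / scale) ** 2)       # scaled distance
    accept = np.argsort(dist)[:keep]                           # closest draws
    return mu[accept].mean(), sigma[accept].mean()

# usage: reported median 12, range 4-25, sample size 40
m_hat, sd_hat = abc_mean_sd(median=12, minimum=4, maximum=25, n=40)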
NASA Astrophysics Data System (ADS)
Makovetskii, A. N.; Tabatchikova, T. I.; Yakovleva, I. L.; Tereshchenko, N. A.; Mirzaev, D. A.
2013-06-01
The decomposition kinetics of austenite that appears in the 13KhFA low-alloyed pipe steel upon heating the samples in an intercritical temperature interval (ICI) and exposure for 5 or 30 min has been studied by the method of high-speed dilatometry. The results of dilatometry are supplemented by the microstructure analysis. Thermokinetic diagrams of the decomposition of the γ phase are represented. The conclusion has been drawn that an increase in the duration of exposure in the intercritical interval leads to a significant increase in the stability of the γ phase.
NASA Astrophysics Data System (ADS)
Cao, Guangxi; Zhang, Minjia; Li, Qingchen
2017-04-01
This study focuses on multifractal detrended cross-correlation analysis of the different volatility intervals of Mainland China, US, and Hong Kong stock markets. A volatility-constrained multifractal detrended cross-correlation analysis (VC-MF-DCCA) method is proposed to study the volatility conductivity of Mainland China, US, and Hong Kong stock markets. Empirical results indicate that fluctuation may be related to important activities in real markets. The Hang Seng Index (HSI) stock market is more influential than the Shanghai Composite Index (SCI) stock market. Furthermore, the SCI stock market is more influential than the Dow Jones Industrial Average stock market. The conductivity between the HSI and SCI stock markets is the strongest. HSI was the most influential market in the large fluctuation interval of 1991 to 2014. The autoregressive fractionally integrated moving average method is used to verify the validity of VC-MF-DCCA. Results show that VC-MF-DCCA is effective.
Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies
Erdoğan, Semra; Gülhan, Orekıcı Temel
2016-01-01
Background/Aim. It is necessary to decide whether newly improved methods are better than the standard or reference test or not. To decide whether the new diagnostic test is better than the gold standard test/imperfect standard test, the differences of the estimated sensitivities/specificities are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be given with confidence intervals. The aim of this study is to evaluate the confidence interval methods developed for the differences between two dependent sensitivity/specificity values in a clinical application. Materials and Methods. In this study, confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Interval, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data used in the diagnostic study by Dickel et al. (2010) have been taken as a sample. Results. The results for the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are given in a table. Conclusion. When choosing among the confidence interval methods, researchers have to consider whether the case to be compared is a single ratio or a difference of dependent binary ratios, the correlation coefficient between the rates in the two dependent ratios, and the sample sizes. PMID:27478491
Pneumatic testing in 45-degree-inclined boreholes in ash-flow tuff near Superior, Arizona
LeCain, G.D.
1995-01-01
Matrix permeability values determined by single-hole pneumatic testing in nonfractured ash-flow tuff ranged from 5.1 to 20.3 × 10⁻¹⁶ m² (meters squared), depending on the gas-injection rate and analysis method used. Results from the single-hole tests showed several significant correlations between permeability and injection rate and between permeability and test order. Fracture permeability values determined by cross-hole pneumatic testing in fractured ash-flow tuff ranged from 0.81 to 3.49 × 10⁻¹⁴ m², depending on injection rate and analysis method used. Results from the cross-hole test monitor intervals showed no significant correlation between permeability and injection rate; however, results from the injection interval showed a significant correlation between injection rate and permeability. Porosity estimates from the cross-hole testing range from 0.8 to 2.0 percent. The maximum temperature change associated with the pneumatic testing was 1.2°C, measured in the injection interval during cross-hole testing. The maximum temperature change in the guard and monitor intervals was 0.1°C. The maximum error introduced into the permeability values due to temperature fluctuations is approximately 4 percent. Data from temperature monitoring in the borehole indicated a positive correlation between the temperature decrease in the injection interval during recovery testing and the gas-injection rate. The thermocouple psychrometers indicated that water vapor was condensing in the boreholes during testing. The psychrometers in the guard and monitor intervals detected the drier injected gas as an increase in the dry bulb reading. The relative humidity in the test intervals was always higher than the upper measurement limit of the psychrometers. Although the installation of the packer system may have altered the water balance of the borehole, the gas-injection testing resulted in minimal or no changes in the borehole relative humidity.
Unplanned pregnancy: does past experience influence the use of a contraceptive method?
Matteson, Kristen A; Peipert, Jeffrey F; Allsworth, Jenifer; Phipps, Maureen G; Redding, Colleen A
2006-01-01
To investigate whether women between the ages of 14 and 25 years with a past unplanned pregnancy were more likely to use a contraceptive method compared with women without a history of unplanned pregnancy. We analyzed baseline data of 424 nonpregnant women between the ages of 14 and 25 years enrolled in a randomized trial to prevent sexually transmitted diseases and unplanned pregnancy (Project PROTECT). Women at high risk for sexually transmitted diseases or unplanned pregnancy were included. Participants completed a demographic, substance use, and reproductive health questionnaire. We compared women with and without a history of unplanned pregnancy using bivariate analysis and log binomial regression. The prevalence of past unplanned pregnancy in this sample was 43%. Women reporting an unplanned pregnancy were older, had less education, and were more likely to be of nonwhite race or ethnicity. History of an unplanned pregnancy was not associated with usage of a contraceptive method (relative risk 1.01, 95% confidence interval 0.87-1.16) in bivariate analysis or when potential confounders were accounted for in the analysis (adjusted relative risk 1.10, 95% confidence interval 0.95-1.28). Several factors were associated with both unplanned pregnancy and overall contraceptive method use in this population. However, a past unplanned pregnancy was not associated with overall contraceptive method usage. Future studies are necessary to investigate the complex relationship between unplanned pregnancy and contraceptive method use. II-2.
Evaluation of Reliability Coefficients for Two-Level Models via Latent Variable Analysis
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
2010-01-01
A latent variable analysis procedure for evaluation of reliability coefficients for 2-level models is outlined. The method provides point and interval estimates of group means' reliability, overall reliability of means, and conditional reliability. In addition, the approach can be used to test simple hypotheses about these parameters. The…
Chládková, Jirina; Havlínová, Zuzana; Chyba, Tomás; Krcmová, Irena; Chládek, Jaroslav
2008-11-01
Current guidelines recommend the single-breath measurement of fractional concentration of exhaled nitric oxide (FE(NO)) at the expiratory flow rate of 50 mL/s as a gold standard. The time profile of exhaled FE(NO) consists of a washout phase followed by a plateau phase with a stable concentration. This study performed measurements of FE(NO) using a chemiluminescence analyzer Ecomedics CLD88sp and an electrochemical monitor NIOX MINO in 82 children and adolescents (44 males) from 4.9 to 18.7 years of age with corticosteroid-treated allergic rhinitis (N = 58) and/or asthma (N = 59). Duration of exhalation was 6 seconds for children less than 12 years of age and 10 seconds for older children. The first aim was to compare the evaluation of FE(NO)-time profiles from Ecomedics by its software in fixed intervals of 7 to 10 seconds (older children) and 2 to 4 seconds (younger children) since the start of exhalation (method A) with the guideline-based analysis of plateau concentrations at variable time intervals (method B). The second aim was to assess the between-analyzer agreement. In children over 12 years of age, the median ratio of FE(NO) concentrations of 1.00 (95% CI: 0.99-1.02) indicated an excellent agreement between the methods A and B. Compared with NIOX MINO, the Ecomedics results were higher by 11% (95% CI: 1-22) (method A) and 14% (95% CI: 4-26) (method B), respectively. In children less than 12 years of age, the FE(NO) concentrations obtained by the method B were 34% (95% CI: 21-48) higher and more reproducible (p < 0.02) compared to the method A. The Ecomedics results of the method A were 11% lower (95% CI: 2-20) than NIOX MINO concentrations while the method B gave 21% higher concentrations (95% CI: 9-35). We conclude that in children less than 12 years of age, the guideline-based analysis of FE(NO)-time profiles from Ecomedics at variable times obtains FE(NO) concentrations that are higher and more reproducible than those from the fixed interval of 2 to 4 seconds and higher than NIOX MINO concentrations obtained during a short exhalation (6 seconds). The Ecomedics FE(NO) concentrations of children more than 12 years of age calculated in the interval of 7 to 10 seconds represent plateau values and agree well with NIOX MINO results obtained during a standard 10-second exhalation.
Henschel, Volkmar; Engel, Jutta; Hölzel, Dieter; Mansmann, Ulrich
2009-02-10
Multivariate analysis of interval-censored event data based on classical likelihood methods is notoriously cumbersome. Likelihood inference for models which additionally include random effects is not available at all. Existing algorithms pose problems for practical users, such as matrix inversion, slow convergence, and no assessment of statistical uncertainty. MCMC procedures combined with imputation are used to implement hierarchical models for interval-censored data within a Bayesian framework. Two examples from clinical practice demonstrate the handling of clustered interval-censored event times as well as multilayer random effects for inter-institutional quality assessment. The software developed is called survBayes and is freely available at CRAN. The proposed software supports the solution of complex analyses in many fields of clinical epidemiology as well as health services research.
Sun, J
1995-09-01
In this paper we discuss the non-parametric estimation of a distribution function based on incomplete data for which the measurement origin of a survival time or the date of enrollment in a study is known only to belong to an interval. The survival time of interest itself is also observed from a truncated distribution and is known only to lie in an interval. To estimate the distribution function, a simple self-consistency algorithm, a generalization of Turnbull's (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) self-consistency algorithm, is proposed. This method is then used to analyze two AIDS cohort studies, for which direct use of the EM algorithm (Dempster, Laird and Rubin, 1977, Journal of the Royal Statistical Society, Series B 39, 1-38), which is computationally complicated, has previously been the usual method of analysis.
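A minimal sketch of the self-consistency (EM-type) iteration for ordinary interval-censored data is shown below; it illustrates the redistribution step that Turnbull-style algorithms are built on, but it does not implement the paper's generalization to doubly censored and truncated data. The observation intervals are hypothetical, and mass is placed on all unique interval endpoints rather than on formally derived Turnbull intervals.

```python
import numpy as np

def self_consistency(intervals, tol=1e-8, max_iter=5000):
    """Self-consistency (EM-type) estimate of the event-time distribution from
    interval-censored observations [L_i, R_i].

    Probability mass is placed on the unique interval endpoints; each iteration
    redistributes every observation's mass over the support points it covers.
    """
    L = np.array([a for a, _ in intervals], float)
    R = np.array([b for _, b in intervals], float)
    support = np.unique(np.concatenate([L, R]))             # candidate mass points
    A = (support >= L[:, None]) & (support <= R[:, None])   # coverage indicators
    p = np.full(len(support), 1.0 / len(support))
    for _ in range(max_iter):
        denom = A @ p                                        # mass covered by obs i
        mu = A * p / denom[:, None]                          # expected mass allocation
        p_new = mu.mean(axis=0)
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return support, p

# Hypothetical interval-censored observations (event known only between visits)
obs = [(0, 3), (2, 5), (4, 7), (1, 4), (6, 9), (3, 6)]
s, p = self_consistency(obs)
print("support:", s)
print("estimated mass:", np.round(p, 3))
print("estimated survival just after each support point:", np.round(1 - np.cumsum(p), 3))
```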
Pagès, Pierre-Benoit; Delpy, Jean-Philippe; Orsini, Bastien; Gossot, Dominique; Baste, Jean-Marc; Thomas, Pascal; Dahan, Marcel; Bernard, Alain
2016-04-01
Video-assisted thoracoscopic surgery (VATS) lobectomy has recently become the recommended approach for stage I non-small cell lung cancer. However, these guidelines are not based on any large randomized control trial. Our study used propensity scores and a sensitivity analysis to compare VATS lobectomy with open thoracotomy. From 2005 to 2012, 24,811 patients (95.1%) were operated on by open thoracotomy and 1,278 (4.9%) by VATS. The end points were 30-day postoperative death, postoperative complications, hospital stay, overall survival, and disease-free survival. Two propensity score analyses were performed, matching and inverse probability of treatment weighting, plus one sensitivity analysis to unmask potential hidden bias. A subgroup analysis was performed to compare "high-risk" with "low-risk" patients. Results are reported by odds ratios or hazard ratios and their 95% confidence intervals. Postoperative death was not significantly reduced by VATS whatever the analysis. Concerning postoperative complications, VATS significantly decreased the occurrence of atelectasis and pneumopathy with both analysis methods, but there were no differences in the occurrence of other postoperative complications. VATS did not provide a benefit for high-risk patients. Depending on the analysis method, the VATS approach decreased the hospital length of stay by 2.4 days (95% confidence interval, -3 to -1.7 days) to 4.68 days (95% confidence interval, -8.5 to 0.9 days). Overall survival and disease-free survival were not influenced by the surgical approach. The sensitivity analysis showed potential biases. The results must be interpreted carefully because of the differences observed according to the propensity score method used. A multicenter randomized controlled trial is necessary to limit the biases. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
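As a rough illustration of one of the two propensity score approaches mentioned (inverse probability of treatment weighting), the sketch below fits a logistic propensity model on simulated confounders and computes a weighted risk difference; the variables and data are hypothetical, and the paper's matching, sensitivity analysis, and survival endpoints are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical cohort: two confounders, a binary treatment (1 = VATS-like),
# and a binary outcome (1 = complication).
n = 2000
age = rng.normal(65, 8, n)
fev1 = rng.normal(80, 15, n)
X = np.column_stack([age, fev1])
p_treat = 1 / (1 + np.exp(-(-6 + 0.05 * age + 0.03 * fev1)))
treat = rng.binomial(1, p_treat)
p_out = 1 / (1 + np.exp(-(-3 + 0.04 * age - 0.01 * fev1 - 0.4 * treat)))
outcome = rng.binomial(1, p_out)

# 1) Propensity scores from a logistic model of treatment on the confounders.
ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]

# 2) Inverse probability of treatment weights, stabilised by the marginal rate.
pt = treat.mean()
w = np.where(treat == 1, pt / ps, (1 - pt) / (1 - ps))

# 3) Weighted outcome rates give a confounding-adjusted risk difference.
rate_1 = np.sum(w * treat * outcome) / np.sum(w * treat)
rate_0 = np.sum(w * (1 - treat) * outcome) / np.sum(w * (1 - treat))
print(f"weighted risk difference (treated - control): {rate_1 - rate_0:.3f}")
```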
Analysis of backward error recovery for concurrent processes with recovery blocks
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1982-01-01
Three different methods of implementing recovery blocks (RBs) are considered: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models were developed for analyzing these three methods under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. The interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance when pseudo recovery points are used were estimated.
Automatic zebrafish heartbeat detection and analysis for zebrafish embryos.
Pylatiuk, Christian; Sanchez, Daniela; Mikut, Ralf; Alshut, Rüdiger; Reischl, Markus; Hirth, Sofia; Rottbauer, Wolfgang; Just, Steffen
2014-08-01
A fully automatic detection and analysis method of heartbeats in videos of nonfixed and nonanesthetized zebrafish embryos is presented. This method reduces the manual workload and time needed for preparation and imaging of the zebrafish embryos, as well as for evaluating heartbeat parameters such as frequency, beat-to-beat intervals, and arrhythmicity. The method is validated by a comparison of the results from automatic and manual detection of the heart rates of wild-type zebrafish embryos 36-120 h postfertilization and of embryonic hearts with bradycardia and pauses in the cardiac contraction.
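The published pipeline itself is not reproduced here; the following is a minimal sketch of the generic idea of turning a heart-region intensity trace from video frames into beat detections and beat-to-beat intervals, using a synthetic signal and assumed frame rate and peak-detection settings.

```python
import numpy as np
from scipy.signal import find_peaks

fps = 30.0                      # assumed video frame rate
t = np.arange(0, 20, 1 / fps)   # 20 s of frames

# Stand-in for the mean pixel intensity over the heart region of each frame:
# a ~2.5 Hz (150 bpm) oscillation, typical of zebrafish embryos, plus noise.
signal = np.sin(2 * np.pi * 2.5 * t) + 0.3 * np.random.default_rng(2).standard_normal(t.size)

# Beats = prominent peaks separated by at least 0.2 s (i.e. at most ~300 bpm).
peaks, _ = find_peaks(signal, distance=int(0.2 * fps), prominence=0.5)

beat_times = peaks / fps
ibi = np.diff(beat_times)                       # beat-to-beat intervals (s)
print(f"heart rate ~ {60 / ibi.mean():.0f} bpm")
print(f"inter-beat interval CV (a crude arrhythmicity index): {ibi.std() / ibi.mean():.2f}")
```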
Sysa-Shah, Polina; Sørensen, Lars L; Abraham, M Roselle; Gabrielson, Kathleen L
2015-01-01
Electrocardiography is an important method for evaluation and risk stratification of patients with cardiac hypertrophy. We hypothesized that the recently developed transgenic mouse model of cardiac hypertrophy (ErbB2tg) will display distinct ECG features, enabling WT (wild type) mice to be distinguished from transgenic mice without using conventional PCR genotyping. We evaluated more than 2000 mice and developed specific criteria for genotype determination by using cageside ECG, during which unanesthetized mice were manually restrained for less than 1 min. Compared with those from WT counterparts, the ECG recordings of ErbB2tg mice were characterized by higher P- and R-wave amplitudes, broader QRS complexes, inverted T waves, and ST interval depression. Pearson's correlation matrix analysis of combined WT and ErbB2tg data revealed significant correlation between heart weight and the ECG parameters of QT interval (corrected for heart rate), QRS interval, ST height, R amplitude, P amplitude, and PR interval. In addition, the left ventricular posterior wall thickness as determined by echocardiography correlated with ECG-determined ST height, R amplitude, QRS interval; echocardiographic left ventricular mass correlated with ECG-determined ST height and PR interval. In summary, we have determined phenotypic ECG criteria to differentiate ErbB2tg from WT genotypes in 98.8% of mice. This inexpensive and time-efficient ECG-based phenotypic method might be applied to differentiate between genotypes in other rodent models of cardiac hypertrophy. Furthermore, with appropriate modifications, this method might be translated for use in other species. PMID:26310459
Relaxation estimation of RMSD in molecular dynamics immunosimulations.
Schreiner, Wolfgang; Karch, Rudolf; Knapp, Bernhard; Ilieva, Nevena
2012-01-01
Molecular dynamics simulations have to be sufficiently long to draw reliable conclusions. However, no method exists to prove that a simulation has converged. We suggest the method of "lagged RMSD-analysis" as a tool to judge if an MD simulation has not yet run long enough. The analysis is based on RMSD values between pairs of configurations separated by variable time intervals Δt. Unless RMSD(Δt) has reached a stationary shape, the simulation has not yet converged.
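A minimal sketch of the lagged-RMSD idea is given below for a synthetic trajectory array; a real analysis would operate on superimposed MD coordinates (the rotational and translational fitting step is omitted here), and the trajectory and lags are illustrative.

```python
import numpy as np

def lagged_rmsd(traj, lags):
    """Mean RMSD between configuration pairs separated by each lag Δt.

    traj has shape (n_frames, n_atoms, 3); frames are assumed to be already
    superimposed (no rotational/translational fitting is done here).
    """
    out = []
    for lag in lags:
        d = traj[lag:] - traj[:-lag]                       # per-atom displacement
        rmsd = np.sqrt((d ** 2).sum(axis=2).mean(axis=1))  # RMSD per frame pair
        out.append(rmsd.mean())
    return np.array(out)

# Synthetic "trajectory": a random walk that has not yet equilibrated.
rng = np.random.default_rng(3)
n_frames, n_atoms = 2000, 50
traj = np.cumsum(0.01 * rng.standard_normal((n_frames, n_atoms, 3)), axis=0)

lags = [1, 5, 10, 50, 100, 500, 1000]
for lag, r in zip(lags, lagged_rmsd(traj, lags)):
    print(f"Δt = {lag:5d} frames   <RMSD> = {r:.3f}")
# If <RMSD>(Δt) keeps rising with Δt instead of flattening, the run is too short.
```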
Wang, Hongrui; Wang, Cheng; Wang, Ying; ...
2017-04-05
This paper presents a Bayesian approach using the Metropolis-Hastings Markov chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of the daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a relatively narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. As a result, the Bayesian MCMC method may be more favorable for uncertainty analysis and risk management.
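The hydrological model and data are not available here, so the following is only a generic random-walk Metropolis-Hastings sketch for a one-parameter posterior, with the sampled chain used to report a credible interval; the lognormal data model, prior, step size, and burn-in are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical "observations": daily flows assumed lognormal with unknown log-mean mu.
data = rng.lognormal(mean=2.0, sigma=0.5, size=200)

def log_posterior(mu, sigma=0.5):
    # log-likelihood of lognormal data plus a wide normal prior on mu
    loglik = -np.sum((np.log(data) - mu) ** 2) / (2 * sigma ** 2)
    logprior = -mu ** 2 / (2 * 10.0 ** 2)
    return loglik + logprior

# Random-walk Metropolis-Hastings
n_iter, step = 20000, 0.05
chain = np.empty(n_iter)
mu, lp = 0.0, log_posterior(0.0)
for i in range(n_iter):
    prop = mu + step * rng.standard_normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept with probability min(1, ratio)
        mu, lp = prop, lp_prop
    chain[i] = mu

post = chain[5000:]                             # discard burn-in
lo, hi = np.percentile(post, [2.5, 97.5])
print(f"posterior mean of mu: {post.mean():.3f}, 95% credible interval: ({lo:.3f}, {hi:.3f})")
```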
A computer program for uncertainty analysis integrating regression and Bayesian methods
Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary
2014-01-01
This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
Baxter, Suzanne Domel; Hitchcock, David B; Guinn, Caroline H; Royer, Julie A; Wilson, Dawn K; Pate, Russell R; McIver, Kerry L; Dowda, Marsha
2013-01-01
Objective Investigate differences in dietary recall accuracy by interview content (diet-only; diet-and-physical-activity), retention interval (same-day; previous-day), and grade (3rd; 5th). Methods Thirty-two children observed eating school-provided meals and interviewed once each; interview content and retention interval randomly assigned. Multivariate analysis of variance on rates for omissions (foods observed but unreported) and intrusions (foods reported but unobserved); independent variables—interview content, retention interval, grade. Results Accuracy differed by retention interval (P = .05; better for same-day [omission rate, intrusion rate: 28%, 20%] than previous-day [54%, 45%]) but not interview content (P > .48; diet-only: 41%, 33%; diet-and-physical-activity: 41%, 33%) or grade (P > .27; 3rd: 48%, 42%; 5th: 34%, 24%). Conclusions and Implications Although the small sample limits firm conclusions, results provide evidence-based direction to enhance accuracy; specifically, to shorten the retention interval. Larger validation studies need to investigate the combined effect of interview content, retention interval, and grade on accuracy. PMID:23562487
Lott, B.; Escande, L.; Larsson, S.; ...
2012-07-19
Here, we present a method enabling the creation of constant-uncertainty/constant-significance light curves with the data of the Fermi Large Area Telescope (LAT). The adaptive-binning method enables more information to be encapsulated within the light curve than with the fixed-binning method. Although primarily developed for blazar studies, it can be applied to any source. Furthermore, this method allows the starting and ending times of each interval to be calculated in a simple and quick way during a first step. The reported mean flux and spectral index (assuming the spectrum is a power-law distribution) in each interval are calculated via the standard LAT analysis during a second step. The absence of major caveats associated with this method has been established through Monte Carlo simulations. We present the performance of this method in determining duty cycles as well as power-density spectra relative to the traditional fixed-binning method.
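The LAT likelihood analysis itself is not reproduced; the sketch below illustrates the generic constant-relative-uncertainty idea for a counting signal: each time bin is extended until the Poisson relative uncertainty sqrt(N)/N falls below a target, so bright intervals receive fine bins and faint intervals coarse ones. The event list and target uncertainty are invented.

```python
import numpy as np

def adaptive_bins(event_times, rel_unc=0.15, t_start=None, t_stop=None):
    """Return bin edges such that each bin holds enough events for a Poisson
    relative uncertainty sqrt(N)/N below `rel_unc` (i.e. N >= 1/rel_unc**2)."""
    times = np.sort(event_times)
    t_start = times[0] if t_start is None else t_start
    t_stop = times[-1] if t_stop is None else t_stop
    n_min = int(np.ceil(1.0 / rel_unc ** 2))
    edges = [t_start]
    count = 0
    for t in times:
        count += 1
        if count >= n_min:
            edges.append(t)
            count = 0
    edges[-1] = t_stop            # close the last (possibly underfilled) bin
    return np.array(edges)

# Hypothetical light curve: a faint source with a flare around t = 50.
rng = np.random.default_rng(5)
quiet = rng.uniform(0, 100, 400)
flare = rng.normal(50, 2, 600)
events = np.concatenate([quiet, flare])

edges = adaptive_bins(events, rel_unc=0.15)
widths = np.diff(edges)
print(f"{len(widths)} bins; width range {widths.min():.2f} to {widths.max():.2f}"
      " (narrow during the flare, wide in quiescence)")
```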
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independence assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.
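The paper's valid joint sampling distribution for incomplete tables is not re-derived here; the sketch below only illustrates the bootstrap percentile mechanics on a fully observed 2x2 table, resampling the table from a multinomial and recomputing an odds ratio each time. The counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical fully observed 2x2 table of counts [[a, b], [c, d]].
table = np.array([[40, 60],
                  [25, 75]])
n = table.sum()
probs = (table / n).ravel()

def odds_ratio(t):
    a, b, c, d = t.ravel() + 0.5          # 0.5 continuity correction
    return (a * d) / (b * c)

# Multinomial resampling of the whole table, recomputing the statistic each time.
boot = np.empty(5000)
for i in range(boot.size):
    resampled = rng.multinomial(n, probs).reshape(2, 2)
    boot[i] = odds_ratio(resampled)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"odds ratio = {odds_ratio(table):.2f}, 95% bootstrap percentile CI = ({lo:.2f}, {hi:.2f})")
```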
NASA Astrophysics Data System (ADS)
Gorshkov, A. M.; Kudryashova, L. K.; Lee-Van-Khe, O. S.
2016-09-01
The article presents the results of studying petrophysical rock properties of the Bazhenov Formation of the South-Eastern part of Kaymysovsky Vault with the Gas Research Institute (GRI) method. The authors have constructed dependence charts for bulk and grain density, open porosity and matrix permeability vs. depth. The results of studying petrophysical properties with the GRI method and core description have allowed dividing the entire section into three intervals, each of which is characterized by different formation conditions of the Bazhenov Formation rocks. The authors have determined a correlation between the compensated neutron log and the rock density vs. depth chart on the basis of complex well logging and petrophysical section analysis. They have determined a promising interval for producing hydrocarbons from the Bazhenov Formation in the well under study. They have also determined the typical behavior of the compensated neutron and SP logs for this interval. These studies will allow re-interpreting available well logs in order to determine the most promising interval to be involved in Bazhenov Formation development in Tomsk Region.
Differential Decomposition Among Pig, Rabbit, and Human Remains.
Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe
2018-03-30
While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.
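A minimal sketch of the accumulated-degree-day bookkeeping that TBS-based postmortem interval estimates rely on is shown below; the base temperature of 0 °C and the daily temperature record are assumptions.

```python
import numpy as np

def accumulated_degree_days(daily_mean_temps_c, base_temp=0.0):
    """Cumulative ADD: sum of daily mean temperatures above a base temperature."""
    temps = np.asarray(daily_mean_temps_c, float)
    return np.cumsum(np.clip(temps - base_temp, 0.0, None))

# Hypothetical 10-day record of daily mean temperatures (deg C).
temps = [18.2, 20.1, 22.5, 19.8, 15.0, 12.3, 14.7, 21.0, 23.4, 25.1]
add = accumulated_degree_days(temps)
print("ADD at each postmortem day:", np.round(add, 1))
# Comparing the ADD predicted from a TBS regression with a weather-station ADD
# series like this one yields the estimated postmortem interval in days.
```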
Myers, Adrianne L; Williams, Regan F; Giles, Kim; Waters, Teresa M; Eubanks, James W; Hixson, S Douglas; Huang, Eunice Y; Langham, Max R; Blakely, Martin L
2012-04-01
The methods of surgical care for children with perforated appendicitis are controversial. Some surgeons prefer early appendectomy; others prefer initial nonoperative management followed by interval appendectomy. Determining which of these two therapies is most cost-effective was the goal of this study. We conducted a prospective, randomized trial in children with a preoperative diagnosis of perforated appendicitis. Patients were randomized to early or interval appendectomy. Overall hospital costs were extracted from the hospital's internal cost accounting system and the two treatment groups were compared using an intention-to-treat analysis. Nonparametric data were reported as median ± standard deviation (or range) and compared using a Wilcoxon rank sum test. One hundred thirty-one patients were randomized to either early (n = 64) or interval (n = 67) appendectomy. Hospital charges and costs were significantly lower in patients randomized to early appendectomy. Total median hospital costs were $17,450 (range $7,020 to $55,993) for patients treated with early appendectomy vs $22,518 (range $4,722 to $135,338) for those in the interval appendectomy group. Median hospital costs more than doubled in patients who experienced an adverse event ($15,245 vs $35,391, p < 0.0001). Unplanned readmissions also increased costs significantly and were more frequent in patients randomized to interval appendectomy. In a prospective randomized trial, hospital charges and costs were significantly lower for early appendectomy when compared with interval appendectomy. The increased costs were related primarily to the significant increase in adverse events, including unplanned readmissions, seen in the interval appendectomy group. Copyright © 2012. Published by Elsevier Inc.
Temporal Comparisons of Internet Topology
2014-06-01
Abbreviations: CAIDA, Cooperative Association for Internet Data Analysis; CDN, Content Delivery Network; CI, Confidence Interval; DoS, denial of service; GMT, Greenwich Mean Time. ... the CAIDA data. Our methods include analysis of graph-theoretical measures as well as complex network and statistical measures that will quantify the ... tool that probes the Internet for topology analysis and performance [26]. Scamper uses network diagnostic tools, such as traceroute and ping, to probe ...
NASA Astrophysics Data System (ADS)
Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji
2004-06-01
Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of the brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate the effect of the image reconstruction algorithm and the interval of measurement points for topographic imaging on the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals of the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and spatial sensitivity profile for source-detector pairs. The reconstruction method has advantages over the mapping method which improve the results of one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping of spatial sensitivity profiles indicates that the reconstruction method may be effective to improve the spatial resolution of a two-dimensional reconstruction of topographic image obtained with larger interval of measurement points. Near-infrared topography with the reconstruction method potentially obtains an accurate distribution of absorption change in the brain even if the size of absorption change is less than 10 mm.
Data series embedding and scale invariant statistics.
Michieli, I; Medved, B; Ristov, S
2010-06-01
Data sequences acquired from bio-systems such as human gait data, heart rate interbeat data, or DNA sequences exhibit complex dynamics that is frequently described by a long-memory or power-law decay of the autocorrelation function. One way of characterizing that dynamics is through scale invariant statistics or "fractal-like" behavior. For quantifying scale invariant parameters of physiological signals, several methods have been proposed. Among them, the most common are detrended fluctuation analysis, sample mean variance analyses, power spectral density analysis, R/S analysis, and, more recently in the realm of the multifractal approach, wavelet analysis. In this paper it is demonstrated that embedding the time series data in a high-dimensional pseudo-phase space reveals scale invariant statistics in a simple fashion. The procedure is applied to different stride interval data sets from human gait measurement time series (Physio-Bank data library). Results show that the introduced mapping adequately separates long-memory from random behavior. Smaller gait data sets were analyzed and scale-free trends for limited scale intervals were successfully detected. The method was verified on artificially produced time series with known scaling behavior and with varying content of noise. The possibility for the method to falsely detect long-range dependence in artificially generated short-range dependence series was investigated. (c) 2009 Elsevier B.V. All rights reserved.
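The sketch below shows the standard time-delay construction of a pseudo-phase-space embedding for a stride-interval-like series; the delay, embedding dimension, and synthetic series are assumptions, and the specific scale-invariance statistic used by the authors is not implemented.

```python
import numpy as np

def delay_embed(series, dim=5, delay=1):
    """Time-delay embedding: row k is (x_k, x_{k+delay}, ..., x_{k+(dim-1)*delay})."""
    x = np.asarray(series, float)
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

# Synthetic long-memory-like stride-interval series: cumulative noise plus jitter.
rng = np.random.default_rng(7)
strides = 1.1 + 0.002 * np.cumsum(rng.standard_normal(5000)) + 0.01 * rng.standard_normal(5000)

emb = delay_embed(strides, dim=8, delay=2)
print("embedded points:", emb.shape)         # (n_points, 8)
# Scale-dependent statistics (e.g. dispersion of embedded points over window sizes)
# can then be computed on `emb` to probe scale-invariant behaviour.
```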
Empirical likelihood-based confidence intervals for mean medical cost with censored data.
Jeyarajah, Jenny; Qin, Gengsheng
2017-11-10
In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
Mattos, A Z; Mattos, A A
Many different non-invasive methods have been studied with the purpose of staging liver fibrosis. The objective of this study was to verify whether transient elastography is superior to the aspartate aminotransferase to platelet ratio index for staging fibrosis in patients with chronic hepatitis C. A systematic review with meta-analysis of studies which evaluated both non-invasive tests and used biopsy as the reference standard was performed. A random-effects model was used, anticipating heterogeneity among studies. Diagnostic odds ratio was the main effect measure, and summary receiver operating characteristic curves were created. A sensitivity analysis was planned, in which the meta-analysis would be repeated excluding one study at a time. Eight studies were included in the meta-analysis. Regarding the prediction of significant fibrosis, transient elastography and aspartate aminotransferase to platelet ratio index had diagnostic odds ratios of 11.70 (95% confidence interval = 7.13-19.21) and 8.56 (95% confidence interval = 4.90-14.94), respectively. Concerning the prediction of cirrhosis, transient elastography and aspartate aminotransferase to platelet ratio index had diagnostic odds ratios of 66.49 (95% confidence interval = 23.71-186.48) and 7.47 (95% confidence interval = 4.88-11.43), respectively. In conclusion, there was no evidence of significant superiority of transient elastography over aspartate aminotransferase to platelet ratio index regarding the prediction of significant fibrosis, but the former proved to be better than the latter concerning the prediction of cirrhosis.
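As a rough illustration of the pooling step, the sketch below computes study-level log diagnostic odds ratios and combines them with a DerSimonian-Laird random-effects model; the 2x2 counts are invented and the summary ROC analysis is not reproduced.

```python
import math

# Hypothetical per-study 2x2 counts: (TP, FP, FN, TN)
studies = [(45, 10, 5, 90), (30, 8, 10, 82), (60, 20, 15, 105), (25, 5, 5, 65)]

log_dor, var = [], []
for tp, fp, fn, tn in studies:
    a, b, c, d = (x + 0.5 for x in (tp, fp, fn, tn))   # continuity correction
    log_dor.append(math.log((a * d) / (b * c)))
    var.append(1 / a + 1 / b + 1 / c + 1 / d)

# Fixed-effect (inverse variance) step, then DerSimonian-Laird tau^2.
w = [1 / v for v in var]
fixed = sum(wi * yi for wi, yi in zip(w, log_dor)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_dor))
df = len(studies) - 1
c_term = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c_term)

# Random-effects pooling with the extra between-study variance.
w_re = [1 / (v + tau2) for v in var]
pooled = sum(wi * yi for wi, yi in zip(w_re, log_dor)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled DOR = {math.exp(pooled):.1f} "
      f"(95% CI {math.exp(lo):.1f} to {math.exp(hi):.1f}), tau^2 = {tau2:.3f}")
```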
Reinders, Jörn; Sonntag, Robert; Kretzer, Jan Philippe
2014-11-01
Polyethylene wear (PE) is known to be a limiting factor in total joint replacements. However, a standardized wear test (e.g. ISO standard) can only replicate the complex in vivo loading condition in a simplified form. In this study, two different parameters were analyzed: (a) Bovine serum, as a substitute for synovial fluid, is typically replaced every 500,000 cycles. However, a continuous regeneration takes place in vivo. How does serum-replacement interval affect the wear rate of total knee replacements? (b) Patients with an artificial joint show reduced gait frequencies compared to standardized testing. What is the influence of a reduced frequency? Three knee wear tests were run: (a) reference test (ISO), (b) testing with a shortened lubricant replacement interval, (c) testing with reduced frequency. The wear behavior was determined based on gravimetric measurements and wear particle analysis. The results showed that the reduced test frequency only had a small effect on wear behavior. Testing with 1 Hz frequency is therefore a valid method for wear testing. However, testing with a shortened replacement interval nearly doubled the wear rate. Wear particle analysis revealed only small differences in wear particle size between the different tests. Wear particles were not linearly released within one replacement interval. The ISO standard should be revised to address the marked effects of lubricant replacement interval on wear rate.
Iyoke, Ca; Ezugwu, Fo; Lawani, Ol; Ugwu, Go; Ajah, Lo; Mba, Sg
2014-01-01
To describe the methods preferred for contraception, evaluate preferences and adherence to modern contraceptive methods, and determine the factors associated with contraceptive choices among tertiary students in South East Nigeria. A questionnaire-based cross-sectional study of sexual habits, knowledge of contraceptive methods, and patterns of contraceptive choices among a pooled sample of unmarried students from the three largest tertiary educational institutions in Enugu city, Nigeria was done. Statistical analysis involved descriptive and inferential statistics at the 95% level of confidence. A total of 313 unmarried students were studied (194 males; 119 females). Their mean age was 22.5±5.1 years. Over 98% of males and 85% of females made their contraceptive choices based on information from peers. Preferences for contraceptive methods among female students were 49.2% for traditional methods of contraception, 28% for modern methods, 10% for nonpharmacological agents, and 8% for off-label drugs. Adherence to modern contraceptives among female students was 35%. Among male students, the preference for the male condom was 45.2% and the adherence to condom use was 21.7%. Multivariate analysis showed that receiving information from health personnel/media/workshops (odds ratio 9.54, 95% confidence interval 3.5-26.3), health science-related course of study (odds ratio 3.5, 95% confidence interval 1.3-9.6), and previous sexual exposure prior to university admission (odds ratio 3.48, 95% confidence interval 1.5-8.0) all increased the likelihood of adherence to modern contraceptive methods. An overwhelming reliance on peers for contraceptive information in the context of poor knowledge of modern methods of contraception among young people could have contributed to the low preferences and adherence to modern contraceptive methods among students in tertiary educational institutions. Programs to reduce risky sexual behavior among these students may need to focus on increasing the content and adequacy of contraceptive information held by people through regular health worker-led, on-campus workshops.
Classical Item Analysis Using Latent Variable Modeling: A Note on a Direct Evaluation Procedure
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2011-01-01
A directly applicable latent variable modeling procedure for classical item analysis is outlined. The method allows one to point and interval estimate item difficulty, item correlations, and item-total correlations for composites consisting of categorical items. The approach is readily employed in empirical research and as a by-product permits…
Cores Of Recurrent Events (CORE) | Informatics Technology for Cancer Research (ITCR)
CORE is a statistically supported computational method for finding recurrently targeted regions in massive collections of genomic intervals, such as those arising from DNA copy number analysis of single tumor cells or bulk tumor tissues.
Survival Analysis of Patients with Interval Cancer Undergoing Gastric Cancer Screening by Endoscopy
Hamashima, Chisato; Shabana, Michiko; Okamoto, Mikizo; Osaki, Yoneatsu; Kishimoto, Takuji
2015-01-01
Aims Interval cancer is a key factor that influences the effectiveness of a cancer screening program. To evaluate the impact of interval cancer on the effectiveness of endoscopic screening, the survival rates of patients with interval cancer were analyzed. Methods We performed gastric cancer-specific and all-cause survival analyses of patients with screen-detected cancer and patients with interval cancer in the endoscopic screening group and radiographic screening group using the Kaplan-Meier method. Since the screening interval was 1 year, interval cancer was defined as gastric cancer detected within 1 year after a negative result. A Cox proportional hazards model was used to investigate the risk factors associated with gastric cancer-specific and all-cause death. Results A total of 1,493 gastric cancer patients (endoscopic screening group: n = 347; radiographic screening group: n = 166; outpatient group: n = 980) were identified from the Tottori Cancer Registry from 2001 to 2008. The gastric cancer-specific survival rates were higher in the endoscopic screening group than in the radiographic screening group and the outpatient group. In the endoscopic screening group, the gastric cancer-specific survival rates of the patients with screen-detected cancer and the patients with interval cancer were nearly equal (P = 0.869). In the radiographic screening group, the gastric cancer-specific survival rate of the patients with screen-detected cancer was higher than that of the patients with interval cancer (P = 0.009). For gastric cancer-specific death, the hazard ratio of interval cancer in the endoscopic screening group was 0.216 (95% CI: 0.054-0.868) compared with the outpatient group. Conclusion The survival rate and the risk of gastric cancer death among the patients with screen-detected cancer and patients with interval cancer were not significantly different in the annual endoscopic screening. These results suggest the potential of endoscopic screening in reducing mortality from gastric cancer. PMID:26023768
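A minimal Kaplan-Meier sketch on invented follow-up data is given below to illustrate the estimator behind the reported survival comparisons; the registry data and the Cox proportional hazards model are not reproduced.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate.  `event` is 1 for death, 0 for censoring."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv, s = [], 1.0
    for t in np.unique(time[event == 1]):       # distinct event times
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv.append((t, s))
    return surv

# Hypothetical follow-up (years) for a screen-detected group: 1 = cancer death.
times  = [0.5, 1.2, 2.0, 2.5, 3.1, 4.0, 4.5, 5.0, 5.0, 5.0]
events = [1,   0,   1,   0,   1,   0,   0,   0,   0,   1  ]
for t, s in kaplan_meier(times, events):
    print(f"S({t:.1f} y) = {s:.3f}")
```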
A contour for the entanglement entropies in harmonic lattices
NASA Astrophysics Data System (ADS)
Coser, Andrea; De Nobili, Cristiano; Tonni, Erik
2017-08-01
We construct a contour function for the entanglement entropies in generic harmonic lattices. In one spatial dimension, numerical analyses are performed by considering harmonic chains with either periodic or Dirichlet boundary conditions. In the massless regime and for some configurations where the subsystem is a single interval, the numerical results for the contour function are compared to the inverse of the local weight function which multiplies the energy-momentum tensor in the corresponding entanglement Hamiltonian, found through conformal field theory methods, and a good agreement is observed. A numerical analysis of the contour function for the entanglement entropy is performed also in a massless harmonic chain for a subsystem made by two disjoint intervals.
O'Gorman, Thomas W
2018-05-01
In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.
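The sketch below shows the Robbins-Monro stochastic approximation itself on a simpler root-finding problem (locating where a noisy binary response crosses a target probability); in the confidence-interval setting the noisy observation would instead be whether a significance test at the candidate limit rejects, but that test is not implemented here and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def noisy_response(x):
    """Bernoulli observation whose success probability rises smoothly with x.
    The (unknown to the algorithm) probability is a logistic curve centred at 2.0."""
    p = 1 / (1 + np.exp(-(x - 2.0)))
    return rng.binomial(1, p)

# Robbins-Monro stochastic approximation: find x* with E[Y(x*)] = target.
target, x, a = 0.5, 0.0, 4.0
for n in range(1, 20001):
    y = noisy_response(x)
    x = x + (a / n) * (target - y)      # step sizes a/n satisfy the RM conditions

print(f"Robbins-Monro estimate of the crossing point: {x:.3f} (true value 2.0)")
```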
Faes, L; Porta, A; Cucino, R; Cerutti, S; Antolini, R; Nollo, G
2004-06-01
Although the concept of transfer function is intrinsically related to an input-output relationship, the traditional and widely used estimation method merges both feedback and feedforward interactions between the two analyzed signals. This limitation may endanger the reliability of transfer function analysis in biological systems characterized by closed loop interactions. In this study, a method for estimating the transfer function between closed loop interacting signals was proposed and validated in the field of cardiovascular and cardiorespiratory variability. The two analyzed signals x and y were described by a bivariate autoregressive model, and the causal transfer function from x to y was estimated after imposing causality by setting to zero the model coefficients representative of the reverse effects from y to x. The method was tested in simulations reproducing linear open and closed loop interactions, showing a better adherence of the causal transfer function to the theoretical curves with respect to the traditional approach in presence of non-negligible reverse effects. It was then applied in ten healthy young subjects to characterize the transfer functions from respiration to heart period (RR interval) and to systolic arterial pressure (SAP), and from SAP to RR interval. In the first two cases, the causal and non-causal transfer function estimates were comparable, indicating that respiration, acting as exogenous signal, sets an open loop relationship upon SAP and RR interval. On the contrary, causal and traditional transfer functions from SAP to RR were significantly different, suggesting the presence of a considerable influence on the opposite causal direction. Thus, the proposed causal approach seems to be appropriate for the estimation of parameters, like the gain and the phase lag from SAP to RR interval, which have a large clinical and physiological relevance.
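A rough sketch of the two ingredients is given below: a least-squares fit of a bivariate AR model and evaluation of the open-loop transfer function from x to y, H_yx(f) = A_yx(f)/(1 - A_yy(f)), which uses only the y-equation coefficients and is therefore equivalent to zeroing the reverse y-to-x path. The model order and simulated signals are assumptions, not the cardiovascular data of the study.

```python
import numpy as np

def fit_var(x, y, p):
    """Least-squares fit of a bivariate AR(p) model for z_t = [x_t, y_t]."""
    z = np.column_stack([x, y])
    n = len(z)
    # regressor matrix: [z_{t-1}, ..., z_{t-p}] for t = p..n-1
    X = np.hstack([z[p - k - 1:n - k - 1] for k in range(p)])
    Y = z[p:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)    # shape (2p, 2)
    A = coef.T.reshape(2, p, 2).transpose(1, 0, 2)  # A[k] is the 2x2 lag-(k+1) matrix
    return A

def causal_tf_x_to_y(A, freqs):
    """Causal transfer function H_yx(f) = A_yx(f) / (1 - A_yy(f)); zeroing the
    reverse y->x coefficients leaves the y equation, used here, unchanged."""
    k = np.arange(1, A.shape[0] + 1)
    H = []
    for f in freqs:
        e = np.exp(-2j * np.pi * f * k)
        a_yx = np.sum(A[:, 1, 0] * e)
        a_yy = np.sum(A[:, 1, 1] * e)
        H.append(a_yx / (1.0 - a_yy))
    return np.array(H)

# Simulated open-loop pair: y is driven by lagged x plus its own dynamics.
rng = np.random.default_rng(9)
n = 4000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.standard_normal()

A = fit_var(x, y, p=4)
freqs = np.linspace(0.01, 0.5, 5)          # cycles per sample
for f, h in zip(freqs, causal_tf_x_to_y(A, freqs)):
    print(f"f = {f:.2f}  |H_yx| = {abs(h):.2f}  phase = {np.angle(h):+.2f} rad")
```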
Mohebbi, Maryam; Ghassemian, Hassan
2011-08-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia and increases the risk of stroke. Predicting the onset of paroxysmal AF (PAF), based on noninvasive techniques, is clinically important and can be invaluable in order to avoid useless therapeutic intervention and to minimize risks for the patients. In this paper, we propose an effective PAF predictor which is based on the analysis of the RR-interval signal. This method consists of three steps: preprocessing, feature extraction and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the RR-interval signal is extracted. In the next step, the recurrence plot (RP) of the RR-interval signal is obtained and five statistically significant features are extracted to characterize the basic patterns of the RP. These features consist of the recurrence rate, length of longest diagonal segments (L(max )), average length of the diagonal lines (L(mean)), entropy, and trapping time. Recurrence quantification analysis can reveal subtle aspects of dynamics not easily appreciated by other methods and exhibits characteristic patterns which are caused by the typical dynamical behavior. In the final step, a support vector machine (SVM)-based classifier is used for PAF prediction. The performance of the proposed method in prediction of PAF episodes was evaluated using the Atrial Fibrillation Prediction Database (AFPDB) which consists of both 30 min ECG recordings that end just prior to the onset of PAF and segments at least 45 min distant from any PAF events. The obtained sensitivity, specificity, positive predictivity and negative predictivity were 97%, 100%, 100%, and 96%, respectively. The proposed methodology presents better results than other existing approaches.
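A minimal sketch of two of the quoted recurrence-plot descriptors (recurrence rate and the longest diagonal line L_max) computed from an embedded RR-interval series is shown below; the embedding parameters, threshold radius, and RR series are assumptions, and the SVM classification stage is not included.

```python
import numpy as np

def embed(x, dim, delay):
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

def recurrence_plot(points, radius):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    return (d <= radius).astype(int)

def longest_diagonal(R):
    """Length of the longest diagonal line of 1s off the main diagonal (L_max)."""
    n, best = len(R), 0
    for off in range(1, n):
        diag = np.diagonal(R, offset=off)
        run = 0
        for v in diag:
            run = run + 1 if v else 0
            best = max(best, run)
    return best

# Hypothetical RR-interval series (seconds) with mild variability.
rng = np.random.default_rng(10)
rr = 0.8 + 0.05 * np.sin(np.arange(300) / 10.0) + 0.02 * rng.standard_normal(300)

pts = embed(rr, dim=3, delay=2)
R = recurrence_plot(pts, radius=0.03)
rr_rate = (R.sum() - len(R)) / (len(R) ** 2 - len(R))   # exclude the main diagonal
print(f"recurrence rate = {rr_rate:.3f}, L_max = {longest_diagonal(R)}")
```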
Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Bucknor, Matthew; Brunett, Acacia
2015-04-26
The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact in statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
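The sketch below illustrates the load-versus-capacity comparison in its simplest Monte Carlo form and attaches a classical Clopper-Pearson confidence interval to the estimated failure probability, showing how the interval tightens as the number of simulations grows; the load and capacity distributions are invented, not those of a RISMC study.

```python
import numpy as np
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

rng = np.random.default_rng(11)
for n_sim in (100, 1000, 10000, 100000):
    # Hypothetical load (e.g. peak temperature) and capacity (failure limit).
    load = rng.normal(1100.0, 60.0, n_sim)
    capacity = rng.normal(1350.0, 40.0, n_sim)
    k = int(np.sum(load >= capacity))
    p_hat = k / n_sim
    lo, hi = clopper_pearson(k, n_sim)
    print(f"n = {n_sim:6d}  P_fail ~ {p_hat:.4f}  95% CI = ({lo:.4f}, {hi:.4f})")
```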
Unified synchronization criteria in an array of coupled neural networks with hybrid impulses.
Wang, Nan; Li, Xuechen; Lu, Jianquan; Alsaadi, Fuad E
2018-05-01
This paper investigates the problem of globally exponential synchronization of coupled neural networks with hybrid impulses. Two new concepts on average impulsive interval and average impulsive gain are proposed to deal with the difficulties coming from hybrid impulses. By employing the Lyapunov method combined with some mathematical analysis, some efficient unified criteria are obtained to guarantee the globally exponential synchronization of impulsive networks. Our method and criteria are proved to be effective for impulsively coupled neural networks simultaneously with synchronizing impulses and desynchronizing impulses, and we do not need to discuss these two kinds of impulses separately. Moreover, by using our average impulsive interval method, we can obtain an interesting and valuable result for the case of average impulsive interval T_a = ∞. For some sparse impulsive sequences with T_a = ∞, the impulses can happen an infinite number of times, but they do not have essential influence on the synchronization property of networks. Finally, numerical examples including scale-free networks are exploited to illustrate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Rapid and Simultaneous Prediction of Eight Diesel Quality Parameters through ATR-FTIR Analysis.
Nespeca, Maurilio Gustavo; Hatanaka, Rafael Rodrigues; Flumignan, Danilo Luiz; de Oliveira, José Eduardo
2018-01-01
Quality assessment of diesel fuel is highly necessary for society, but the costs and time required by standard methods are very high. Therefore, this study aimed to develop an analytical method capable of simultaneously determining eight diesel quality parameters (density; flash point; total sulfur content; distillation temperatures at 10% (T10), 50% (T50), and 85% (T85) recovery; cetane index; and biodiesel content) through attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy and the multivariate regression method, partial least squares (PLS). For this purpose, the quality parameters of 409 samples were determined using standard methods, and their spectra were acquired in the range of 4000-650 cm⁻¹. The use of the multivariate filters, generalized least squares weighting (GLSW) and orthogonal signal correction (OSC), was evaluated to improve the signal-to-noise ratio of the models. Likewise, four variable selection approaches were tested: manual exclusion, forward interval PLS (FiPLS), backward interval PLS (BiPLS), and genetic algorithm (GA). The multivariate filters and variable selection algorithms generated better fitted and more accurate PLS models. According to the validation, the FTIR/PLS models presented accuracy comparable to the reference methods and, therefore, the proposed method can be applied in routine diesel monitoring to significantly reduce costs and analysis time.
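A minimal calibration sketch is shown below: synthetic spectra stand in for the ATR-FTIR measurements, and a plain PLS regression maps them to a single quality parameter; the multivariate filters (GLSW, OSC) and the interval and GA variable-selection steps are not included.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(12)

# Synthetic "spectra": 400 samples x 600 wavenumbers, where a few bands carry
# the signal of one quality parameter (e.g. biodiesel content, % v/v).
n_samples, n_wavenumbers = 400, 600
concentration = rng.uniform(0, 30, n_samples)
bands = np.zeros(n_wavenumbers)
bands[[120, 121, 122, 350, 351]] = [1.0, 0.8, 0.6, 0.5, 0.4]
spectra = concentration[:, None] * bands[None, :] + 0.2 * rng.standard_normal((n_samples, n_wavenumbers))

X_train, X_test, y_train, y_test = train_test_split(
    spectra, concentration, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=5)
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

rmsep = np.sqrt(np.mean((y_test - y_pred) ** 2))
print(f"RMSEP on the held-out set: {rmsep:.2f} (same units as the parameter)")
```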
Lyons-Amos, Mark; Padmadas, Sabu S; Durrant, Gabriele B
2014-08-11
To test the contraceptive confidence hypothesis in a modern context. The hypothesis is that women using effective or modern contraceptive methods have increased contraceptive confidence and hence a shorter interval between marriage and first birth than users of ineffective or traditional methods. We extend the hypothesis to incorporate the role of abortion, arguing that it acts as a substitute for contraception in the study context. Moldova, a country in South-East Europe. Moldova exhibits high use of traditional contraceptive methods and abortion compared with other European countries. Data are from a secondary analysis of the 2005 Moldovan Demographic and Health Survey, a nationally representative sample survey. 5377 unmarried women were selected. The outcome measure was the interval between marriage and first birth. This was modelled using a piecewise-constant hazard regression, with abortion and contraceptive method types as primary variables along with relevant sociodemographic controls. Women with high contraceptive confidence (modern method users) have a higher cumulative hazard of first birth 36 months following marriage (0.88 (0.87 to 0.89)) compared with women with low contraceptive confidence (traditional method users, cumulative hazard: 0.85 (0.84 to 0.85)). This is consistent with the contraceptive confidence hypothesis. There is a higher cumulative hazard of first birth among women with low (0.80 (0.79 to 0.80)) and moderate abortion propensities (0.76 (0.75 to 0.77)) than women with no abortion propensity (0.73 (0.72 to 0.74)) 24 months after marriage. Effective contraceptive use tends to increase contraceptive confidence and is associated with a shorter interval between marriage and first birth. Increased use of abortion also tends to increase contraceptive confidence and shorten birth duration, although this effect is non-linear: women with a very high use of abortion tend to have lengthy intervals between marriage and first birth. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
A genetic algorithm-based framework for wavelength selection on sample categorization.
Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F
2017-08-01
In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in the case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra were partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique was applied to each interval to assign samples to the proper classes. In the next step, selected intervals were refined through the genetic algorithm (GA) by identifying a limited number of wavelengths from the previously selected intervals aimed at maximizing classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed to categorize, and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information towards monitoring illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checking events, avoiding the need for later laboratory analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than the current methods relying on interval approaches, which tend to insert irrelevant wavelengths in the retained intervals. Copyright © 2016 John Wiley & Sons, Ltd.
Using structural equation modeling for network meta-analysis.
Tu, Yu-Kang; Wu, Yun-Chun
2017-07-14
Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to impose linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the unrestricted weighted least squares (UWLS) method can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed effect model, but the confidence intervals were wider. This is consistent with results from the traditional pairwise meta-analyses. Compared with the UWLS model with a common variance adjustment factor, the UWLS model with a unique variance adjustment factor has wider confidence intervals when the heterogeneity is larger in the pairwise comparison. The UWLS model with a unique variance adjustment factor reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis is still to be explored.
Wavelet analysis of the Laser Doppler signal to assess skin perfusion.
Bagno, Andrea; Martini, Romeo
2015-01-01
The hemodynamics of skin microcirculation can be clinically assessed by means of Laser Doppler Fluxmetry. Laser Doppler signals show periodic oscillations because of fluctuations of microvascular perfusion (flowmotion), which are sustained by contractions and relaxations of arteriolar walls that rhythmically change vessel diameter (vasomotion). The wavelet analysis applied to Laser Doppler signals displays six characteristic frequency intervals, from 0.005 to 2 Hz. Each interval is assigned to a specific structure of the cardiovascular system: heart, respiration, vascular myocytes, sympathetic terminations, and endothelial cells (nitric oxide-dependent and nitric oxide-independent). Therefore, mechanisms of skin perfusion can be investigated through wavelet analysis. In the present work, examples of methods and results of wavelet analysis applied to Laser Doppler signals are reported. Laser Doppler signals were acquired in two groups of patients to check possible changes in vascular activities, before and after occlusive reactive hyperaemia, and before and after revascularization.
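A minimal sketch of the band-wise wavelet analysis, assuming a synthetic Laser Doppler-like signal, a Morlet continuous wavelet transform from the pywt package, and indicative band limits (the abstract's six intervals include a split of the endothelial band that is not reproduced here); none of these settings come from the paper.

# Band-wise wavelet energy of a synthetic perfusion signal (illustrative settings).
import numpy as np
import pywt

fs = 20.0                               # Hz, assumed sampling rate
t = np.arange(0, 300, 1 / fs)           # 5 minutes of synthetic signal
signal = (np.sin(2 * np.pi * 1.1 * t)         # cardiac-like component
          + 0.5 * np.sin(2 * np.pi * 0.3 * t)  # respiratory-like component
          + 0.1 * np.random.default_rng(1).normal(size=t.size))

freq_targets = np.geomspace(0.005, 2.0, 80)              # cover roughly 0.005-2 Hz
scales = pywt.central_frequency("morl") * fs / freq_targets
coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

bands = {"endothelial": (0.005, 0.02), "neurogenic": (0.02, 0.06),
         "myogenic": (0.06, 0.15), "respiratory": (0.15, 0.4),
         "cardiac": (0.4, 2.0)}         # indicative limits, not the paper's exact ones
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    energy = np.mean(np.abs(coefs[mask]) ** 2) if mask.any() else 0.0
    print(f"{name:12s} mean wavelet energy: {energy:.3f}")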
Novel method for high-throughput phenotyping of sleep in mice.
Pack, Allan I; Galante, Raymond J; Maislin, Greg; Cater, Jacqueline; Metaxas, Dimitris; Lu, Shan; Zhang, Lin; Von Smith, Randy; Kay, Timothy; Lian, Jie; Svenson, Karen; Peters, Luanne L
2007-01-17
Assessment of sleep in mice currently requires initial implantation of chronic electrodes for assessment of electroencephalogram (EEG) and electromyogram (EMG) followed by time to recover from surgery. Hence, it is not ideal for high-throughput screening. To address this deficiency, a method of assessment of sleep and wakefulness in mice has been developed based on assessment of activity/inactivity either by digital video analysis or by breaking infrared beams in the mouse cage. It is based on the algorithm that any episode of continuous inactivity of ≥40 s is predicted to be sleep. The method gives excellent agreement in C57BL/6J male mice with simultaneous assessment of sleep by EEG/EMG recording. The average agreement over 8,640 10-s epochs in 24 h is 92% (n = 7 mice) with agreement in individual mice being 88-94%. Average EEG/EMG determined sleep per 2-h interval across the day was 59.4 min. The estimated mean difference (bias) per 2-h interval between inactivity-defined sleep and EEG/EMG-defined sleep was only 1.0 min (95% confidence interval for mean bias -0.06 to +2.6 min). The standard deviation of differences (precision) was 7.5 min per 2-h interval with 95% limits of agreement ranging from -13.7 to +15.7 min. Although bias significantly varied by time of day (P = 0.0007), the magnitude of time-of-day differences was not large (average bias during lights on and lights off was +5.0 and -3.0 min per 2-h interval, respectively). This method has applications in chemical mutagenesis and for studies of molecular changes in brain with sleep/wakefulness.
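A toy implementation of the inactivity rule stated above (runs of continuous inactivity lasting at least 40 s are scored as sleep); the 10-s epoch length matches the abstract, but the activity trace itself is randomly generated for illustration.

# Score sleep from an activity/inactivity time series using the >=40 s rule.
import numpy as np

epoch_s = 10                                     # 10-s epochs, as in the abstract
rng = np.random.default_rng(2)
active = rng.random(8640) < 0.4                  # True = activity detected in epoch

sleep = np.zeros(active.size, dtype=bool)
run_start = None
for i, a in enumerate(np.append(active, True)):  # sentinel closes the last run
    if not a and run_start is None:
        run_start = i
    elif a and run_start is not None:
        if (i - run_start) * epoch_s >= 40:      # inactivity run of >= 40 s
            sleep[run_start:i] = True
        run_start = None

print(f"estimated sleep: {sleep.sum() * epoch_s / 60:.1f} min per 24 h")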
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds for definition of the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller designs. With these methods, worst case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of the responses' cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system, and a non-collocated mass spring system, show the added information provided by this hybrid analysis.
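The hybrid reliability method itself is not reproduced here; as a hedged baseline illustration of the probabilistic (rather than interval) viewpoint, the sketch below propagates assumed Gaussian parameter uncertainty through a hypothetical second-order plant by plain Monte Carlo and summarizes the step-response overshoot distribution.

# Monte Carlo propagation of parameter uncertainty through a second-order plant.
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
n = 2000
wn = rng.normal(2.0, 0.1, n)         # natural frequency (rad/s), assumed Gaussian
zeta = rng.normal(0.3, 0.05, n)      # damping ratio, assumed Gaussian

overshoot = np.empty(n)
t = np.linspace(0, 15, 1500)
for i in range(n):
    sys = signal.TransferFunction([wn[i] ** 2], [1, 2 * zeta[i] * wn[i], wn[i] ** 2])
    _, y = signal.step(sys, T=t)
    overshoot[i] = y.max() - 1.0     # unit-step overshoot

print(f"mean overshoot {overshoot.mean():.3f}, "
      f"95th percentile {np.percentile(overshoot, 95):.3f}")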
Statistical regularities in the return intervals of volatility
NASA Astrophysics Data System (ADS)
Wang, F.; Weber, P.; Yamasaki, K.; Havlin, S.; Stanley, H. E.
2007-01-01
We discuss recent results concerning statistical regularities in the return intervals of volatility in financial markets. In particular, we show how the analysis of volatility return intervals, defined as the time between two volatilities larger than a given threshold, can help to get a better understanding of the behavior of financial time series. We find scaling in the distribution of return intervals for thresholds ranging over a factor of 25, from 0.6 to 15 standard deviations, and also for various time windows from one minute up to 390 min (an entire trading day). Moreover, these results are universal for different stocks, commodities, interest rates as well as currencies. We also analyze the memory in the return intervals which relates to the memory in the volatility and find two scaling regimes, ℓ<ℓ* with α1=0.64±0.02 and ℓ> ℓ* with α2=0.92±0.04; these exponent values are similar to results of Liu et al. for the volatility. As an application, we use the scaling and memory properties of the return intervals to suggest a possibly useful method for estimating risk.
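A short sketch of the return-interval calculation described above: the times between consecutive exceedances of a volatility threshold are collected, here for a heavy-tailed synthetic return series rather than real market data.

# Return intervals of volatility above a threshold (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
returns = rng.standard_t(df=4, size=20000) * 0.01   # heavy-tailed toy returns
volatility = np.abs(returns)

q = 2.0                                             # threshold in standard deviations
threshold = q * volatility.std()
exceed_times = np.flatnonzero(volatility > threshold)
intervals = np.diff(exceed_times)                   # return intervals (in time steps)

print(f"threshold q = {q}: {intervals.size} intervals, "
      f"mean interval {intervals.mean():.1f} steps")
# Scaling is usually examined via the distribution of intervals rescaled by their
# mean, across several thresholds q.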
Transient risk factors for acute traumatic hand injuries: a case‐crossover study in Hong Kong
Chow, C Y; Lee, H; Lau, J; Yu, I T S
2007-01-01
Objectives To identify the remediable transient risk factors of occupational hand injuries in Hong Kong in order to guide the development of prevention strategies. Methods The case‐crossover study design was adopted. Study subjects were workers with acute hand injuries presenting to the government Occupational Medicine Unit for compensation claims within 90 days from the date of injury. Detailed information on exposures to specific transient factors during the 60 minutes prior to the occurrence of the injury, during the same time interval on the day prior to the injury, as well as the usual exposure during the past work‐month was obtained through telephone interviews. Both matched‐pair interval approach and usual frequency approach were adopted to assess the associations between transient exposures in the workplace and the short‐term risk of sustaining a hand injury. Results A total of 196 injured workers were interviewed. The results of the matched‐pair interval analysis matched well with the results obtained using the usual frequency analysis. Seven significant transient risk factors were identified: using malfunctioning equipment/materials, using a different work method, performing an unusual work task, working overtime, feeling ill, being distracted and rushing, with odds ratios ranging from 10.5 to 26.0 in the matched‐pair interval analysis and relative risks ranging between 8.0 and 28.3 with the usual frequency analysis. Wearing gloves was found to have an insignificant protective effect on the occurrence of hand injury in both analyses. Conclusions Using the case‐crossover study design for acute occupational hand injuries, seven transient risk factors that were mostly modifiable were identified. It is suggested that workers and their employers should increase their awareness of these risk factors, and efforts should be made to avoid exposures to these factors by means of engineering and administrative controls supplemented by safety education and training. PMID:16973734
Liao, Xiang; Wang, Qing; Fu, Ji-hong; Tang, Jun
2015-09-01
This work was undertaken to establish a quantitative analysis model that can rapidly determine the content of linalool and linalyl acetate in Xinjiang lavender essential oil. A total of 165 lavender essential oil samples were measured by near infrared (NIR) absorption spectroscopy. After analyzing the NIR absorption peaks of all samples, the spectral interval of 7100~4500 cm(-1) was found to contain abundant chemical information with relatively low interference from random noise; thus, PLS models were constructed using this interval for further analysis. Eight abnormal samples were eliminated. Through a clustering method, the remaining 157 lavender essential oil samples were divided into a calibration set of 105 samples and a validation set of 52 samples. Gas chromatography-mass spectrometry (GC-MS) was used as the reference method to determine the content of linalool and linalyl acetate in lavender essential oil, and the data matrix was established by combining the GC-MS values of the two compounds with the original NIR data. To optimize the model, different pretreatment methods were used to preprocess the raw NIR spectra and their filtering effects were compared; for the quantitative models of linalool and linalyl acetate, orthogonal signal correction (OSC) gave root mean square errors of prediction (RMSEP) of 0.226 and 0.558, respectively, and was therefore the optimum pretreatment method. In addition, forward interval partial least squares (FiPLS) was used to exclude wavelength points that were unrelated to the determined components or showed nonlinear correlation; finally, 8 spectral intervals comprising 160 wavelength points were retained as the dataset. The data optimized by OSC-FiPLS were combined with partial least squares (PLS) to establish a rapid quantitative analysis model for the content of linalool and linalyl acetate in Xinjiang lavender essential oil; the number of latent variables was 8 for both components. The performance of the model was evaluated by the root mean square error of cross-validation (RMSECV) and the root mean square error of prediction (RMSEP). In the model, the RMSECV values for linalool and linalyl acetate were 0.170 and 0.416, respectively, and the RMSEP values were 0.188 and 0.364. The results indicated that, with the raw data pretreated by OSC and FiPLS, the NIR-PLS quantitative analysis model showed good robustness and high measurement precision and could quickly determine the content of linalool and linalyl acetate in lavender essential oil; the model also has favorable prediction ability. The study provides a new and effective method for rapid quantitative analysis of the major components of Xinjiang lavender essential oil.
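A hedged sketch of the final calibration step only (PLS with 8 latent variables and a 105/52 calibration/validation split); the spectra and reference values are random stand-ins, and the OSC and FiPLS preprocessing steps are omitted.

# PLS calibration on stand-in data for the selected NIR wavelength points.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
X = rng.normal(size=(157, 160))              # 157 samples x 160 selected NIR points
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=157)   # pseudo linalool content

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=52, random_state=0)
pls = PLSRegression(n_components=8)
pls.fit(X_cal, y_cal)
rmsep = mean_squared_error(y_val, pls.predict(X_val).ravel()) ** 0.5
print(f"RMSEP on the validation set: {rmsep:.3f}")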
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, because of three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, and there must either be a transformation procedure to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers, and two-stage transformations (modulus-exponential-normal) are applied to obtain Gaussian distributions. Fractional polynomials are employed to model functions for mean and standard deviations dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
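A simplified illustration of two ingredients of this approach, Tukey's fence for outlier removal followed by central 95% reference limits; the age-dependent modelling with fractional polynomials and the two-stage transformation are not reproduced, and the data are synthetic.

# Tukey's fence plus percentile reference limits on a skewed synthetic analyte.
import numpy as np

rng = np.random.default_rng(6)
values = np.exp(rng.normal(3.0, 0.4, 300))          # skewed analyte, arbitrary units
values = np.append(values, [500.0, 650.0])          # two gross outliers

q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
keep = (values >= q1 - 1.5 * iqr) & (values <= q3 + 1.5 * iqr)   # Tukey's fence

lower, upper = np.percentile(values[keep], [2.5, 97.5])
print(f"kept {keep.sum()} of {values.size} observations; "
      f"reference interval approx. {lower:.1f}-{upper:.1f}")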
ERIC Educational Resources Information Center
Jackson, Dan
2013-01-01
Statistical inference is problematic in the common situation in meta-analysis where the random effects model is fitted to just a handful of studies. In particular, the asymptotic theory of maximum likelihood provides a poor approximation, and Bayesian methods are sensitive to the prior specification. Hence, less efficient, but easily computed and…
Perinetti, Giuseppe; Contardo, Luca; Castaldo, Attilio; McNamara, James A; Franchi, Lorenzo
2016-07-01
To evaluate the capability of both cervical vertebral maturation (CVM) stages 3 and 4 (CS3-4 interval) and the peak in standing height to identify the mandibular growth spurt through diagnostic reliability analysis. A previous longitudinal data set derived from 24 untreated growing subjects (15 females and nine males), detailed elsewhere, was reanalyzed. Mandibular growth was defined as annual increments in Condylion (Co)-Gnathion (Gn) (total mandibular length) and Co-Gonion Intersection (Goi) (ramus height) and their arithmetic mean (mean mandibular growth [mMG]). Subsequently, individual annual increments in standing height, Co-Gn, Co-Goi, and mMG were arranged according to annual age intervals, with the first and last intervals defined as 7-8 years and 15-16 years, respectively. An analysis was performed to establish the diagnostic reliability of the CS3-4 interval or the peak in standing height in the identification of the maximum individual increments of each Co-Gn, Co-Goi, and mMG measurement at each annual age interval. CS3-4 and the standing height peak show similar but variable accuracy across annual age intervals, registering values between 0.61 (standing height peak, Co-Gn) and 0.95 (standing height peak and CS3-4, mMG). Generally, satisfactory diagnostic reliability was seen when the mandibular growth spurt was identified on the basis of the Co-Goi and mMG increments. Both the CVM interval CS3-4 and the peak in standing height may be used in routine clinical practice to enhance the efficiency of treatments requiring identification of the mandibular growth spurt.
An appraisal of statistical procedures used in derivation of reference intervals.
Ichihara, Kiyoshi; Boyd, James C
2010-11-01
When conducting studies to derive reference intervals (RIs), various statistical procedures are commonly applied at each step, from the planning stages to final computation of RIs. Determination of the necessary sample size is an important consideration, and evaluation of at least 400 individuals in each subgroup has been recommended to establish reliable common RIs in multicenter studies. Multiple regression analysis allows identification of the most important factors contributing to variation in test results, while accounting for possible confounding relationships among these factors. Of the various approaches proposed for judging the necessity of partitioning reference values, nested analysis of variance (ANOVA) is the likely method of choice owing to its ability to handle multiple groups and to adjust for multiple factors. The Box-Cox power transformation often has been used to transform data to a Gaussian distribution for parametric computation of RIs. However, this transformation occasionally fails. Therefore, the non-parametric method, based on determination of the 2.5th and 97.5th percentiles following sorting of the data, has been recommended for general use. The performance of the Box-Cox transformation can be improved by introducing an additional parameter representing the origin of transformation. In simulations, the confidence intervals (CIs) of reference limits (RLs) calculated by the parametric method were narrower than those calculated by the non-parametric approach. However, the margin of difference was rather small owing to additional variability in parametrically-determined RLs introduced by estimation of parameters for the Box-Cox transformation. The parametric calculation method may have an advantage over the non-parametric method in allowing identification and exclusion of extreme values during RI computation.
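A sketch comparing a parametric reference interval after a Box-Cox transformation with the non-parametric 2.5th/97.5th percentiles; the data are synthetic, and the shifted (two-parameter) Box-Cox variant discussed above is not implemented.

# Parametric (Box-Cox) versus non-parametric reference interval on synthetic data.
import numpy as np
from scipy import stats, special

rng = np.random.default_rng(7)
x = np.exp(rng.normal(0.0, 0.5, 400))            # right-skewed reference sample

xt, lam = stats.boxcox(x)                        # one-parameter Box-Cox
mu, sd = xt.mean(), xt.std(ddof=1)
param_ri = special.inv_boxcox(np.array([mu - 1.96 * sd, mu + 1.96 * sd]), lam)
nonparam_ri = np.percentile(x, [2.5, 97.5])

print("parametric (Box-Cox) RI:", np.round(param_ri, 3))
print("non-parametric RI:      ", np.round(nonparam_ri, 3))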
Stewart, Sarah; Pearson, Janet; Rome, Keith; Dalbeth, Nicola; Vandal, Alain C
2018-01-01
Statistical techniques currently used in musculoskeletal research often inefficiently account for paired-limb measurements or the relationship between measurements taken from multiple regions within limbs. This study compared three commonly used analysis methods with a mixed-models approach that appropriately accounted for the association between limbs, regions, and trials and that utilised all information available from repeated trials. Four analysis methods were applied to an existing data set containing plantar pressure data, which was collected for seven masked regions on right and left feet, over three trials, across three participant groups. Methods 1-3 averaged data over trials and analysed right foot data (Method 1), data from a randomly selected foot (Method 2), and averaged right and left foot data (Method 3). Method 4 used all available data in a mixed-effects regression that accounted for repeated measures taken for each foot, foot region and trial. Confidence interval widths for the mean differences between groups for each foot region were used as a criterion for comparison of statistical efficiency. Mean differences in pressure between groups were similar across methods for each foot region, while the confidence interval widths were consistently smaller for Method 4. Method 4 also revealed significant between-group differences that were not detected by Methods 1-3. A mixed-effects linear model approach generates improved efficiency and power by producing more precise estimates compared to alternative approaches that discard information in the process of accounting for paired-limb measurements. This approach is recommended for generating more clinically sound and statistically efficient research outputs. Copyright © 2017 Elsevier B.V. All rights reserved.
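A hedged sketch of the mixed-model idea behind Method 4: a random intercept per participant accounts for repeated records from the same person. The data frame, variable names, and random-effects structure are hypothetical and much simpler than the model described in the study.

# Random-intercept mixed model on a hypothetical repeated-measures data set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n_subj, n_rep = 30, 6                      # 30 participants, 6 repeated records each
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_rep),
    "group": np.repeat(rng.integers(0, 2, n_subj), n_rep),
    "pressure": rng.normal(200, 25, n_subj * n_rep),
})
df["pressure"] += 10 * df["group"] + np.repeat(rng.normal(0, 15, n_subj), n_rep)

result = smf.mixedlm("pressure ~ group", df, groups=df["subject"]).fit()
print(result.summary())                    # fixed-effect estimate with its CI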
NASA Astrophysics Data System (ADS)
Winkelstern, I. Z.; Surge, D. M.
2010-12-01
Pliocene sea surface temperature (SST) data from the US Atlantic coastal plain is currently insufficient for a detailed understanding of the climatic shifts that occurred during the period. Previous studies, based on oxygen isotope proxy data from marine shells and bryozoan zooid size analysis, have provided constraints on possible annual-scale SST ranges for the region. However, more data are required to fully understand the forcing mechanisms affecting regional Pliocene climate and evaluate modeled temperature projections. Bivalve sclerochronology (growth increment analysis) is an alternative proxy for SST that can provide annually resolved multi-year time series. The method has been validated in previous studies using modern Arctica, Chione, and Mercenaria. We analyzed Pliocene Mercenaria carolinensis shells using sclerochronologic methods and tested the hypothesis that higher SST ranges are reflected in shells selected from the warmest climate interval (3.5-3.3 Ma, upper Yorktown Formation, Virginia) and lower SST ranges are observable in shells selected from the subsequent cooling interval (2.4-1.8 Ma, Chowan River Formation, North Carolina). These results further establish the validity of growth increment analysis using fossil shells and provide the first large dataset (from the region) of reconstructed annual SST from floating time series during these intervals. These data will enhance our knowledge about a warm climate state that has been identified in the 2007 IPCC report as an analogue for expected global warming. Future work will expand this study to include sampling in Florida to gain detailed information about Pliocene SST along a latitudinal gradient.
A refined method for multivariate meta-analysis and meta-regression.
Jackson, Daniel; Riley, Richard D
2014-02-20
Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
Deep Learning for Classification of Colorectal Polyps on Whole-slide Images
Korbar, Bruno; Olofson, Andrea M.; Miraflor, Allen P.; Nicka, Catherine M.; Suriawinata, Matthew A.; Torresani, Lorenzo; Suriawinata, Arief A.; Hassanpour, Saeed
2017-01-01
Context: Histopathological characterization of colorectal polyps is critical for determining the risk of colorectal cancer and future rates of surveillance for patients. However, this characterization is a challenging task and suffers from significant inter- and intra-observer variability. Aims: We built an automatic image analysis method that can accurately classify different types of colorectal polyps on whole-slide images to help pathologists with this characterization and diagnosis. Setting and Design: Our method is based on deep-learning techniques, which rely on numerous levels of abstraction for data representation and have shown state-of-the-art results for various image analysis tasks. Subjects and Methods: Our method covers five common types of polyps (i.e., hyperplastic, sessile serrated, traditional serrated, tubular, and tubulovillous/villous) that are included in the US Multisociety Task Force guidelines for colorectal cancer risk assessment and surveillance. We developed multiple deep-learning approaches by leveraging a dataset of 2074 crop images, which were annotated by multiple domain expert pathologists as reference standards. Statistical Analysis: We evaluated our method on an independent test set of 239 whole-slide images and measured standard machine-learning evaluation metrics of accuracy, precision, recall, and F1 score and their 95% confidence intervals. Results: Our evaluation shows that our method with residual network architecture achieves the best performance for classification of colorectal polyps on whole-slide images (overall accuracy: 93.0%, 95% confidence interval: 89.0%–95.9%). Conclusions: Our method can reduce the cognitive burden on pathologists and improve their efficacy in histopathological characterization of colorectal polyps and in subsequent risk assessment and follow-up recommendations. PMID:28828201
Intervals for posttest probabilities: a comparison of 5 methods.
Mossman, D; Berger, J O
2001-01-01
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
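A sketch of the "simple simulation" flavour of the objective Bayesian approach described above: sensitivity, specificity, and pretest probability are drawn from Beta posteriors (Jeffreys priors assumed here) and percentiles of the resulting posttest probability are read off. The counts are invented for illustration.

# Simulation-based interval for the positive predictive value (posttest probability).
import numpy as np

rng = np.random.default_rng(9)
n = 100_000

tp, fn = 45, 5          # diseased subjects:     sensitivity data (hypothetical)
tn, fp = 80, 20         # non-diseased subjects: specificity data (hypothetical)
d, nd = 30, 170         # pretest sample:        diseased vs non-diseased (hypothetical)

sens = rng.beta(tp + 0.5, fn + 0.5, n)
spec = rng.beta(tn + 0.5, fp + 0.5, n)
prev = rng.beta(d + 0.5, nd + 0.5, n)

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
lo, hi = np.percentile(ppv, [2.5, 97.5])
print(f"posttest probability: median {np.median(ppv):.3f}, "
      f"95% interval {lo:.3f}-{hi:.3f}")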
Perin, Jamie; Walker, Neff
2015-01-01
Background Recent steep declines in child mortality have been attributed in part to increased use of contraceptives and the resulting change in fertility behaviour, including an increase in the time between births. Previous observational studies have documented strong associations between short birth spacing and an increase in the risk of neonatal, infant, and under-five mortality, compared to births with longer preceding birth intervals. In this analysis, we compare two methods to estimate the association between short birth intervals and mortality risk to better inform modelling efforts linking family planning and mortality in children. Objectives Our goal was to estimate the mortality risk for neonates, infants, and young children by preceding birth space using household survey data, controlling for mother-level factors and to compare the results to those from previous analyses with survey data. Design We assessed the potential for confounding when estimating the relative mortality risk by preceding birth interval and estimated mortality risk by birth interval in four categories: less than 18 months, 18–23 months, 24–35 months, and 36 months or longer. We estimated the relative risks among women who were 35 and older at the time of the survey with two methods: in a Cox proportional hazards regression adjusting for potential confounders and also by stratifying Cox regression by mother, to control for all factors that remain constant over a woman's childbearing years. We estimated the overall effects for birth spacing in a meta-analysis with random survey effects. Results We identified several factors known for their associations with neonatal, infant, and child mortality that are also associated with preceding birth interval. When estimating the effect of birth spacing on mortality, we found that regression adjustment for these factors does not substantially change the risk ratio for short birth intervals compared to an unadjusted mortality ratio. For birth intervals less than 18 months, standard regression adjustment for confounding factors estimated a risk ratio for neonatal mortality of 2.28 (95% confidence interval: 2.18–2.37). This same effect estimated within mother is 1.57 (95% confidence interval: 1.52–1.63), a decline of almost one-third in the effect on neonatal mortality. Conclusions Neonatal, infant, and child mortality are strongly and significantly related to preceding birth interval, where births within a short interval of time after the previous birth have increased mortality. Previous analyses have demonstrated this relationship on average across all births; however, women who have short spaces between births are different from women with long spaces. Among women 35 years and older where a comparison of birth spaces within mother is possible, we find a much reduced although still significant effect of short birth spaces on child mortality. PMID:26562139
Confidence intervals for expected moments algorithm flood quantile estimates
Cohn, Timothy A.; Lane, William L.; Stedinger, Jery R.
2001-01-01
Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient “weighting” procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed‐form method has been available for quantifying the uncertainty of EMA‐based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood‐quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25‐ to 100‐year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.
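As a minimal, hedged companion to the flood-frequency context above, the sketch below fits the Log Pearson Type 3 distribution to a synthetic annual peak-flow record by the method of moments on log flows (the basic Bulletin 17B framework without historical-data weighting, EMA, or the confidence-interval expressions derived in the paper).

# Method-of-moments Log Pearson Type 3 fit and flood quantiles (synthetic record).
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
peaks = np.exp(rng.normal(6.0, 0.6, 60))       # 60 years of synthetic peak flows

logq = np.log10(peaks)
m, s, g = logq.mean(), logq.std(ddof=1), stats.skew(logq, bias=False)

for T in (10, 100):
    p = 1 - 1 / T                              # non-exceedance probability
    k = stats.pearson3.ppf(p, g)               # standardized frequency factor
    q_T = 10 ** (m + k * s)
    print(f"{T}-year flood estimate: {q_T:,.0f}")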
Confidence intervals in Flow Forecasting by using artificial neural networks
NASA Astrophysics Data System (ADS)
Panagoulia, Dionysia; Tsekouras, George
2014-05-01
One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to the classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, multi-linear regression adapted to ANN, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop this for flow forecasting of the next day. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. For the application of the confidence interval method, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic training back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted regarding the crucial parameter values, such as the number of neurons, the kind of activation functions, the initial values and time parameters of learning rate and momentum term etc. Input variables are historical data of previous days, such as flows, nonlinearly weather related temperatures and nonlinearly weather related rainfalls, based on correlation analysis between the flow under prediction and each implicit input variable of different ANN structures [3]. The performance of each ANN structure is evaluated by the voting analysis based on eleven criteria, which are the root mean square error (RMSE), the correlation index (R), the mean absolute percentage error (MAPE), the mean percentage error (MPE), the mean error (ME), the percentage volume in errors (VE), the percentage error in peak (MF), the normalized mean bias error (NMBE), the normalized root mean square error (NRMSE), the Nash-Sutcliffe model efficiency coefficient (E) and the modified Nash-Sutcliffe model efficiency coefficient (E1). The next day flow for the test set is calculated using the best ANN structure's model. Consequently, the confidence intervals of various confidence levels for training, evaluation and test sets are compared in order to explore the generalisation dynamics of confidence intervals from training and evaluation sets. [1] H.S. Hippert, C.E. Pedreira, R.C. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. on Power Systems, vol. 16, no. 1, 2001, pp. 44-55. [2] G. J. Tsekouras, N.E. Mastorakis, F.D. Kanellos, V.T. Kontargyri, C.D. Tsirekis, I.S. Karanasiou, Ch.N. Elias, A.D. Salis, P.A. Kontaxis, A.A. 
Gialketsi: "Short term load forecasting in Greek interconnected power system using ANN: Confidence Interval using a novel re-sampling technique with corrective Factor", WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, (CSECS '10), Vouliagmeni, Athens, Greece, December 29-31, 2010. [3] D. Panagoulia, I. Trichakis, G. J. Tsekouras: "Flow Forecasting via Artificial Neural Networks - A Study for Input Variables conditioned on atmospheric circulation", European Geosciences Union, General Assembly 2012 (NH1.1 / AS1.16 - Extreme meteorological and hydrological events induced by severe weather and climate change), Vienna, Austria, 22-27 April 2012.
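A sketch of the re-sampling idea described in the abstract above: the prediction errors of the trained model are sorted, their empirical distribution is taken, and a symmetric-in-probability central interval is kept. The "model" and flow data here are trivial stand-ins, not an actual trained ANN.

# Empirical confidence interval for a forecast from sorted prediction errors.
import numpy as np

rng = np.random.default_rng(11)
observed = rng.gamma(4.0, 10.0, 500)                    # e.g. daily flows (synthetic)
predicted = observed + rng.normal(0, 6.0, 500)          # errors of an assumed model

errors = np.sort(observed - predicted)
alpha = 0.05                                            # 95% confidence level
lo, hi = np.quantile(errors, [alpha / 2, 1 - alpha / 2])

new_prediction = 55.0                                   # hypothetical next-day forecast
print(f"95% interval for the forecast: "
      f"[{new_prediction + lo:.1f}, {new_prediction + hi:.1f}]")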
NASA Astrophysics Data System (ADS)
Mendes, Odim; Oliveira Domingues, Margarete; Echer, Ezequiel; Hajra, Rajkumar; Everton Menconi, Varlei
2017-08-01
Considering magnetic reconnection and the viscous interaction as the fundamental mechanisms for transferring particles and energy into the magnetosphere, we study the dynamical characteristics of the auroral electrojet (AE) index during high-intensity, long-duration continuous auroral activity (HILDCAA) events, using a long-term geomagnetic database (1975-2012), and other distinct interplanetary conditions (geomagnetically quiet intervals, co-rotating interaction regions (CIRs)/high-speed streams (HSSs) not followed by HILDCAAs, and events of AE comprised in global intense geomagnetic disturbances). It is worth noting that we also study active but non-HILDCAA intervals. Examining the geomagnetic AE index, we apply a dynamics analysis composed of the phase space, the recurrence plot (RP), and the recurrence quantification analysis (RQA) methods. As a result, the quantification finds two distinct clusterings of the dynamical behaviours occurring in the interplanetary medium: one regarding a geomagnetically quiet condition regime and the other regarding an interplanetary activity regime. Furthermore, the HILDCAAs seem to be unique events regarding their visible, intense manifestations of interplanetary Alfvénic waves; however, they are similar to the other kinds of conditions regarding their dynamical signature (based on RQA), because they are involved in the same complex mechanism of generating geomagnetic disturbances. Also, by characterizing the proper conditions of transition from quiescent conditions to weaker geomagnetic disturbances inside the magnetosphere and ionosphere system, the RQA method clearly indicates the two fundamental dynamics (geomagnetically quiet intervals and HILDCAA events) to be evaluated with magneto-hydrodynamics simulations to understand better the critical processes related to energy and particle transfer into the magnetosphere-ionosphere system. Finally, with this work, we have also reinforced the potential applicability of the RQA method for characterizing nonlinear geomagnetic processes related to magnetic reconnection and the viscous interaction affecting the magnetosphere.
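A toy recurrence plot and recurrence rate for a scalar index standing in for AE, using an assumed time-delay embedding and distance threshold; the published RQA uses additional measures (determinism, laminarity, etc.) that are not computed here.

# Recurrence matrix and recurrence rate of a synthetic index time series.
import numpy as np

rng = np.random.default_rng(12)
x = np.sin(np.linspace(0, 40, 800)) + 0.2 * rng.normal(size=800)   # synthetic index

m, tau = 3, 5                                # embedding dimension and delay (assumed)
N = x.size - (m - 1) * tau
emb = np.column_stack([x[i * tau: i * tau + N] for i in range(m)])

dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
eps = 0.2 * dist.std()                       # threshold as a fraction of the spread
R = (dist <= eps).astype(int)                # recurrence matrix

print(f"recurrence rate at eps = {eps:.3f}: {R.sum() / R.size:.3f}")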
NASA Astrophysics Data System (ADS)
Rouillon, M.; Taylor, M. P.; Dong, C.
2016-12-01
This research assesses the advantages of integrating field portable X-ray fluorescence (pXRF) technology for reducing the risk and increasing the confidence of decision making in metal-contaminated site assessments. Metal-contaminated sites are often highly heterogeneous and require a high sampling density to accurately characterize the distribution and concentration of contaminants. The current regulatory assessment approaches rely on a small number of samples processed using standard wet-chemistry methods. In New South Wales (NSW), Australia, the current notification trigger for characterizing metal-contaminated sites requires the upper 95% confidence interval of the site mean to equal or exceed the relevant guidelines. The method's low minimum sampling requirements can misclassify sites due to the heterogeneous nature of soil contamination, leading to inaccurate decision making. To address this issue, we propose integrating in-field pXRF analysis with the established sampling method to overcome sampling limitations. This approach increases the minimum sampling resolution and reduces the 95% CI of the site mean. In-field pXRF analysis at contamination hotspots enhances sample resolution efficiently and without the need to return to the site. In this study, the current and proposed pXRF site assessment methods are compared at five heterogeneous metal-contaminated sites by analysing the spatial distribution of contaminants, the 95% confidence intervals of site means, and the sampling and analysis uncertainty associated with each method. Finally, an analysis of the costs associated with both the current and proposed methods is presented to demonstrate the advantages of incorporating pXRF into metal-contaminated site assessments. The data show that pXRF-integrated site assessments allow for faster, cost-efficient characterisation of metal-contaminated sites with greater confidence for decision making.
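A quick numerical illustration of why additional (pXRF) measurements tighten the decision statistic: the 95% confidence interval of the site mean narrows roughly with the square root of the number of samples. The concentrations are synthetic, not from the study sites.

# Confidence interval of a site mean as a function of sample size (synthetic soil Pb).
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
site = rng.lognormal(mean=5.5, sigma=0.8, size=500)    # heterogeneous Pb, mg/kg

for n in (10, 50, 200):
    sample = rng.choice(site, n, replace=False)
    se = sample.std(ddof=1) / np.sqrt(n)
    half = stats.t.ppf(0.975, n - 1) * se
    print(f"n = {n:3d}: mean {sample.mean():6.0f} mg/kg, "
          f"95% CI half-width {half:5.0f} mg/kg")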
Korany, Mohamed A; Abdine, Heba H; Ragab, Marwa A A; Aboras, Sara I
2015-05-15
This paper discusses a general method for the use of orthogonal polynomials for unequal intervals (OPUI) to eliminate interferences in two-component spectrophotometric analysis. In this paper, a new approach was developed by using the first-derivative (D1) curve, instead of the absorbance curve, for convolution with the OPUI method for the determination of metronidazole (MTR) and nystatin (NYS) in their mixture. After applying derivative treatment to the absorption data, many maxima and minima points appeared, giving a characteristic shape for each drug and allowing the selection of a different number of points for the OPUI method for each drug. This allows the specific and selective determination of each drug in the presence of the other and in the presence of any matrix interference. The method is particularly useful when the two absorption spectra have considerable overlap. The results obtained are encouraging and suggest that the method can be widely applied to similar problems. Copyright © 2015 Elsevier B.V. All rights reserved.
Inverse analysis and regularisation in conditional source-term estimation modelling
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.
2014-05-01
Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals are not dependent upon a previous solution and better predict characteristics for higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
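A generic zeroth-order Tikhonov inversion of a discretised Fredholm equation of the first kind, offered only as a stand-in for the CSE inversion discussed above; the kernel, the "true" solution, the noise level, and the regularisation parameter are all invented.

# Zeroth-order Tikhonov regularisation of an ill-posed linear inverse problem A x = b.
import numpy as np

n = 100
s = np.linspace(0, 1, n)
A = np.exp(-(s[:, None] - s[None, :]) ** 2 / 0.01)      # smoothing kernel (ill-posed)
x_true = np.sin(2 * np.pi * s) + 1.0
b = A @ x_true + 1e-3 * np.random.default_rng(14).normal(size=n)

lam = 1e-2                                              # regularisation parameter
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
x_naive, *_ = np.linalg.lstsq(A, b, rcond=None)         # unregularised, noise-amplified
# first-order (spatial) Tikhonov would replace np.eye(n) with L.T @ L, where L is a
# finite-difference matrix.

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative error, regularised:", round(err(x_reg), 3))
print("relative error, naive:      ", round(err(x_naive), 3))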
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
NASA Astrophysics Data System (ADS)
Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe
2017-04-01
Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We "sampled" the fluxes using a fixed interval (1-32 days) or a rule-based (decision tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to be used as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimation increased along with the fixed interval length. A 4- and 8-day fixed sampling interval was required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval rendered the same accuracy at Fort Collins and Pullman. Both the HI and the LI rule-based methods provided the same accuracy as that of the fixed interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and the fixed interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha-1 yr-1 in estimating cumulative N2O flux. These results suggest that using simulation models along with decision trees can reduce the cost and improve the accuracy of the estimations of cumulative N2O fluxes using the discrete chamber-based method.
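An illustration of the fixed-interval estimate only: a synthetic daily N2O flux series is subsampled every k days, integrated by the trapezoid rule, and compared with the cumulative flux from all days. Flux magnitudes and the emission pulse are arbitrary; the rule-based (decision tree) strategy is not reproduced.

# Bias of fixed-interval sampling for cumulative N2O flux (synthetic daily series).
import numpy as np

rng = np.random.default_rng(15)
days = np.arange(365)
flux = 5 + 40 * np.exp(-((days - 150) / 12.0) ** 2) + rng.gamma(1.0, 2.0, 365)

true_total = np.trapz(flux, days)
for k in (4, 8, 16, 32):
    idx = days[::k]
    est = np.trapz(flux[idx], idx)
    bias = 100 * (est - true_total) / true_total
    print(f"{k:2d}-day interval: estimate {est:7.0f}, bias {bias:+5.1f}%")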
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong
2010-01-01
This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
Fan, Yong; Du, Jin Peng; Liu, Ji Jun; Zhang, Jia Nan; Qiao, Huan Huan; Liu, Shi Chang; Hao, Ding Jun
2018-06-01
A miniature spine-mounted robot has recently been introduced to further improve the accuracy of pedicle screw placement in spine surgery. However, the differences in accuracy between the robotic-assisted (RA) technique and the free-hand, fluoroscopy-guided (FH) method for pedicle screw placement are controversial. A meta-analysis was conducted to focus on this problem. Randomized controlled trials (RCTs) and cohort studies involving RA and FH and published before January 2017 were searched for using the Cochrane Library, Ovid, Web of Science, PubMed, and EMBASE databases. A total of 55 papers were selected. After the full-text assessment, 45 clinical trials were excluded. The final meta-analysis included 10 articles. The accuracy of pedicle screw placement in the RA group was significantly greater than that in the FH group (odds ratio for "perfect accuracy": 95% confidence interval 1.38-2.07, P < .01; odds ratio for "clinically acceptable": 95% confidence interval 1.17-2.08, P < .01). There are significant differences in accuracy between RA surgery and FH surgery. It was demonstrated that the RA technique is superior to the conventional method in terms of the accuracy of pedicle screw placement.
Modified Confidence Intervals for the Mean of an Autoregressive Process.
1985-08-01
There are several standard methods of setting confidence intervals in simulations, including the regenerative method, batch means, and time series methods. We will focus on improved confidence intervals for the mean of an autoregressive process, and as such our …
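For context, a minimal sketch of one of the standard methods named in this report, batch means, applied to a simulated AR(1) process; the autoregressive coefficient, run length, and batch count are arbitrary choices, and the report's corrections to the interval are not implemented.

# Batch-means confidence interval for the mean of a simulated AR(1) process.
import numpy as np
from scipy import stats

rng = np.random.default_rng(16)
phi, n = 0.7, 20_000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):                        # AR(1): x_t = phi * x_{t-1} + e_t
    x[t] = phi * x[t - 1] + rng.normal()

n_batches = 20
batch_means = x.reshape(n_batches, -1).mean(axis=1)
m, se = batch_means.mean(), batch_means.std(ddof=1) / np.sqrt(n_batches)
half = stats.t.ppf(0.975, n_batches - 1) * se
print(f"mean estimate {m:+.4f}, 95% CI [{m - half:+.4f}, {m + half:+.4f}]")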
NASA Astrophysics Data System (ADS)
Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian
2016-04-01
Many modern approaches of radiometric dating or geochemical fingerprinting rely on sampling sedimentary deposits. A key assumption of most concepts is that the extracted grain-size fraction of the sampled sediment adequately represents the actual process to be dated or the source area to be fingerprinted. However, these assumptions are not always well constrained. Rather, they have to align with arbitrary, method-determined size intervals, such as "coarse grain" or "fine grain", which are sometimes even defined differently. Such arbitrary intervals violate principal process-based concepts of sediment transport and can thus introduce significant bias to the analysis outcome (i.e., a deviation of the measured from the true value). We present a flexible numerical framework (numOlum) for the statistical programming language R that allows quantifying the bias due to any given analysis size interval for different types of sediment deposits. This framework is applied to synthetic samples from the realms of luminescence dating and geochemical fingerprinting, i.e. a virtual reworked loess section. We show independent validation data from artificially dosed and subsequently mixed grain-size proportions, and we present a statistical approach (end-member modelling analysis, EMMA) that allows accounting for the effect of measuring the compound dosimetric history or geochemical composition of a sample. EMMA separates polymodal grain-size distributions into the underlying transport process-related distributions and their contribution to each sample. These underlying distributions can then be used to adjust grain-size preparation intervals to minimise the incorporation of "undesired" grain-size fractions.
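The numOlum framework and EMMA are implemented in R and are not reproduced here; as a loosely related, hedged stand-in for the unmixing idea, the sketch below uses non-negative matrix factorisation to separate synthetic polymodal grain-size distributions into a few underlying distributions and their per-sample contributions.

# NMF as a simplified stand-in for end-member unmixing of grain-size distributions.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(17)
size_classes = np.linspace(0, 10, 60)                   # e.g. a log grain-size scale

def gaussian(mu, sig):
    g = np.exp(-((size_classes - mu) / sig) ** 2)
    return g / g.sum()

end_members = np.vstack([gaussian(2.5, 0.6), gaussian(5.0, 0.9), gaussian(7.5, 0.5)])
mix = rng.dirichlet(np.ones(3), size=120)               # random mixing proportions
X = mix @ end_members + 1e-4 * rng.random((120, 60))    # 120 "measured" samples

model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(X)                              # contributions per sample
H = model.components_                                   # recovered end members
print("reconstruction error:", round(model.reconstruction_err_, 4))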
NASA Astrophysics Data System (ADS)
Wang, Meng; Zhang, Huaiqiang; Zhang, Kan
2017-10-01
This work addresses weapons portfolio planning, in which the short-term equipment usage demand and the long-term development demand must be planned for and considered together, as well as the practical problem that the definition of equipment capacity demand is fuzzy. The expression of demand is assumed to be an interval number or a discrete number. Using the epoch-era analysis method, a long planning cycle is broken into several short planning cycles with different demand values. A multi-stage stochastic programming model is built to maximize the demand satisfied over the long planning cycle under constraints on budget, equipment development time, and the demand of each short planning cycle. A scenario tree is used to discretize the interval values of the demand, and a genetic algorithm is designed to solve the problem. Finally, a case study demonstrates the feasibility and effectiveness of the proposed model.
Confidence intervals for correlations when data are not normal.
Bishara, Anthony J; Hittner, James B
2017-02-01
With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval-for example, leading to a 95 % confidence interval that had actual coverage as low as 68 %. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
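A sketch of two of the compared intervals: the Fisher z' interval on raw, deliberately skewed data, and the same interval after a rank-based inverse normal (RIN) transformation. The data and the particular RIN variant (Blom-like, rank shifted by 0.5) are assumptions, and the bootstrap methods from the study are not shown.

# Fisher z' confidence interval before and after a RIN transformation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(18)
n = 100
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(size=n)
x, y = np.exp(x), np.exp(y)                     # induce strong skew/kurtosis

def fisher_ci(a, b):
    r = np.corrcoef(a, b)[0, 1]
    z = np.arctanh(r)
    half = 1.96 / np.sqrt(len(a) - 3)
    return r, np.tanh(z - half), np.tanh(z + half)

def rin(v):                                     # rank-based inverse normal transform
    ranks = stats.rankdata(v)
    return stats.norm.ppf((ranks - 0.5) / len(v))

print("Fisher z' on raw data: r=%.3f, CI (%.3f, %.3f)" % fisher_ci(x, y))
print("Fisher z' after RIN:   r=%.3f, CI (%.3f, %.3f)" % fisher_ci(rin(x), rin(y)))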
A Solution Space for a System of Null-State Partial Differential Equations: Part 2
NASA Astrophysics Data System (ADS)
Flores, Steven M.; Kleban, Peter
2015-01-01
This article is the second of four that completely and rigorously characterize a solution space for a homogeneous system of 2N + 3 linear partial differential equations in 2N variables that arises in conformal field theory (CFT) and multiple Schramm-Löwner evolution (SLE). The system comprises 2N null-state equations and three conformal Ward identities which govern CFT correlation functions of 2N one-leg boundary operators. In the first article (Flores and Kleban, Commun Math Phys, arXiv:1212.2301, 2012), we use methods of analysis and linear algebra to prove that the dimension of this solution space is at most C_N, the Nth Catalan number. The analysis of that article is complete except for the proof of a lemma that it invokes. The purpose of this article is to provide that proof. The lemma states that if every interval among (x2, x3), (x3, x4), …, (x2N-1, x2N) is a two-leg interval of F (as defined in Flores and Kleban, Commun Math Phys, arXiv:1212.2301, 2012), then F vanishes. Proving this lemma by contradiction, we show that the existence of such a nonzero function implies the existence of a non-vanishing CFT two-point function involving primary operators with different conformal weights, an impossibility. This proof (which is rigorous in spite of our occasional reference to CFT) involves two different types of estimates, those that give the asymptotic behavior of F as the length of one interval vanishes, and those that give this behavior as the lengths of two intervals vanish simultaneously. We derive these estimates by using Green functions to rewrite certain null-state PDEs as integral equations, combining other null-state PDEs to obtain Schauder interior estimates, and then repeatedly integrating the integral equations with these estimates until we obtain optimal bounds. Estimates in which two interval lengths vanish simultaneously divide into two cases: two adjacent intervals and two non-adjacent intervals. The analysis of the latter case is similar to that for one vanishing interval length. In contrast, the analysis of the former case is more complicated, involving a Green function that contains the Jacobi heat kernel as its essential ingredient.
A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings
Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun
2017-01-01
The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
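A minimal permutation entropy calculation on vibration-like signals, as a small piece of the feature-extraction stage sketched above; the multi-scale version used in the paper coarse-grains the signal first, and the embedding settings below are typical choices rather than the paper's.

# Normalised permutation entropy of a clean versus noisy vibration-like signal.
import numpy as np
from math import factorial
from itertools import permutations

def permutation_entropy(x, m=3, tau=1):
    """Normalised permutation entropy of order m with delay tau."""
    n = len(x) - (m - 1) * tau
    counts = {p: 0 for p in permutations(range(m))}
    for i in range(n):
        window = x[i: i + m * tau: tau]
        counts[tuple(np.argsort(window))] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float) / n
    return -(p * np.log(p)).sum() / np.log(factorial(m))

rng = np.random.default_rng(19)
t = np.linspace(0, 1, 4000)
healthy = np.sin(2 * np.pi * 60 * t) + 0.05 * rng.normal(size=t.size)
faulty = healthy + 0.6 * rng.normal(size=t.size)          # extra broadband noise

print("PE healthy:", round(permutation_entropy(healthy), 3))
print("PE faulty: ", round(permutation_entropy(faulty), 3))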
NASA Astrophysics Data System (ADS)
Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.
2015-12-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington) but with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
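A hedged sketch of the fit-and-bootstrap idea: a maximum-likelihood log-normal fit to storm maxima and a bootstrap interval on the exceedance rate of an extreme level. The -Dst values below are synthetic, not the 1957-2012 record, and the record length and distribution parameters are assumptions.

# ML log-normal fit and bootstrap CI for the rate of -Dst > 850 nT storms.
import numpy as np
from scipy import stats

rng = np.random.default_rng(20)
dst_max = rng.lognormal(mean=4.6, sigma=0.55, size=300)   # synthetic -Dst maxima (nT)
years = 56                                                # assumed record length

def events_per_century(sample, level=850.0):
    shape, loc, scale = stats.lognorm.fit(sample, floc=0)  # ML fit, origin fixed at 0
    return 100.0 * stats.lognorm.sf(level, shape, loc, scale) * len(sample) / years

boot = np.array([events_per_century(rng.choice(dst_max, dst_max.size, replace=True))
                 for _ in range(500)])
est = events_per_century(dst_max)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"-Dst > 850 nT: {est:.2f} per century (95% CI {lo:.2f}-{hi:.2f})")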
Labonté, Josiane; Roy, Jean-Philippe; Dubuc, Jocelyn; Buczinski, Sébastien
2015-06-01
Cardiac troponin I (cTnI) has been shown to be an accurate predictor of myocardial injury in cattle. The point-of-care i-STAT 1 immunoassay can be used to quantify blood cTnI in cattle. However, the cTnI reference interval in whole blood of healthy early lactating dairy cows remains unknown. The objective was to determine a blood cTnI reference interval in healthy early lactating Holstein dairy cows using the i-STAT 1 analyzer. Forty healthy lactating Holstein dairy cows (0-60 days in milk) were conveniently selected from four commercial dairy farms. Each selected cow was examined by a veterinarian and transthoracic echocardiography was performed. A cow-side blood cTnI concentration was measured at the same time. A bootstrap statistical analysis method using unrestricted resampling was used to determine a reference interval for blood cTnI values. Forty healthy cows were recruited in the study. Median blood cTnI was 0.02 ng/mL (minimum: 0.00, maximum: 0.05). Based on the bootstrap analysis method with 40 cases, the 95th percentile of cTnI values in healthy cows was 0.036 ng/mL (90% CI: 0.02-0.05 ng/mL). A reference interval for blood cTnI values in healthy lactating cows was determined. Further research is needed to determine whether cTnI blood values could be used to diagnose and provide a prognosis for cardiac and noncardiac diseases in lactating dairy cows. Copyright © 2015 Elsevier B.V. All rights reserved.
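A minimal sketch of the unrestricted bootstrap used above to place a confidence interval on the 95th percentile of cTnI; the input values are placeholders rather than the study data.

```python
# Bootstrap estimate of the upper reference limit (95th percentile) of blood
# cTnI with a 90% confidence interval, mirroring the unrestricted-resampling
# approach described above.  The cTnI values below are placeholders.
import numpy as np

rng = np.random.default_rng(2)
ctni = rng.uniform(0.00, 0.05, size=40)      # hypothetical ng/mL values from 40 cows

boot_p95 = [np.percentile(rng.choice(ctni, size=ctni.size, replace=True), 95)
            for _ in range(10000)]
point = np.percentile(ctni, 95)
lo, hi = np.percentile(boot_p95, [5, 95])    # 90% confidence interval
print(f"95th percentile = {point:.3f} ng/mL (90% CI {lo:.3f}-{hi:.3f})")
```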
[Investigation of reference intervals of blood gas and acid-base analysis assays in China].
Zhang, Lu; Wang, Wei; Wang, Zhiguo
2015-10-01
To investigate and analyze the upper and lower limits of reference intervals, and their sources, for blood gas and acid-base analysis assays. The reference interval data were collected from the first run of the 2014 External Quality Assessment (EQA) program for blood gas and acid-base analysis assays performed by the National Center for Clinical Laboratories (NCCL). All abnormal values and errors were eliminated. Statistical analysis was performed with SPSS 13.0 and Excel 2007 on the upper and lower limits and sources of reference intervals for 7 blood gas and acid-base analysis assays, i.e. pH value, partial pressure of carbon dioxide (PCO2), partial pressure of oxygen (PO2), Na+, K+, Ca2+ and Cl-. Values were further grouped by instrument system and the differences between groups were analyzed. There were 225 laboratories submitting information on the reference intervals they had been using. The three main sources of reference intervals were the National Guide to Clinical Laboratory Procedures [37.07% (400/1 079)], instructions of instrument manufacturers [31.23% (337/1 079)] and instructions of reagent manufacturers [23.26% (251/1 079)]. Approximately 35.1% (79/225) of the laboratories had validated the reference intervals they used. For most of the 7 assays, the between-laboratory differences in upper and lower limits were moderate, in terms of both the minimum and maximum (e.g. the upper limit of pH ranged from 7.00 to 7.45 and the lower limit of Na+ from 130.00 to 156.00 mmol/L) and the mean and median (e.g. the upper limit of K+ was 5.04 mmol/L and 5.10 mmol/L, and the upper limit of PCO2 was 45.65 mmHg and 45.00 mmHg, 1 mmHg = 0.133 kPa), as were the differences in P2.5 and P97.5 between instrument system groups. The Kruskal-Wallis test showed that the P values for the upper and lower limits of all parameters were lower than 0.001, excepting the lower limit of Na+ (P = 0.029). Mann-Whitney tests showed statistically significant differences among instrument system groups, and between most pairs of instrument system groups, for all assays. The differences in the reference intervals of blood gas and acid-base analysis assays used in Chinese laboratories are moderate, which is better than in other specialties of the clinical laboratory.
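The group comparisons reported above can be reproduced in outline with standard nonparametric tests; the sketch below applies a Kruskal-Wallis test and pairwise Mann-Whitney tests to hypothetical upper reference limits grouped by instrument system.

```python
# Comparing reported upper reference limits across instrument-system groups
# with Kruskal-Wallis and pairwise Mann-Whitney tests, as in the survey above.
# The three groups of limits below are hypothetical.
from itertools import combinations
from scipy import stats

groups = {
    "system_A": [7.45, 7.44, 7.45, 7.46, 7.45, 7.44],
    "system_B": [7.42, 7.43, 7.42, 7.44, 7.43, 7.42],
    "system_C": [7.45, 7.45, 7.46, 7.45, 7.44, 7.46],
}

h, p_overall = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_overall:.4f}")

for (name1, g1), (name2, g2) in combinations(groups.items(), 2):
    u, p = stats.mannwhitneyu(g1, g2, alternative="two-sided")
    print(f"{name1} vs {name2}: U = {u:.1f}, p = {p:.4f}")
```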
Meta-analysis of two studies in the presence of heterogeneity with applications in rare diseases.
Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat
2017-07-01
Random-effects meta-analyses are used to combine evidence of treatment effects from multiple studies. Since treatment effects may vary across trials due to differences in study characteristics, heterogeneity in treatment effects between studies must be accounted for to achieve valid inference. The standard model for random-effects meta-analysis assumes approximately normal effect estimates and a normal random-effects model. However, standard methods based on this model ignore the uncertainty in estimating the between-trial heterogeneity. In the special setting of only two studies and in the presence of heterogeneity, we investigate here alternatives such as the Hartung-Knapp-Sidik-Jonkman method (HKSJ), the modified Knapp-Hartung method (mKH, a variation of the HKSJ method) and Bayesian random-effects meta-analyses with priors covering plausible heterogeneity values; R code to reproduce the examples is presented in an appendix. The properties of these methods are assessed by applying them to five examples from various rare diseases and by a simulation study. Whereas the standard method based on normal quantiles has poor coverage, the HKSJ and mKH generally lead to very long, and therefore inconclusive, confidence intervals. The Bayesian intervals on the whole show satisfying properties and offer a reasonable compromise between these two extremes. © 2016 The Authors. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
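For readers who want the arithmetic of the Knapp-Hartung-type adjustment, the sketch below pools two hypothetical study estimates with a DerSimonian-Laird heterogeneity estimate and an HKSJ-style confidence interval; the modified Knapp-Hartung variant and the Bayesian analyses discussed above are not reproduced, and the effect sizes are placeholders.

```python
# Random-effects pooling of two study effect estimates with the
# Hartung-Knapp(-Sidik-Jonkman) adjustment.  Effect sizes and standard errors
# are placeholders; the Bayesian analyses of the paper are not shown.
import numpy as np
from scipy import stats

y = np.array([0.35, 0.80])        # study effect estimates (e.g. log odds ratios)
se = np.array([0.20, 0.25])       # their standard errors
v = se ** 2
k = len(y)

# DerSimonian-Laird between-study variance.
w = 1.0 / v
mu_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fixed) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooled estimate with HKSJ variance and a t(k-1) quantile.
# With k = 2 the t quantile (df = 1) is large, illustrating the long HKSJ
# intervals noted in the abstract above.
w_star = 1.0 / (v + tau2)
mu = np.sum(w_star * y) / np.sum(w_star)
q = np.sum(w_star * (y - mu) ** 2) / (k - 1)
half_width = stats.t.ppf(0.975, df=k - 1) * np.sqrt(q / np.sum(w_star))
print(f"pooled effect {mu:.3f}, 95% CI [{mu - half_width:.3f}, {mu + half_width:.3f}]")
```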
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
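A simplified sketch of a Bernstein-type confidence interval for a bounded mean is given below. It assumes observations in [0, B] and plugs the sample variance into Bernstein's tail bound, so it is only an approximation of the idea and not the authors' exact construction.

```python
# Conservative confidence interval for a bounded-population mean based on
# Bernstein's inequality, in the spirit of the approach described above.
# Simplified sketch: observations are assumed to lie in [0, B] and the
# unknown variance is replaced by the sample variance, so the interval is
# only approximately valid; it is not the authors' exact construction.
import numpy as np

def bernstein_ci(x, B, alpha=0.05):
    n = len(x)
    mean = np.mean(x)
    var = np.var(x, ddof=1)
    L = np.log(2.0 / alpha)
    # Solve n*t**2 - (2*B*L/3)*t - 2*var*L = 0 for the half-width t.
    b = 2.0 * B * L / 3.0
    t = (b + np.sqrt(b ** 2 + 8.0 * n * var * L)) / (2.0 * n)
    return mean - t, mean + t

rng = np.random.default_rng(3)
sample = rng.beta(2, 5, size=30)          # hypothetical small sample in [0, 1]
print(bernstein_ci(sample, B=1.0))
```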
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
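The resampling-simulation idea described above can be illustrated in a few lines of code: build a synthetic presence/absence census along a trail, subsample it at increasing point intervals, and compare the frequency-of-occurrence estimates with the census value. The synthetic census is a placeholder for real field data.

```python
# Resampling a simulated trail-impact census at increasing point-sampling
# intervals and comparing frequency-of-occurrence estimates with the census
# value, in the spirit of the simulation described above.
import numpy as np

rng = np.random.default_rng(4)
trail_m = 10_000                                   # trail length in metres
census = np.zeros(trail_m, dtype=bool)             # 1-m resolution presence/absence
for _ in range(40):                                # scatter 40 impact patches
    start = rng.integers(0, trail_m - 50)
    census[start:start + rng.integers(5, 50)] = True

true_freq = census.mean()                          # census frequency of occurrence
for interval in (20, 50, 100, 200, 500):
    points = np.arange(0, trail_m, interval)
    est = census[points].mean()
    err = 100 * (est - true_freq) / true_freq
    print(f"interval {interval:>3} m: estimate {est:.3f} (relative error {err:+.1f}%)")
```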
Advances in Statistical Methods for Substance Abuse Prevention Research
MacKinnon, David P.; Lockwood, Chondra M.
2010-01-01
The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467
Smuck, Matthew; Tomkins-Lane, Christy; Ith, Ma Agnes; Jarosz, Renata; Kao, Ming-Chih Jeffrey
2017-01-01
Background: Accurate measurement of physical performance in individuals with musculoskeletal pain is essential. Accelerometry is a powerful tool for this purpose, yet the current methods designed to evaluate energy expenditure are not optimized for this population. The goal of this study is to empirically derive a method of accelerometry analysis specifically for musculoskeletal pain populations. Methods: We extracted data from 6,796 participants in the 2003–4 National Health and Nutrition Examination Survey (NHANES) including: 7-day accelerometry, health and pain questionnaires, and anthropometrics. Custom macros were used for data processing, complex survey regression analyses, model selection, and statistical adjustment. After controlling for a multitude of variables that influence physical activity, we investigated whether distinct accelerometry profiles accompany pain in different locations of the body; and we identified the intensity intervals that best characterized these profiles. Results: Unique accelerometry profiles were observed for pain in different body regions, logically clustering together based on proximity. Based on this, the following novel intervals (counts/minute) were identified and defined: Performance Sedentary (PSE) = 1–100, Performance Light 1 (PL1) = 101–350, Performance Light 2 (PL2) = 351–800, Performance Light 3 (PL3) = 801–2500, and Performance Moderate/Vigorous (PMV) = 2501–30000. The refinement of accelerometry signals into these new intervals, including 3 distinct ranges that fit inside the established light activity range, best captures alterations in real-life physical performance as a result of regional pain. Discussion and conclusions: These new accelerometry intervals provide a model for objective measurement of real-life physical performance in people with pain and musculoskeletal disorders, with many potential uses. They may be used to better evaluate the relationship between pain and daily physical function, monitor musculoskeletal disease progression, gauge disease severity, inform exercise prescription, and quantify the functional impact of treatments. Based on these findings, we recommend that future studies of pain and musculoskeletal disorders analyze accelerometry output based on these new “physical performance” intervals. PMID:28235039
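Applying the proposed intervals to accelerometry output is a simple binning operation; the sketch below classifies hypothetical counts-per-minute data into the PSE, PL1, PL2, PL3, and PMV bands defined above.

```python
# Binning accelerometer counts/minute into the "physical performance" intervals
# proposed above (PSE, PL1, PL2, PL3, PMV) and summarising minutes per day in
# each band.  The counts below are placeholders for NHANES-style data.
import numpy as np
import pandas as pd

edges = [1, 101, 351, 801, 2501, 30001]            # interval cut points (counts/min)
labels = ["PSE", "PL1", "PL2", "PL3", "PMV"]

rng = np.random.default_rng(5)
counts = rng.integers(1, 6000, size=1440)          # one hypothetical day of minutes

bands = pd.Series(pd.cut(counts, bins=edges, labels=labels, right=False))
print(bands.value_counts().reindex(labels))        # minutes spent in each interval
```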
Prokinetics for the treatment of functional dyspepsia: Bayesian network meta-analysis.
Yang, Young Joo; Bang, Chang Seok; Baik, Gwang Ho; Park, Tae Young; Shin, Suk Pyo; Suk, Ki Tae; Kim, Dong Joon
2017-06-26
Controversies persist regarding the effect of prokinetics for the treatment of functional dyspepsia (FD). This study aimed to assess the comparative efficacy of prokinetic agents for the treatment of FD. Randomized controlled trials (RCTs) of prokinetics for the treatment of FD were identified from core databases. Symptom response rates were extracted and analyzed using odds ratios (ORs). A Bayesian network meta-analysis was performed using the Markov chain Monte Carlo method in WinBUGS and NetMetaXL. In total, 25 RCTs, which included 4473 patients with FD who were treated with 6 different prokinetics or placebo, were identified and analyzed. Metoclopramide showed the best surface under the cumulative ranking curve (SUCRA) probability (92.5%), followed by trimebutine (74.5%) and mosapride (63.3%). However, the therapeutic efficacy of metoclopramide was not significantly different from that of trimebutine (OR:1.32, 95% credible interval: 0.27-6.06), mosapride (OR: 1.99, 95% credible interval: 0.87-4.72), or domperidone (OR: 2.04, 95% credible interval: 0.92-4.60). Metoclopramide showed better efficacy than itopride (OR: 2.79, 95% credible interval: 1.29-6.21) and acotiamide (OR: 3.07, 95% credible interval: 1.43-6.75). Domperidone (SUCRA probability 62.9%) showed better efficacy than itopride (OR: 1.37, 95% credible interval: 1.07-1.77) and acotiamide (OR: 1.51, 95% credible interval: 1.04-2.18). Metoclopramide, trimebutine, mosapride, and domperidone showed better efficacy for the treatment of FD than itopride or acotiamide. Considering the adverse events related to metoclopramide or domperidone, the short-term use of these agents or the alternative use of trimebutine or mosapride could be recommended for the symptomatic relief of FD.
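SUCRA values such as those quoted above are computed from the rank probabilities produced by the Bayesian sampler; the sketch below shows the calculation for a small, hypothetical rank-probability matrix.

```python
# Computing SUCRA (surface under the cumulative ranking curve) values from a
# matrix of rank probabilities, as would be produced by the Bayesian network
# meta-analysis described above.  The matrix and treatment names are placeholders.
import numpy as np

# rows = treatments, columns = ranks (1 = best); each row sums to 1.
rank_prob = np.array([
    [0.60, 0.25, 0.15],
    [0.30, 0.45, 0.25],
    [0.10, 0.30, 0.60],
])
treatments = ["drug_A", "drug_B", "placebo"]

K = rank_prob.shape[1]
cumulative = np.cumsum(rank_prob, axis=1)[:, :-1]   # P(rank <= j), j = 1..K-1
sucra = cumulative.sum(axis=1) / (K - 1)
for name, s in zip(treatments, sucra):
    print(f"{name}: SUCRA = {100 * s:.1f}%")
```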
Al-Jasmi, Fatima; Al-Mansoor, Fatima; Alsheiba, Aisha; Carter, Anne O.; Carter, Thomas P.; Hossain, M. Moshaddeque
2002-01-01
OBJECTIVE: To investigate whether a short interpregnancy interval is a risk factor for preterm birth in Emirati women, where there is a wide range of interpregnancy intervals and uniformity in potentially confounding factors. METHODS: A case-control design based on medical records was used. A case was defined as a healthy multiparous Emirati woman delivering a healthy singleton spontaneously before 37 weeks of gestation between 1997 and 2000, and a control was defined as the next eligible similar woman delivering after 37 weeks of gestation. Women were excluded if there was no information available about their most recent previous pregnancy or if it had resulted in a multiple or preterm birth. Data collected from charts and delivery room records were analysed using the STATA statistical package. All variables found to be valid, stable and significant by univariate analysis were included in multivariate logistic regression analysis. FINDINGS: There were 128 cases who met the eligibility criteria; 128 controls were selected. Short interpregnancy intervals were significantly associated with case status (P<0.05). The multivariate adjusted odds ratios for the 1st, 2nd, and 4th quartiles of interpregnancy interval compared with the lowest-risk 3rd quartile were 8.2, 5.4, and 2.0 (95% confidence intervals: 3.5-19.2, 2.4-12.6, and 0.9-4.5 respectively). CONCLUSION: A short interpregnancy interval is a risk factor for spontaneous preterm birth in Emirati women. The magnitude of the risk and the risk gradient between exposure quartiles suggest that the risk factor is causal and that its modification would reduce the risk of preterm birth. PMID:12481208
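A sketch of the kind of multivariate logistic regression used above, producing adjusted odds ratios and 95% confidence intervals for interpregnancy-interval quartiles with the lowest-risk quartile as the reference; the data frame and variable names are hypothetical, not the study data.

```python
# Multivariate logistic regression producing adjusted odds ratios and 95%
# confidence intervals for interpregnancy-interval quartiles, analogous to the
# analysis above.  The data and covariates below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 256
df = pd.DataFrame({
    "case": rng.integers(0, 2, n),                    # 1 = preterm case, 0 = control
    "ipi_quartile": rng.integers(1, 5, n),            # interpregnancy-interval quartile
    "maternal_age": rng.normal(28, 5, n),
})

# Dummy-code quartiles with the lowest-risk quartile (Q3) as the reference.
X = pd.get_dummies(df["ipi_quartile"].astype("category"), prefix="Q", dtype=float)
X = X.drop(columns="Q_3")
X["maternal_age"] = df["maternal_age"]
X = sm.add_constant(X)

fit = sm.Logit(df["case"], X).fit(disp=False)
odds_ratios = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print(pd.concat([odds_ratios.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```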
Factor analytic reduction of the carotid-cardiac baroreflex parameters
NASA Technical Reports Server (NTRS)
Ludwig, David A.
1989-01-01
An accepted method for measuring the responsiveness of the carotid-cardiac baroreflex to arterial pressure changes is to artificially stimulate the baroreceptors in the neck. This is accomplished by using a pressurized neck cuff which constricts and distends the carotid artery and subsequently stimulates the baroreceptors. Nine physiological responses to this type of stimulation are quantified and used as indicators of the baroreflex. Thirty male humans between the ages of 27 and 46 underwent the carotid-cardiac baroreflex test. The data for the nine response parameters were analyzed by principal component factor analysis. The results of this analysis indicated that 93 percent of the total variance across all nine parameters could be explained in four dimensions. Examination of the factor loadings following an orthogonal rotation of the principal components indicated four well defined dimensions. The first two dimensions reflected location points for R-R interval and carotid distending pressure respectively. The third dimension was composed of measures reflecting the gain of the reflex. The fourth dimension was the ratio of the resting R-R interval to the R-R interval during simulated hypertension. The data suggest that the analysis of all nine baroreflex parameters is redundant.
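A present-day version of this kind of factor-analytic reduction can be sketched as follows; the data matrix is random placeholder data, and the varimax rotation option assumes a reasonably recent scikit-learn release. This is an illustration of the general approach, not a re-analysis of the original data.

```python
# Factor-analytic reduction of a 9-parameter response matrix with a varimax
# rotation, illustrating the kind of dimensionality check described above.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(7)
X = rng.normal(size=(30, 9))                     # 30 subjects x 9 baroreflex parameters
Xz = StandardScaler().fit_transform(X)

# How many components are needed to explain most of the variance?
pca = PCA().fit(Xz)
print(np.cumsum(pca.explained_variance_ratio_))

# Four-factor solution with varimax rotation; inspect the loadings.
fa = FactorAnalysis(n_components=4, rotation="varimax").fit(Xz)
print(np.round(fa.components_.T, 2))             # rows = parameters, cols = factors
```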
Four applications of permutation methods to testing a single-mediator model.
Taylor, Aaron B; MacKinnon, David P
2012-09-01
Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
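The sketch below implements one simple permutation scheme for the indirect effect ab: the mediator values are permuted to build a null distribution. This is illustrative only and is not necessarily the exact resampling scheme evaluated in the study; the data are simulated.

```python
# A simplified permutation test of the indirect effect a*b in a single-mediator
# model.  The null distribution is built by permuting the mediator, which
# breaks the X->M and M->Y paths; this is one illustrative scheme, not
# necessarily the exact procedure evaluated in the study.  Data are simulated.
import numpy as np

rng = np.random.default_rng(8)
n = 200
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)                  # true a = 0.4
y = 0.5 * m + 0.1 * x + rng.normal(size=n)        # true b = 0.5

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                    # slope of M on X
    X = np.column_stack([np.ones_like(x), x, m])  # Y regressed on X and M
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

obs = indirect_effect(x, m, y)
null = np.array([indirect_effect(x, rng.permutation(m), y) for _ in range(2000)])
p_value = np.mean(np.abs(null) >= abs(obs))
print(f"ab = {obs:.3f}, permutation p = {p_value:.4f}")
```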
Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif
2017-05-01
Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect arrival time of these fiducial points, such as P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated based on this arrival time, and are used as ECG features to generate random BSes with low latency. According to our analysis on real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with the schemes that solely rely on IPIs to generate BSes, this MFBSG algorithm uses five feature values from one heart beat cycle, and can be up to five times faster than the solely IPI-based methods. So, it achieves a design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
Dynamic association rules for gene expression data analysis.
Chen, Shu-Chuan; Tsai, Tsung-Hsien; Chung, Cheng-Han; Li, Wen-Hsiung
2015-10-14
The purpose of gene expression analysis is to look for the association between regulation of gene expression levels and phenotypic variations. This association based on gene expression profiles has been used to determine whether the induction/repression of genes corresponds to phenotypic variations including cell regulation, clinical diagnoses and drug development. Statistical analyses of microarray data have been developed to resolve the gene selection issue. However, these methods do not inform us of causality between genes and phenotypes. In this paper, we propose the dynamic association rule algorithm (DAR algorithm), which helps one to efficiently select a subset of significant genes for subsequent analysis. The DAR algorithm is based on association rules from market basket analysis in marketing. We first propose a statistical way, based on constructing a one-sided confidence interval and hypothesis testing, to determine if an association rule is meaningful. Based on the proposed statistical method, we then developed the DAR algorithm for gene expression data analysis. The method was applied to analyze four microarray datasets and one Next Generation Sequencing (NGS) dataset: the Mice Apo A1 dataset, the whole genome expression dataset of mouse embryonic stem cells, expression profiling of the bone marrow of leukemia patients, the Microarray Quality Control (MAQC) dataset and the RNA-seq dataset of a mouse genomic imprinting study. A comparison of the proposed method with the t-test on the expression profiling of the bone marrow of leukemia patients was conducted. We developed a statistical way, based on the concept of the confidence interval, to determine the minimum support and minimum confidence for mining association relationships among items. With the minimum support and minimum confidence, one can find significant rules in one single step. The DAR algorithm was then developed for gene expression data analysis. The gene expression datasets showed that the proposed DAR algorithm not only was able to identify a set of differentially expressed genes that largely agreed with those of other methods, but also provided an efficient and accurate way to find influential genes of a disease. In this paper, the well-established association rule mining technique from marketing has been successfully modified to determine the minimum support and minimum confidence based on the concept of the confidence interval and hypothesis testing. It can be applied to gene expression data to mine significant association rules between gene regulation and phenotype. The proposed DAR algorithm provides an efficient way to find influential genes that underlie the phenotypic variance.
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Siegel, David A.; Obrien, Margaret C.; Sorensen, Jen C.; Konnoff, Daniel A.; Brody, Eric A.; Mueller, James L.; Davis, Curtiss O.; Rhea, W. Joseph
1995-01-01
The accurate determination of upper ocean apparent optical properties (AOPs) is essential for the vicarious calibration of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) instrument and the validation of the derived data products. To evaluate the role that data analysis methods have upon values of derived AOPs, the first Data Analysis Round-Robin (DARR-94) workshop was sponsored by the SeaWiFS Project during 21-23 July 1994. The focus of this intercomparison study was the estimation of the downwelling irradiance spectrum just beneath the sea surface, E(sub d)(0(sup -), lambda); the upwelling nadir radiance just beneath the sea surface, L(sub u)(0(sup -), lambda); and the vertical profile of the diffuse attenuation coefficient spectrum, K(sub d)(z, lambda). In the results reported here, different methodologies from four research groups were applied to an identical set of 10 spectroradiometry casts in order to evaluate the degree to which data analysis methods influence AOP estimation, and whether any general improvements can be made. The overall results of DARR-94 are presented in Chapter 1 and the individual methods of the four groups are presented in Chapters 2-5. The DARR-94 results do not show a clear winner among the data analysis methods evaluated. It is apparent, however, that some degree of outlier rejection is required in order to accurately estimate L(sub u)(0(sup -), lambda) or E(sub d)(0(sup -), lambda). Furthermore, the calculation, evaluation, and exploitation of confidence intervals for the AOP determinations need to be explored. That is, the SeaWiFS calibration and validation problem should be recast in statistical terms where the in situ AOP values are statistical estimates with known confidence intervals.
Validation of Heart Rate Monitor Polar RS800 for Heart Rate Variability Analysis During Exercise.
Hernando, David; Garatachea, Nuria; Almeida, Rute; Casajús, Jose A; Bailón, Raquel
2018-03-01
Hernando, D, Garatachea, N, Almeida, R, Casajús, JA, and Bailón, R. Validation of heart rate monitor Polar RS800 for heart rate variability analysis during exercise. J Strength Cond Res 32(3): 716-725, 2018-Heart rate variability (HRV) analysis during exercise is an interesting noninvasive tool to measure the cardiovascular response to the stress of exercise. Wearable heart rate monitors are a comfortable option to measure interbeat (RR) intervals while doing physical activities. It is necessary to evaluate the agreement between HRV parameters derived from the RR series recorded by wearable devices and those derived from an electrocardiogram (ECG) during dynamic exercise of low to high intensity. Twenty-three male volunteers performed an exercise stress test on a cycle ergometer. Subjects wore a Polar RS800 device, whereas ECG was also recorded simultaneously to extract the reference RR intervals. A time-frequency spectral analysis was performed to extract the instantaneous mean heart rate (HRM), and the power of low-frequency (PLF) and high-frequency (PHF) components, the latter centered on the respiratory frequency. Analysis was done in intervals of different exercise intensity based on oxygen consumption. Linear correlation, reliability, and agreement were computed in each interval. The agreement between the RR series obtained from the Polar device and from the ECG is high throughout the whole test although the shorter the RR is, the more differences there are. Both methods are interchangeable when analyzing HRV at rest. At high exercise intensity, HRM and PLF still presented a high correlation (ρ > 0.8) and excellent reliability and agreement indices (above 0.9). However, the PHF measurements from the Polar showed reliability and agreement coefficients around 0.5 or lower when the level of the exercise increases (for levels of O2 above 60%).
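As a simplified stationary counterpart to the time-frequency analysis described above, the sketch below interpolates an RR series to a uniform grid and estimates LF and HF power with a Welch periodogram. The RR data are simulated, and the respiratory-frequency-centred HF band of the paper is replaced by the conventional fixed band.

```python
# Estimating low-frequency (LF) and high-frequency (HF) HRV power from a
# sequence of RR intervals.  The paper uses a time-frequency method with the
# HF band centred on the respiratory frequency; this simpler stationary sketch
# interpolates the RR series to a uniform grid and uses a Welch periodogram.
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

rng = np.random.default_rng(9)
rr = 0.8 + 0.02 * rng.standard_normal(600)        # simulated RR intervals (s)
t_beats = np.cumsum(rr)

fs = 4.0                                           # resampling frequency (Hz)
t_uniform = np.arange(t_beats[0], t_beats[-1], 1 / fs)
rr_uniform = interp1d(t_beats, rr, kind="cubic")(t_uniform)

f, pxx = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)
lf_band = (f >= 0.04) & (f < 0.15)
hf_band = (f >= 0.15) & (f < 0.40)
lf = np.trapz(pxx[lf_band], f[lf_band])
hf = np.trapz(pxx[hf_band], f[hf_band])
print(f"P_LF = {lf:.2e} s^2, P_HF = {hf:.2e} s^2, LF/HF = {lf / hf:.2f}")
```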
Brown, Angus M
2010-04-01
The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F, the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
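The same workflow the spreadsheet template implements (one-way ANOVA followed by pairwise 95% confidence intervals for differences between means) can be sketched in Python as follows; the three samples are placeholders.

```python
# One-way ANOVA followed by Tukey HSD pairwise comparisons with 95% confidence
# intervals for the differences between sample means, mirroring the workflow
# of the spreadsheet template described above.  The samples are placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(10)
a = rng.normal(10.0, 2.0, 20)
b = rng.normal(12.5, 2.0, 20)
c = rng.normal(10.5, 2.0, 20)

f_stat, p = stats.f_oneway(a, b, c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

values = np.concatenate([a, b, c])
groups = np.repeat(["A", "B", "C"], 20)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # pairwise CIs for mean differences
```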
Fuzzy based finger vein recognition with rotation invariant feature matching
NASA Astrophysics Data System (ADS)
Ezhilmaran, D.; Joseph, Rose Bindu
2017-11-01
Finger vein recognition is a promising biometric with commercial applications which has been explored widely in recent years. In this paper, a finger vein recognition system is proposed that uses rotation-invariant feature descriptors for matching after enhancing the finger vein images with an interval type-2 fuzzy method. SIFT features are extracted and matched using a matching score based on Euclidean distance. Rotation invariance of the proposed method is verified experimentally and the results are compared with SURF matching and minutiae matching. The analysis shows that rotation invariance is achieved and that poor image quality issues are handled efficiently by the designed finger vein recognition system. The experiments underline the robustness and reliability of the interval type-2 fuzzy enhancement and SIFT feature matching.
NASA Technical Reports Server (NTRS)
Roman, Monserrate C.; Jones, Kathy U.; Oubre, Cherie M.; Castro, Victoria; Ott, Mark C.; Birmele, Michele; Venkateswaran, Kasthuri J.; Vaishampayan, Parag A.
2013-01-01
Current methods for microbial detection: a) Labor- and time-intensive cultivation-based approaches that can fail to detect or characterize all cells present. b) Require collection of samples on orbit and transportation back to the ground for analysis. Disadvantages of current detection methods: a) Unable to perform quick and reliable detection on orbit. b) Lengthy sampling intervals. c) No microbe identification.
“Magnitude-based Inference”: A Statistical Review
Welsh, Alan H.; Knight, Emma J.
2015-01-01
Purpose: We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods: We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions: We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387
Multivariable nonlinear analysis of foreign exchange rates
NASA Astrophysics Data System (ADS)
Suzuki, Tomoya; Ikeguchi, Tohru; Suzuki, Masuo
2003-05-01
We analyze the multivariable time series of foreign exchange rates: price movements, which have often been analyzed, together with dealing time intervals and spreads between bid and ask prices. Treating dealing time intervals as event timings, analogous to neuronal firings, we use raster plots (RPs) and peri-stimulus time histograms (PSTHs), which are popular methods in the field of neurophysiology. Introducing special processing to obtain RPs and PSTHs for the exchange rate time series, we discover that there exist dynamical interactions among the three variables. We also find that adopting multiple variables leads to improvements in prediction accuracy.
Tobacco smoking and oral clefts: a meta-analysis.
Little, Julian; Cardy, Amanda; Munger, Ronald G.
2004-01-01
OBJECTIVE: To examine the association between maternal smoking and non-syndromic orofacial clefts in infants. METHODS: A meta-analysis of the association between maternal smoking during pregnancy was carried out using data from 24 case-control and cohort studies. FINDINGS: Consistent, moderate and statistically significant associations were found between maternal smoking and cleft lip, with or without cleft palate (relative risk 1.34, 95% confidence interval 1.25-1.44) and between maternal smoking and cleft palate (relative risk 1.22, 95% confidence interval 1.10-1.35). There was evidence of a modest dose-response effect for cleft lip with or without cleft palate. CONCLUSION: The evidence of an association between maternal tobacco smoking and orofacial clefts is strong enough to justify its use in anti-smoking campaigns. PMID:15112010
Automated measurements for individualized heart rate correction of the QT interval.
Mason, Jay W; Moon, Thomas E
2015-04-01
Subject-specific electrocardiographic QT interval correction for heart rate is often used in clinical trials with frequent electrocardiographic recordings. However, in these studies relatively few 10-s, 12-lead electrocardiograms may be available for calculating the individual correction. Highly automated QT and RR measurement tools have made it practical to measure electrocardiographic intervals on large volumes of continuous electrocardiogram data. The purpose of this study was to determine whether an automated method can be used in lieu of a manual method. In 49 subjects who completed all treatments in a four-armed crossover study we compared two methods for derivation of individualized rate-correction coefficients: manual measurement on 10-s electrocardiograms and automated measurement of QT and RR during continuous 24-h electrocardiogram recordings. The four treatments, received by each subject in a Latin-square randomization sequence, were placebo, moxifloxacin, and two doses of an investigational drug. Analysis of continuous electrocardiogram data yielded a lower standard deviation of QT:RR regression values than the manual method, though the differences were not statistically significant. The within-subject and within-treatment coefficients of variation between the manual and automated methods were not significantly different. Corrected QT values from the two methods had similar rates of true and false positive identification of moxifloxacin's QT prolonging effect. An automated method for individualized rate correction applied to continuous electrocardiogram data could be advantageous in clinical trials, as the automated method is simpler, is based upon a much larger volume of data, yields similar results, and requires no human over-reading of the measurements. © The Author(s) 2015.
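Individualized rate correction of the kind compared above is typically obtained by fitting the slope of log(QT) on log(RR) for each subject and applying QTc = QT / RR^b; the sketch below shows that calculation on simulated QT/RR pairs.

```python
# Subject-specific heart-rate correction of the QT interval: fit the slope b in
# log(QT) = log(a) + b*log(RR) from a subject's drug-free recordings, then
# apply QTc = QT / RR**b.  The QT/RR pairs below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(11)
rr = rng.uniform(0.6, 1.2, 200)                       # RR intervals (s)
qt = 0.42 * rr ** 0.35 + rng.normal(0, 0.005, 200)    # QT intervals (s)

b, log_a = np.polyfit(np.log(rr), np.log(qt), 1)      # individualized exponent b
qtc = qt / rr ** b                                    # corrected QT, decorrelated from RR
print(f"b = {b:.3f}; corr(QTc, RR) = {np.corrcoef(qtc, rr)[0, 1]:+.3f}")
```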
Sampling interval analysis and CDF generation for grain-scale gravel bed topography
USDA-ARS?s Scientific Manuscript database
In river hydraulics, there is a continuing need for characterizing bed elevations to arrive at quantitative roughness measures that can be used in predicting flow depth and for improved prediction of fine-sediment transport over and through coarse beds. Recently published prediction methods require...
Advanced Tomographic Imaging Methods for the Analysis of Materials
1991-08-01
used in composite manufacture: aluminum, silicon carbide, and titanium aluminide. Also depicted in Fig. 2 are the energy intervals which can... SiC-fiber (SCS6) in a titanium-aluminide matrix. The contrast between SiC and TiAl is only 10% over a broad energy range. Therefore, distinguishing the... borehole logging, corrodent detection on turbine blades, kerogen analysis of shale, and contents of coals (sulfur, minerals, and Btu). APSTNG
Time Transfer from Combined Analysis of GPS and TWSTFT Data
2008-12-01
This paper presents the time transfer results obtained from the combination of GPS data and TWSTFT data. Two different methods...view, constrained by TWSTFT data. Using the Vondrak-Cepek algorithm, the second approach (named PPP+TW) combines the TWSTFT time transfer data with
"Magnitude-based inference": a statistical review.
Welsh, Alan H; Knight, Emma J
2015-04-01
We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.
A comprehensive prediction and evaluation method of pilot workload
Feng, Chuanyan; Wanyan, Xiaoru; Yang, Kun; Zhuang, Damin; Wu, Xu
2018-01-01
BACKGROUND: The prediction and evaluation of pilot workload is a key problem in the human factors airworthiness of the cockpit. OBJECTIVE: A pilot traffic pattern task was designed in a flight simulation environment in order to carry out pilot workload prediction and improve the evaluation method. METHODS: Predictions of workload for typical flight subtasks and dynamic phases (cruise, approach, and landing) were built based on multiple resource theory, and favorable validity was achieved, as verified by correlation analysis between sensitive physiological data and the predicted values. RESULTS: Statistical analysis indicated that eye movement indices (fixation frequency, mean fixation time, saccade frequency, mean saccade time, and mean pupil diameter), electrocardiogram indices (mean normal-to-normal interval and the ratio between low frequency and the sum of low frequency and high frequency), and electrodermal activity indices (mean tonic and mean phasic) were all sensitive to the typical workloads of subjects. CONCLUSION: A multinomial logistic regression model based on a combination of physiological indices (fixation frequency, mean normal-to-normal interval, the ratio between low frequency and the sum of low frequency and high frequency, and mean tonic) was constructed, and the discrimination accuracy was relatively high at 84.85%. PMID:29710742
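A sketch of a multinomial logistic regression on the four physiological indices named in the conclusion, with simulated features and workload labels; it illustrates the modelling step only, not the study's data or exact fitting procedure.

```python
# Multinomial logistic regression of workload level on the four physiological
# indices named above (fixation frequency, mean NN interval, LF/(LF+HF) ratio,
# mean tonic EDA).  The feature matrix and labels are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(12)
n = 150
X = np.column_stack([
    rng.normal(3.0, 0.5, n),        # fixation frequency (1/s)
    rng.normal(800, 80, n),         # mean NN interval (ms)
    rng.uniform(0.4, 0.9, n),       # LF / (LF + HF)
    rng.normal(5.0, 1.5, n),        # mean tonic EDA (uS)
])
y = rng.integers(0, 3, n)           # workload class: cruise / approach / landing

clf = make_pipeline(StandardScaler(),
                    LogisticRegression(multi_class="multinomial", max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2%}")
```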
Quantification of Uncertainty in the Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Kasiapillai Sudalaimuthu, K.; He, J.; Swami, D.
2017-12-01
Flood frequency analysis (FFA) is usually carried out for the planning and design of water resources and hydraulic structures. Owing to variability in sample representation, selection of the distribution and estimation of distribution parameters, the estimation of flood quantiles has always been uncertain. Hence, suitable approaches must be developed to quantify this uncertainty in the form of prediction intervals as an alternative to the deterministic approach. The framework developed in the present study to include uncertainty in FFA uses a multi-objective optimization approach to construct the prediction interval from an ensemble of flood quantiles. Through this approach, an optimal variability of distribution parameters is identified to carry out FFA. To demonstrate the proposed approach, annual maximum flow data from two gauge stations (Bow River at Calgary and Banff, Canada) are used. The major focus of the present study was to evaluate the changes in the magnitude of flood quantiles due to the recent extreme flood event that occurred in 2013. In addition, the efficacy of the proposed method was verified using standard bootstrap-based sampling approaches, and the proposed method was found to be reliable in modeling extreme floods as compared to the bootstrap methods.
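The bootstrap benchmark mentioned above can be sketched as follows: fit a GEV distribution to annual maxima and bootstrap the 100-year quantile. This is the standard comparison method, not the multi-objective optimization framework of the study, and the data are simulated.

```python
# Bootstrap interval for a 100-year flood quantile from a GEV fit to annual
# maximum flows - the standard bootstrap benchmark, not the multi-objective
# optimization framework of the study.  Data are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
annual_max = stats.genextreme.rvs(c=-0.1, loc=300, scale=80, size=80, random_state=rng)
T = 100                                              # return period (years)

def quantile_T(sample):
    c, loc, scale = stats.genextreme.fit(sample)
    return stats.genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)

q_hat = quantile_T(annual_max)
boot = [quantile_T(rng.choice(annual_max, size=annual_max.size, replace=True))
        for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Q100 = {q_hat:.0f} m^3/s, 95% bootstrap interval [{lo:.0f}, {hi:.0f}]")
```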
Ford, C H; Tsaltas, G C; Osborne, P A; Addetia, K
1996-03-01
A flow cytometric method of studying the internalization of a monoclonal antibody (Mab) directed against carcinoembryonic antigen (CEA) has been compared with Western blotting, using three human colonic cancer cell lines which express varying amounts of the target antigen. Cell samples incubated for increasing time intervals with fluoresceinated or unlabelled Mab were analyzed using flow cytometry or polyacrylamide gel electrophoresis and Western blotting. SDS/PAGE analysis of cytosolic and membrane components of solubilized cells from the cell lines provided evidence of non-degraded internalized anti-CEA Mab throughout seven half-hour intervals, starting at 5 min. Internalized anti-CEA was detected in the case of the high CEA-expressing cell lines (LS174T, SKCO1). Very similar results were obtained with an anti-fluorescein flow cytometric assay. Given that these two methods consistently provided comparable results, use of flow cytometry for the detection of internalized antibody is suggested as a rapid alternative to most currently used methods for assessing antibody internalization. The question of the endocytic route followed by CEA-anti-CEA complexes was addressed by using hypertonic medium to block clathrin-mediated endocytosis.
[Automatic Extraction and Analysis of Dosimetry Data in Radiotherapy Plans].
Song, Wei; Zhao, Di; Lu, Hong; Zhang, Biyun; Ma, Jun; Yu, Dahai
To improve the efficiency and accuracy of extraction and analysis of dosimetry data in radiotherapy plans for a batch of patients. With the interface functions provided by the Matlab platform, a program was written to extract the dosimetry data exported from the treatment planning system in DICOM RT format and to export the dose-volume data to an Excel file in an SPSS-compatible format. This method was compared with manual operation for 14 gastric carcinoma patients to validate its efficiency and accuracy. The output Excel data were compatible with SPSS in format; the dosimetry data errors for the PTV dose interval of 90%-98%, the PTV dose interval of 99%-106% and all OARs were -3.48E-5 ± 3.01E-5, -1.11E-3 ± 7.68E-4 and -7.85E-5 ± 9.91E-5, respectively. Compared with manual operation, the time required was reduced from 5.3 h to 0.19 h and the input error was reduced from 0.002 to 0. The automatic extraction of dosimetry data in DICOM RT format for a batch of patients, SPSS-compatible data export and quick analysis were achieved in this work. The efficiency of clinical research based on dosimetry data analysis of large numbers of patients will be improved with this method.
Identification of speech transients using variable frame rate analysis and wavelet packets.
Rasetshwane, Daniel M; Boston, J Robert; Li, Ching-Chung
2006-01-01
Speech transients are important cues for identifying and discriminating speech sounds. Yoo et al. and Tantibundhit et al. were successful in identifying speech transients and, by emphasizing them, improving the intelligibility of speech in noise. However, their methods are computationally intensive and unsuitable for real-time applications. This paper presents a method to identify and emphasize speech transients that combines subband decomposition by the wavelet packet transform with variable frame rate (VFR) analysis and unvoiced consonant detection. The VFR analysis is applied to each wavelet packet to define a transitivity function that describes the extent to which the wavelet coefficients of that packet are changing. Unvoiced consonant detection is used to identify unvoiced consonant intervals, and the transitivity function is amplified during these intervals. The wavelet coefficients are multiplied by the transitivity function for that packet, amplifying the coefficients localized at times when they are changing and attenuating coefficients at times when they are steady. The inverse transform of the modified wavelet packet coefficients produces a signal corresponding to speech transients similar to the transients identified by Yoo et al. and Tantibundhit et al. A preliminary implementation of the algorithm runs more efficiently than these earlier methods.
Melkonian, D; Korner, A; Meares, R; Bahramali, H
2012-10-01
A novel method for the time-frequency analysis of non-stationary heart rate variability (HRV) is developed which introduces the fragmentary spectrum as a measure that brings together the frequency content, timing and duration of HRV segments. The fragmentary spectrum is calculated by the similar basis function algorithm. This numerical tool for time-to-frequency and frequency-to-time Fourier transformations accepts both uniform and non-uniform sampling intervals, and is applicable to signal segments of arbitrary length. Once the fragmentary spectrum is calculated, the inverse transform recovers the original signal and reveals the accuracy of the spectral estimates. Numerical experiments show that discontinuities at the boundaries of the succession of inter-beat intervals can cause unacceptable distortions of the spectral estimates. We have developed a measure that we call the "RR deltagram" as a form of the HRV data that minimises spectral errors. The analysis of experimental HRV data from real-life and controlled-breathing conditions suggests transient oscillatory components as functionally meaningful elements of the highly complex and irregular patterns of HRV. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
A real-time approach for heart rate monitoring using a Hilbert transform in seismocardiograms.
Jafari Tadi, Mojtaba; Lehtonen, Eero; Hurnanen, Tero; Koskinen, Juho; Eriksson, Jonas; Pänkäälä, Mikko; Teräs, Mika; Koivisto, Tero
2016-11-01
Heart rate monitoring helps in assessing the functionality and condition of the cardiovascular system. We present a new real-time applicable approach for estimating beat-to-beat time intervals and heart rate in seismocardiograms acquired from a tri-axial microelectromechanical accelerometer. Seismocardiography (SCG) is a non-invasive method for heart monitoring which measures the mechanical activity of the heart. Measuring true beat-to-beat time intervals from SCG could be used for monitoring of the heart rhythm, for heart rate variability analysis and for many other clinical applications. In this paper we present the Hilbert adaptive beat identification technique for the detection of heartbeat timings and inter-beat time intervals in SCG from healthy volunteers in three different positions, i.e. supine, left and right recumbent. Our method is electrocardiogram (ECG) independent, as it does not require any ECG fiducial points to estimate the beat-to-beat intervals. The performance of the algorithm was tested against standard ECG measurements. The average true positive rate, positive predictive value and detection error rate for the different positions were, respectively, supine (95.8%, 96.0% and ≃0.6%), left (99.3%, 98.8% and ≃0.001%) and right (99.53%, 99.3% and ≃0.01%). High correlation and agreement was observed between SCG and ECG inter-beat intervals (r > 0.99) for all positions, which highlights the capability of the algorithm for SCG heart monitoring from different positions. Additionally, we demonstrate the applicability of the proposed method in smartphone based SCG. In conclusion, the proposed algorithm can be used for real-time continuous unobtrusive cardiac monitoring, smartphone cardiography, and in wearable devices aimed at health and well-being applications.
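The general idea of envelope-based beat detection in an SCG can be sketched with a band-pass filter, a Hilbert-transform envelope, and peak picking, as below; this simplified illustration is not the full Hilbert adaptive beat identification technique of the paper, and the signal is simulated.

```python
# Simplified envelope-based beat detection for a seismocardiogram: band-pass
# the signal, take the Hilbert-transform envelope, and locate envelope peaks
# to obtain beat-to-beat intervals.  This is only a sketch of the general idea,
# not the full adaptive algorithm of the paper; the SCG signal is simulated.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

fs = 200.0                                           # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
# Simulated SCG: a short oscillatory burst around each heartbeat (~72 bpm) plus noise.
beat_times = np.arange(0.5, 30, 60 / 72)
scg = sum(np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2)) * np.sin(2 * np.pi * 25 * (t - bt))
          for bt in beat_times) + 0.05 * np.random.default_rng(14).standard_normal(t.size)

b, a = butter(4, [10, 40], btype="bandpass", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, scg)))
peaks, _ = find_peaks(envelope, height=0.3 * envelope.max(), distance=int(0.4 * fs))

ibi = np.diff(t[peaks])                              # inter-beat intervals (s)
print(f"mean heart rate ~ {60 / ibi.mean():.1f} bpm")
```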
Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high-performance results include AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+ = 0.87 with interval [0.817;0.906] for four LSW selected features, compared to 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Hafezalkotob, Arian; Hafezalkotob, Ashkan
2017-06-01
A target-based MADM method covers beneficial and non-beneficial attributes as well as target values for some attributes. Such techniques are considered comprehensive forms of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values. In some such problems, the values of the decision matrix and the target-based attributes can be provided as intervals. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce the degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the distance between interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of the degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterials selection for hip and knee prostheses are discussed. Preference-degree-based ranking lists for the subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resulting rankings for the problem are compared with the outcomes of other target-based models in the literature.
Data Analysis Techniques for Physical Scientists
NASA Astrophysics Data System (ADS)
Pruneau, Claude A.
2017-10-01
Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.
Using operations research to plan improvement of the transport of critically ill patients.
Chen, Jing; Awasthi, Anjali; Shechter, Steven; Atkins, Derek; Lemke, Linda; Fisher, Les; Dodek, Peter
2013-01-01
Operations research is the application of mathematical modeling, statistical analysis, and mathematical optimization to understand and improve processes in organizations. The objective of this study was to illustrate how the methods of operations research can be used to identify opportunities to reduce the absolute value and variability of interfacility transport intervals for critically ill patients. After linking data from two patient transport organizations in British Columbia, Canada, for all critical care transports during the calendar year 2006, the steps for transfer of critically ill patients were tabulated into a series of time intervals. Statistical modeling, root-cause analysis, Monte Carlo simulation, and sensitivity analysis were used to test the effect of changes in component intervals on overall duration and variation of transport times. Based on quality improvement principles, we focused on reducing the 75th percentile and standard deviation of these intervals. We analyzed a total of 3808 ground and air transports. Constraining time spent by transport personnel at sending and receiving hospitals was projected to reduce the total time taken by 33 minutes with as much as a 20% reduction in standard deviation of these transport intervals in 75% of ground transfers. Enforcing a policy of requiring acceptance of patients who have life- or limb-threatening conditions or organ failure was projected to reduce the standard deviation of air transport time by 63 minutes and the standard deviation of ground transport time by 68 minutes. Based on findings from our analyses, we developed recommendations for technology renovation, personnel training, system improvement, and policy enforcement. Use of the tools of operations research identifies opportunities for improvement in a complex system of critical care transport.
NASA Astrophysics Data System (ADS)
Otto, Friederike E. L.; van der Wiel, Karin; van Oldenborgh, Geert Jan; Philip, Sjoukje; Kew, Sarah F.; Uhe, Peter; Cullen, Heidi
2018-02-01
On 4-6 December 2015, storm Desmond caused very heavy rainfall in Northern England and Southern Scotland which led to widespread flooding. A week after the event we provided an initial assessment of the influence of anthropogenic climate change on the likelihood of one-day precipitation events averaged over an area encompassing Northern England and Southern Scotland, using data and methods available immediately after the event occurred. The analysis was based on three independent methods of extreme event attribution: historical observed trends, coupled climate model simulations and a large ensemble of regional model simulations. All three methods agreed that the effect of climate change was positive, making precipitation events like this about 40% more likely, with a provisional 2.5%-97.5% confidence interval of 5%-80%. Here we revisit the assessment using more station data, an additional monthly event definition, a second global climate model and regional model simulations of winter 2015/16. The overall result of the analysis is similar to the real-time analysis, with a best estimate of a 59% increase in event frequency but a larger confidence interval that does include no change. It is important to highlight that the observational data in the additional monthly analysis represent not only the rainfall associated with storm Desmond but also that of storms Eve and Frank, which occurred towards the end of the month.
Novel Screening Tool for Stroke Using Artificial Neural Network.
Abedi, Vida; Goyal, Nitin; Tsivgoulis, Georgios; Hosseinichimeh, Niyousha; Hontecillas, Raquel; Bassaganya-Riera, Josep; Elijovich, Lucas; Metter, Jeffrey E; Alexandrov, Anne W; Liebeskind, David S; Alexandrov, Andrei V; Zand, Ramin
2017-06-01
The timely diagnosis of stroke at the initial examination is extremely important given the disease morbidity and narrow time window for intervention. The goal of this study was to develop a supervised learning method to recognize acute cerebral ischemia (ACI) and differentiate that from stroke mimics in an emergency setting. Consecutive patients presenting to the emergency department with stroke-like symptoms, within 4.5 hours of symptoms onset, in 2 tertiary care stroke centers were randomized for inclusion in the model. We developed an artificial neural network (ANN) model. The learning algorithm was based on backpropagation. To validate the model, we used a 10-fold cross-validation method. A total of 260 patients (equal number of stroke mimics and ACIs) were enrolled for the development and validation of our ANN model. Our analysis indicated that the average sensitivity and specificity of ANN for the diagnosis of ACI based on the 10-fold cross-validation analysis was 80.0% (95% confidence interval, 71.8-86.3) and 86.2% (95% confidence interval, 78.7-91.4), respectively. The median precision of ANN for the diagnosis of ACI was 92% (95% confidence interval, 88.7-95.3). Our results show that ANN can be an effective tool for the recognition of ACI and differentiation of ACI from stroke mimics at the initial examination. © 2017 American Heart Association, Inc.
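A hedged sketch of the validation scheme described above, using scikit-learn's backpropagation-trained multilayer perceptron and 10-fold cross-validation on synthetic stand-in data; the study's clinical predictors and network architecture are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the 260-patient data set (features and ACI labels)
rng = np.random.default_rng(1)
X = rng.normal(size=(260, 12))
y = rng.integers(0, 2, size=260)

ann = MLPClassifier(hidden_layer_sizes=(16,), solver="adam", max_iter=2000, random_state=1)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
y_pred = cross_val_predict(ann, X, y, cv=cv)   # out-of-fold predictions

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```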
Choudhuri, Indrajit; MacCarter, Dean; Shaw, Rachael; Anderson, Steve; St Cyr, John; Niazi, Imran
2014-11-01
One-third of eligible patients fail to respond to cardiac resynchronization therapy (CRT). Current methods to "optimize" the atrio-ventricular (A-V) interval are performed at rest, which may limit their efficacy during daily activities. We hypothesized that low-intensity cardiopulmonary exercise testing (CPX) could identify the most favorable physiologic combination of specific gas exchange parameters reflecting pulmonary blood flow or cardiac output, stroke volume, and left atrial pressure to guide determination of the optimal A-V interval. We assessed the relative feasibility of determining the optimal A-V interval by three methods in 17 patients who underwent optimization of CRT: (1) resting echocardiographic optimization (the Ritter method), (2) resting electrical optimization (intrinsic A-V interval and QRS duration), and (3) optimization during low-intensity, steady-state CPX. Five sequential, incremental A-V intervals were programmed in each method. Cardiopulmonary stability and the potential influence of the implant procedure on the CPX-based method were also assessed. CPX and determination of a physiological optimal A-V interval were successfully completed in 94.1% of patients, slightly higher than the resting echo-based approach (88.2%). There was wide variation in the optimal A-V delay determined by each method. There was no observed cardiopulmonary instability, and no impact of the implant procedure, that affected determination of the CPX-based optimized A-V interval. Determining optimized A-V intervals by CPX is feasible. Proposed mechanisms explaining this finding and the long-term impact require further study. ©2014 Wiley Periodicals, Inc.
Nielsen, Merete Willemoes; Søndergaard, Birthe; Kjøller, Mette; Hansen, Ebba Holme
2008-09-01
This study compared national self-reported data on medicine use and national prescription records at the individual level. Data from the nationally representative Danish health survey conducted in 2000 (n=16,688) were linked at the individual level to national prescription records covering 1999-2000. Kappa statistics and 95% confidence intervals were calculated. Applying the legend time method to medicine groups used mainly on a chronic basis revealed good to very good agreement between the two data sources, whereas medicines used as needed showed fair to moderate agreement. When a fixed-time window was applied for analysis, agreement was unchanged for medicines used mainly on a chronic basis, whereas agreement increased somewhat compared to the legend time method when analyzing medicines used as needed. Agreement between national self-reported data and national prescription records differed according to method of analysis and therapeutic group. A fixed-time window is an appropriate method of analysis for most therapeutic groups.
Lv, Ying; Huang, Guohe; Sun, Wei
2013-01-01
A scenario-based interval two-phase fuzzy programming (SITF) method was developed for water resources planning in a wetland ecosystem. The SITF approach incorporates two-phase fuzzy programming, interval mathematical programming, and scenario analysis within a general framework. It can tackle fuzzy and interval uncertainties in terms of cost coefficients, resource availabilities, water demands, hydrological conditions and other parameters within a multi-source supply and multi-sector consumption context. The SITF method has the advantage of effectively improving the membership degrees of the system objective and all fuzzy constraints, so that both a higher satisfactory grade of the objective and more efficient utilization of system resources can be guaranteed. Under the systematic consideration of water demands by the ecosystem, the SITF method was successfully applied to Baiyangdian Lake, which is the largest wetland in North China. Multi-source supplies (including the inter-basin water sources of Yuecheng Reservoir and the Yellow River) and multiple water users (including agricultural, industrial and domestic sectors) were taken into account. The results indicated that the SITF approach would generate useful solutions to identify long-term water allocation and transfer schemes under multiple economic, environmental, ecological, and system-security targets. It also supports a comparative analysis of the system satisfaction degrees of decisions under various policy scenarios. Moreover, it helps quantify the relationship between hydrological change and human activities, such that a scheme for ecologically sustainable water supply to Baiyangdian Lake can be achieved. Copyright © 2012 Elsevier B.V. All rights reserved.
A refined method for multivariate meta-analysis and meta-regression
Jackson, Daniel; Riley, Richard D
2014-01-01
Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects’ standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:23996351
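In the univariate case, the refinement resembles a Hartung-Knapp-type adjustment: rescale the pooled standard error by a data-driven factor and use a t-quantile. The sketch below illustrates that idea only; the multivariate extension in the paper, and its exact scaling factor, are more involved.

```python
import numpy as np
from scipy import stats

def refined_re_meta(y, v, tau2):
    """Univariate random-effects pooling with a Hartung-Knapp-type scaling factor.

    y: study effect estimates, v: within-study variances, tau2: between-study
    variance (assumed estimated elsewhere, e.g. by REML). A sketch of the
    general rescaling idea, not the authors' exact multivariate method.
    """
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)            # pooled average effect
    se_conv = np.sqrt(1.0 / np.sum(w))        # conventional standard error
    k = len(y)
    q = np.sum(w * (y - mu) ** 2) / (k - 1)   # scaling factor
    se_ref = se_conv * np.sqrt(q)
    half = stats.t.ppf(0.975, k - 1) * se_ref
    return mu, (mu - half, mu + half)

mu, ci = refined_re_meta(np.array([0.2, 0.5, 0.1, 0.4]),
                         np.array([0.04, 0.06, 0.05, 0.03]), tau2=0.02)
print(mu, ci)
```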
Ye, Jun
2016-01-01
An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set, and then the characteristics of INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper firstly introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), which are basic elements in INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selecting problem of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.
Predicting motion sickness during parabolic flight
NASA Technical Reports Server (NTRS)
Harm, Deborah L.; Schlegel, Todd T.
2002-01-01
BACKGROUND: There are large individual differences in susceptibility to motion sickness. Attempts to predict who will become motion sick have had limited success. In the present study, we examined gender differences in resting levels of salivary amylase and total protein, cardiac interbeat intervals (R-R intervals), and a sympathovagal index and evaluated their potential to correctly classify individuals into two motion sickness severity groups. METHODS: Sixteen subjects (10 men and 6 women) flew four sets of 10 parabolas aboard NASA's KC-135 aircraft. Saliva samples for amylase and total protein were collected preflight on the day of the flight and motion sickness symptoms were recorded during each parabola. Cardiovascular parameters were collected in the supine position 1-5 days before the flight. RESULTS: There were no significant gender differences in sickness severity or any of the other variables mentioned above. Discriminant analysis using salivary amylase, R-R intervals and the sympathovagal index produced a significant Wilks' lambda coefficient of 0.36, p=0.006. The analysis correctly classified 87% of the subjects into the none-mild sickness or the moderate-severe sickness group. CONCLUSIONS: The linear combination of resting levels of salivary amylase, high-frequency R-R interval levels, and a sympathovagal index may be useful in predicting motion sickness severity.
Modelling and regulating of cardio-respiratory response for the enhancement of interval training
2014-01-01
Background The interval training method is a well-known exercise protocol which helps strengthen and improve one's cardiovascular fitness. Purpose To develop an effective training protocol to improve cardiovascular fitness based on modelling and analysis of Heart Rate (HR) and Oxygen Uptake (VO2) dynamics. Methods In order to model the cardiorespiratory response to onset and offset exercises, the Cosmed K4b2 gas analyzer was used to monitor and record the heart rate and oxygen uptake of ten healthy male subjects. An interval training protocol was developed for young healthy users and was simulated using a proposed RC switching model, which was presented to accommodate the variations of the cardiorespiratory dynamics in response to running exercises. A hybrid system model was presented to describe the adaptation process, and a multi-loop PI control scheme was designed for the tuning of the interval training regime. Results By observing the original data for each subject, we can clearly identify that all subjects have similar HR and VO2 profiles. The proposed model is capable of simulating the exercise responses during onset and offset exercises; it ensures the continuity of the outputs within the interval training protocol. Under some mild assumptions, a hybrid system model can describe the adaptation process and, accordingly, a multi-loop PI controller can be designed for the tuning of the interval training protocol. The self-adaptation feature of the proposed controller gives the exerciser the opportunity to reach their desired setpoints after a certain number of training sessions. Conclusions The established interval training protocol targets a range of 70-80% of HRmax, which is mainly a training zone for the purpose of cardiovascular system development and improvement. Furthermore, the proposed multi-loop feedback controller has the potential to tune the interval training protocol according to the feedback from an individual exerciser. PMID:24499131
A systematic review of psychosocial interventions for women with postpartum stress.
Song, Ju-Eun; Kim, Tiffany; Ahn, Jeong-Ah
2015-01-01
To analyze the effects of psychosocial interventions with the aim of reducing the intensity of stress in mothers during the postpartum period as compared with usual care. Eligible studies were identified by searching MEDLINE, EMBASE, CINAHL, and ProQuest dissertations and theses. Randomized controlled trials (RCTs) treating stress in postpartum mothers older than age 19 years were included. The suitability of the quality of articles was evaluated using Joanna Briggs Institute's Critical Appraisal Checklist for Experimental Studies. Fourteen articles met the inclusion criteria for data analysis. Authors, country, sample, setting, methods, time period, major content of the intervention, outcome measures, and salient findings were extracted and summarized in a data extraction form for further analysis and synthesis. Standardized mean differences with 95% confidence intervals were calculated for 13 suitable articles using Cochrane Review Manager. Of 1,871 publications, 14 RCTs, conducted between 1994 and 2012, were evaluated in the systematic review and 13 studies were included in the meta-analysis. Studies were categorized into three major types by interventional methods. We found that psychosocial interventions in general (standard mean difference -1.66, 95% confidence interval [-2.74, -0.57], p = .003), and supportive stress management programs in particular (standard mean difference -0.59, 95% confidence interval [-0.94, -0.23], p = .001), were effective for women dealing with postpartum stress. This review indicated that psychosocial interventions including supportive stress management programs are effective for reducing postpartum stress in women, so those interventions should become an essential part of maternity care. © 2015 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses.
Design, analysis, and interpretation of field quality-control data for water-sampling projects
Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.
2015-01-01
The report provides extensive information about statistical methods used to analyze quality-control data in order to estimate potential bias and variability in environmental data. These methods include construction of confidence intervals on various statistical measures, such as the mean, percentiles and percentages, and standard deviation. The methods are used to compare quality-control results with the larger set of environmental data in order to determine whether the effects of bias and variability might interfere with interpretation of these data. Examples from published reports are presented to illustrate how the methods are applied, how bias and variability are reported, and how the interpretation of environmental data can be qualified based on the quality-control analysis.
Statistical variability and confidence intervals for planar dose QA pass rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher
Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high density dose planes were 2%-5% higher than respective %/DTA composite analysis on average (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were 2%-12% lower than with global maximum normalization on average (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors, as well. Conclusions: Dose plane QA analysis can be greatly affected by choice of calculation metric and user-defined parameters, and so all pass rates should be reported with a complete description of calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density.
Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
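As an illustration of binomial-based confidence intervals for sampled pass rates, the sketch below computes a Clopper-Pearson interval from the number of analyzed detector points; the counts and the choice of interval construction are assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def pass_rate_ci(passed, total, level=0.95):
    """Clopper-Pearson binomial confidence interval for a sampled pass rate.

    'total' is the number of analyzed detector points (the low-density grid),
    not the number of pixels in the underlying dose plane.
    """
    alpha = 1.0 - level
    lo = stats.beta.ppf(alpha / 2, passed, total - passed + 1) if passed > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, passed + 1, total - passed) if passed < total else 1.0
    return lo, hi

# e.g. 228 of 240 diode points passing a given criterion (hypothetical numbers)
print(pass_rate_ci(228, 240))
```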
Noordam, Raymond; Sitlani, Colleen M; Avery, Christy L; Stewart, James D; Gogarten, Stephanie M; Wiggins, Kerri L; Trompet, Stella; Warren, Helen R; Sun, Fangui; Evans, Daniel S; Li, Xiaohui; Li, Jin; Smith, Albert V; Bis, Joshua C; Brody, Jennifer A; Busch, Evan L; Caulfield, Mark J; Chen, Yii-Der I; Cummings, Steven R; Cupples, L Adrienne; Duan, Qing; Franco, Oscar H; Méndez-Giráldez, Rául; Harris, Tamara B; Heckbert, Susan R; van Heemst, Diana; Hofman, Albert; Floyd, James S; Kors, Jan A; Launer, Lenore J; Li, Yun; Li-Gao, Ruifang; Lange, Leslie A; Lin, Henry J; de Mutsert, Renée; Napier, Melanie D; Newton-Cheh, Christopher; Poulter, Neil; Reiner, Alexander P; Rice, Kenneth M; Roach, Jeffrey; Rodriguez, Carlos J; Rosendaal, Frits R; Sattar, Naveed; Sever, Peter; Seyerle, Amanda A; Slagboom, P Eline; Soliman, Elsayed Z; Sotoodehnia, Nona; Stott, David J; Stürmer, Til; Taylor, Kent D; Thornton, Timothy A; Uitterlinden, André G; Wilhelmsen, Kirk C; Wilson, James G; Gudnason, Vilmundur; Jukema, J Wouter; Laurie, Cathy C; Liu, Yongmei; Mook-Kanamori, Dennis O; Munroe, Patricia B; Rotter, Jerome I; Vasan, Ramachandran S; Psaty, Bruce M; Stricker, Bruno H; Whitsel, Eric A
2017-01-01
Background Increased heart rate and a prolonged QT interval are important risk factors for cardiovascular morbidity and mortality, and can be influenced by the use of various medications, including tri/tetracyclic antidepressants (TCAs). We aim to identify genetic loci that modify the association between TCA use and RR and QT intervals. Methods and Results We conducted race/ethnic-specific genome-wide interaction analyses (with genotypes imputed to the HapMap Phase II reference panel) of TCAs and resting RR and QT intervals in cohorts of European (n=45,706; n=1,417 TCA users), African (n=10,235; n=296 TCA users) and Hispanic/Latino (n=13,808; n=147 TCA users) ancestry, adjusted for clinical covariates. Among the populations of European ancestry, two genome-wide significant loci were identified for RR interval: rs6737205 in BRE (β = 56.3, P-interaction = 3.9e−9) and rs9830388 in UBE2E2 (β = 25.2, P-interaction = 1.7e−8). In Hispanic/Latino cohorts, rs2291477 in TGFBR3 significantly modified the association between TCAs and QT intervals (β = 9.3, P-interaction = 2.55e−8). In the meta-analyses of the other ethnicities, these loci either were excluded from the meta-analyses (as part of quality control) or their effects did not reach the level of nominal statistical significance (P-interaction > 0.05). No new variants were identified in these ethnicities, and no additional loci were identified after inverse-variance-weighted meta-analysis of the three ancestries. Conclusion Among Europeans, TCA interactions with variants in BRE and UBE2E2 were identified in relation to RR intervals. Among Hispanics/Latinos, variants in TGFBR3 modified the relation between TCAs and QT intervals. Future studies are required to confirm our results. PMID:28039329
Fundamental relations between short-term RR interval and arterial pressure oscillations in humans
NASA Technical Reports Server (NTRS)
Taylor, J. A.; Eckberg, D. L.
1996-01-01
BACKGROUND: One of the principal explanations for respiratory sinus arrhythmia is that it reflects arterial baroreflex buffering of respiration-induced arterial pressure fluctuations. If this explanation is correct, then elimination of RR interval fluctuations should increase respiratory arterial pressure fluctuations. METHODS AND RESULTS: We measured RR interval and arterial pressure fluctuations during normal sinus rhythm and fixed-rate atrial pacing at 17.2 ± 1.8 (SEM) beats per minute greater than the sinus rate in 16 healthy men and 4 healthy women, 20 to 34 years of age. Measurements were made during controlled-frequency breathing (15 breaths per minute or 0.25 Hz) with subjects in the supine and 40 degree head-up tilt positions. We characterized RR interval and arterial pressure variabilities in low-frequency (0.05 to 0.15 Hz) and respiratory-frequency (0.20 to 0.30 Hz) ranges with fast Fourier transform power spectra and used cross-spectral analysis to determine the phase relation between the two signals. As expected, cardiac pacing eliminated beat-to-beat RR interval variability. Against expectations, however, cardiac pacing in the supine position significantly reduced arterial pressure oscillations in the respiratory frequency (systolic, 6.8 ± 1.8 to 2.9 ± 0.6 mm Hg²/Hz, P=.017). In contrast, cardiac pacing in the 40 degree tilt position increased arterial pressure variability (systolic, 8.0 ± 1.8 to 10.8 ± 2.6, P=.027). Cross-spectral analysis showed that 40 degree tilt shifted the phase relation between systolic pressure and RR interval at the respiratory frequency from positive to negative (9 ± 7 degrees versus -17 ± 11 degrees, P=.04); that is, in the supine position, RR interval changes appeared to lead arterial pressure changes, and in the upright position, RR interval changes appeared to follow arterial pressure changes. CONCLUSIONS: These results demonstrate that respiratory sinus arrhythmia can actually contribute to respiratory arterial pressure fluctuations. Therefore, respiratory sinus arrhythmia does not represent simple baroreflex buffering of arterial pressure.
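A rough sketch of the spectral workflow described above (Welch power spectra in the low- and respiratory-frequency bands and a cross-spectral phase estimate), run on synthetic, evenly resampled R-R and systolic pressure series; the resampling rate and the toy signals are assumptions, only the band limits follow the abstract.

```python
import numpy as np
from scipy import signal

fs = 4.0  # Hz, assumed resampling rate for the beat-to-beat series

def band_power(f, pxx, lo, hi):
    """Integrate a one-sided power spectrum over a frequency band."""
    band = (f >= lo) & (f <= hi)
    return np.sum(pxx[band]) * (f[1] - f[0])

# Synthetic R-R interval (ms) and systolic pressure (mmHg) series with a
# common 0.25 Hz respiratory component stand in for the recordings.
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(2)
rr = 900 + 30 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 5, t.size)
sbp = 120 + 4 * np.sin(2 * np.pi * 0.25 * t + 0.3) + rng.normal(0, 1, t.size)

f, p_rr = signal.welch(rr - rr.mean(), fs=fs, nperseg=256)
lf = band_power(f, p_rr, 0.05, 0.15)          # low-frequency RR power
hf = band_power(f, p_rr, 0.20, 0.30)          # respiratory-frequency RR power

f_c, cxy = signal.csd(rr - rr.mean(), sbp - sbp.mean(), fs=fs, nperseg=256)
resp = (f_c >= 0.20) & (f_c <= 0.30)
phase_deg = np.degrees(np.angle(cxy[resp])).mean()   # RR-vs-SBP phase near 0.25 Hz
print(lf, hf, phase_deg)
```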
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
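A generic sketch of the Monte Carlo quantile step: draw parameter sets consistent with their stated ranges, propagate them through the model, add dependent-variable error if required, and take empirical quantiles. The toy model, parameter bounds, and error level are assumptions, not the ground-water application in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def prediction_interval(model, param_sampler, error_sd, n=5000, level=0.95):
    """Monte Carlo prediction interval for a scalar model output.

    param_sampler draws parameter sets consistent with their extreme ranges;
    error_sd is the assumed dependent-variable error standard deviation.
    """
    out = np.array([model(param_sampler(rng)) + rng.normal(0.0, error_sd)
                    for _ in range(n)])
    lo, hi = np.quantile(out, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo, hi

# toy example: predicted head = recharge / transmissivity, both uniform within bounds
model = lambda p: p[0] / p[1]
sampler = lambda r: (r.uniform(0.8, 1.2), r.uniform(5.0, 15.0))
print(prediction_interval(model, sampler, error_sd=0.01))
```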
Solving the interval type-2 fuzzy polynomial equation using the ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim
2014-07-01
Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and social sciences. There are some methods that have been developed in order to solve these equations. In this study we are interested in introducing the interval type-2 fuzzy polynomial equation and solving it using the ranking method of fuzzy numbers. The ranking method concept was firstly proposed to find real roots of fuzzy polynomial equation. Therefore, the ranking method is applied to find real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation to a system of crisp interval type-2 fuzzy polynomial equation. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach by numerical example.
A statistical note on the redundancy of nine standard baroreflex parameters
NASA Technical Reports Server (NTRS)
Ludwig, David A.; Convertino, Victor A.
1991-01-01
An accepted method for measuring the responsiveness of the carotid-cardiac baroreflex to arterial pressure changes is to artificially stimulate the baroreceptors in the neck with a pressurized neck chamber. Nine physiological responses to this type of stimulation are quantified and used as indicators of the baroreflex response function. Thirty men between the ages of 27 and 46 underwent the carotid-cardiac baroreflex test. The data for the nine response parameters were analyzed by principal component factor analysis. The results indicated that 92.5 percent of the total variance across all nine parameters could be explained in four dimensions. The first two dimensions reflected location points for R-R interval and carotid distending pressure, respectively. The third factor was composed of measures reflecting the gain (responsiveness) of the reflex. The fourth dimension was the ratio of baseline R-R interval to the maximal R-R interval response during simulated hypertension. The data suggest that the analysis of all nine baroreflex parameters is likely to be redundant and researchers should account for these redundancies either in their analyses or conclusions.
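The dimensionality argument can be reproduced with a few lines of principal component analysis on the standardized 9-parameter matrix; synthetic data stand in for the 30 subjects here, so the explained-variance figure will not match the 92.5% reported.

```python
import numpy as np

# X: 30 subjects x 9 baroreflex response parameters (synthetic stand-in data)
rng = np.random.default_rng(4)
X = rng.normal(size=(30, 9))

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize the 9 parameters
cov = np.cov(Z, rowvar=False)
eigvals, _ = np.linalg.eigh(cov)
eigvals = eigvals[::-1]                            # sort descending

explained = np.cumsum(eigvals) / eigvals.sum()
print("variance explained by the first 4 components:", explained[3])
```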
Brownstein, Daniel J; Salagre, Estela; Köhler, Cristiano; Stubbs, Brendon; Vian, João; Pereira, Ciria; Chavarria, Victor; Karmakar, Chandan; Turner, Alyna; Quevedo, João; Carvalho, André F; Berk, Michael; Fernandes, Brisa S
2018-01-01
It is unclear whether blockade of the angiotensin system has effects on mental health. Our objective was to determine the impact of angiotensin converting enzyme inhibitors and angiotensin II type 1 receptor (AT1R) blockers on mental health domain of quality of life. Meta-analysis of published literature. PubMed and clinicaltrials.gov databases. The last search was conducted in January 2017. Randomized controlled trials comparing any angiotensin converting enzyme inhibitor or AT1R blocker versus placebo or non-angiotensin converting enzyme inhibitor or non-AT1R blocker were selected. Study participants were adults without any major physical symptoms. We adhered to meta-analysis reporting methods as per PRISMA and the Cochrane Collaboration. Eleven studies were included in the analysis. When compared with placebo or other antihypertensive medications, AT1R blockers and angiotensin converting enzyme inhibitors were associated with improved overall quality of life (standard mean difference = 0.11, 95% confidence interval = [0.08, 0.14], p < 0.0001), positive wellbeing (standard mean difference = 0.11, 95% confidence interval = [0.05, 0.17], p < 0.0001), mental (standard mean difference = 0.15, 95% confidence interval = [0.06, 0.25], p < 0.0001), and anxiety (standard mean difference = 0.08, 95% confidence interval = [0.01, 0.16], p < 0.0001) domains of QoL. No significant difference was found for the depression domain (standard mean difference = 0.05, 95% confidence interval = [0.02, 0.12], p = 0.15). Use of angiotensin blockers and inhibitors for the treatment of hypertension in otherwise healthy adults is associated with improved mental health domains of quality of life. Mental health quality of life was a secondary outcome in the included studies. Research specifically designed to analyse the usefulness of drugs that block the angiotensin system is necessary to properly evaluate this novel psychiatric target.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrison, Samuel S.; Beck, Chelsie L.; Bowen, James M.
Environmental tungsten (W) analyses are inhibited by a lack of reference materials and practical methods to remove isobaric and radiometric interferences. We present a method that evaluates the potential use of commercially available sediment, Basalt Columbia River-2 (BCR-2), as a reference material using neutron activation analysis (NAA) and mass spectrometry. Tungsten concentrations using both methods are in statistical agreement at the 95% confidence interval (92 ± 4 ng/g for NAA and 100 ± 7 ng/g for mass spectrometry) with recoveries greater than 95%. These results indicate that BCR-2 may be suitable as a reference material for future studies.
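A small sketch of one way to check agreement of the two results within their combined uncertainty, assuming the quoted ± values are 1-sigma standard uncertainties and using a coverage factor of 1.96; the paper's own statistical test may be formulated differently.

```python
import math

def agree_at_95(x1, u1, x2, u2):
    """Check whether two results agree within their combined expanded uncertainty.

    u1, u2 are assumed to be standard (1-sigma) uncertainties; k = 1.96 gives an
    approximate 95% criterion.
    """
    diff = abs(x1 - x2)
    limit = 1.96 * math.sqrt(u1 ** 2 + u2 ** 2)
    return diff <= limit, diff, limit

print(agree_at_95(92.0, 4.0, 100.0, 7.0))   # NAA vs mass spectrometry, ng/g
```

With the 1-sigma assumption, the 8 ng/g difference falls well inside the roughly 16 ng/g criterion, which is consistent with the agreement reported above.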
DOE Office of Scientific and Technical Information (OSTI.GOV)
Townsend, D.W.; Linnhoff, B.
In Part I, criteria for heat engine and heat pump placement in chemical process networks were derived, based on the "temperature interval" (T.I.) analysis of the heat exchanger network problem. Using these criteria, this paper gives a method for identifying the best outline design for any combined system of chemical process, heat engines, and heat pumps. The method eliminates inferior alternatives early, and positively leads on to the most appropriate solution. A graphical procedure based on the T.I. analysis forms the heart of the approach, and the calculations involved are simple enough to be carried out on, say, a programmable calculator. Application to a case study is demonstrated. Optimization methods based on this procedure are currently under research.
Yan, Cunling; Hu, Jian; Yang, Jia; Chen, Zhaoyun; Li, Huijun; Wei, Lianhua; Zhang, Wei; Xing, Hao; Sang, Guoyao; Wang, Xiaoqin; Han, Ruilin; Liu, Ping; Li, Zhihui; Li, Zhiyan; Huang, Ying; Jiang, Li; Li, Shunjun; Dai, Shuyang; Wang, Nianyue; Yang, Yongfeng; Ma, Li; Soh, Andrew; Beshiri, Agim; Shen, Feng; Yang, Tian; Fan, Zhuping; Zheng, Yijie; Chen, Wei
2018-04-01
Protein induced by vitamin K absence or antagonist-II (PIVKA-II) has been widely used as a biomarker for liver cancer diagnosis in Japan for decades. However, the reference intervals for serum ARCHITECT PIVKA-II have not been established in the Chinese population. Thus, this study aimed to measure serum PIVKA-II levels in healthy Chinese subjects. This is a sub-analysis from the prospective, cross-sectional and multicenter study (ClinicalTrials.gov Identifier: NCT03047603). A total of 892 healthy participants (777 Han and 115 Uygur) with complete health checkup results were recruited from 7 regional centers in China. Serum PIVKA-II level was measured by ARCHITECT immunoassay. All 95% reference ranges were estimated by nonparametric method. The distribution of PIVKA-II values showed significant difference with ethnicity and sex, but not age. The 95% reference range of PIVKA-II was 13.62-40.38 mAU/ml in Han Chinese subjects and 15.16-53.74 mAU/ml in Uygur subjects. PIVKA-II level was significantly higher in males than in females (P < 0.001). The 95% reference range of PIVKA-II was 15.39-42.01 mAU/ml in Han males while 11.96-39.13 mAU/ml in Han females. The reference interval of serum PIVKA-II on the Architect platform was established in healthy Chinese adults. This will be valuable for future clinical and laboratory studies performed using the Architect analyzer. Different ethnic backgrounds and analytical methods underline the need for redefining the reference interval of analytes such as PIVKA-II, in central laboratories in different countries. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
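The nonparametric reference-interval step amounts to taking the central 2.5th and 97.5th percentiles within each partition; the sketch below shows this on hypothetical PIVKA-II values, not the study data.

```python
import numpy as np

def reference_interval(values, level=0.95):
    """Nonparametric central reference interval (2.5th-97.5th percentiles by default)."""
    lo_pct = (1 - level) / 2 * 100
    hi_pct = (1 + level) / 2 * 100
    lo, hi = np.percentile(values, [lo_pct, hi_pct])
    return lo, hi

# hypothetical PIVKA-II values (mAU/ml) for one partition (e.g. Han males)
rng = np.random.default_rng(5)
sample = rng.lognormal(mean=3.2, sigma=0.25, size=400)
print(reference_interval(sample))
```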
Uncertainty analysis for absorbed dose from a brain receptor imaging agent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aydogan, B.; Miller, L.F.; Sparks, R.B.
Absorbed dose estimates are known to contain uncertainties. A recent literature search indicates that prior to this study no rigorous investigation of uncertainty associated with absorbed dose has been undertaken. A method of uncertainty analysis for absorbed dose calculations has been developed and implemented for the brain receptor imaging agent ¹²³I-IPT. The two major sources of uncertainty considered were the uncertainty associated with the determination of residence time and that associated with the determination of the S values. There are many sources of uncertainty in the determination of the S values, but only the inter-patient organ mass variation was considered in this work. The absorbed dose uncertainties were determined for lung, liver, heart and brain. Ninety-five percent confidence intervals of the organ absorbed dose distributions for each patient and for a seven-patient population group were determined by the "Latin Hypercube Sampling" method. For an individual patient, the upper bound of the 95% confidence interval of the absorbed dose was found to be about 2.5 times larger than the estimated mean absorbed dose. For the seven-patient population the upper bound of the 95% confidence interval of the absorbed dose distribution was around 45% more than the estimated population mean. For example, the 95% confidence interval of the population liver dose distribution was found to be between 1.49E+07 Gy/MBq and 4.65E+07 Gy/MBq with a mean of 2.52E+07 Gy/MBq. This study concluded that patients in a population receiving ¹²³I-IPT could receive absorbed doses as much as twice as large as the standard estimated absorbed dose due to these uncertainties.
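A minimal sketch of Latin Hypercube Sampling and a toy dose propagation (dose = residence time × S value), with entirely hypothetical distributions; it only illustrates the sampling scheme named in the abstract, not the study's dosimetry model.

```python
import numpy as np
from scipy import stats

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin hypercube sample on the unit hypercube: one draw per stratum
    in each dimension, with strata randomly permuted per dimension."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        u[:, j] = rng.permutation(u[:, j])
    return u

rng = np.random.default_rng(6)
u = latin_hypercube(2000, 2, rng)

# Hypothetical input distributions (not from the paper): lognormal residence
# time and a normally distributed S value reflecting organ-mass variation.
residence = stats.lognorm(s=0.3, scale=2.0).ppf(u[:, 0])      # hours
s_value = stats.norm(loc=1.0e-5, scale=1.5e-6).ppf(u[:, 1])   # dose per unit activity-time
dose = residence * s_value

print("mean:", dose.mean(), "95% interval:", np.percentile(dose, [2.5, 97.5]))
```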
Stochastic flux analysis of chemical reaction networks.
Kahramanoğulları, Ozan; Lynch, James F
2013-12-07
Chemical reaction networks provide an abstraction scheme for a broad range of models in biology and ecology. The two common means for simulating these networks are the deterministic and the stochastic approaches. The traditional deterministic approach, based on differential equations, enjoys a rich set of analysis techniques, including a treatment of reaction fluxes. However, the discrete stochastic simulations, which provide advantages in some cases, lack a quantitative treatment of network fluxes. We describe a method for flux analysis of chemical reaction networks, where flux is given by the flow of species between reactions in stochastic simulations of the network. Extending discrete event simulation algorithms, our method constructs several data structures, and thereby reveals a variety of statistics about resource creation and consumption during the simulation. We use these structures to quantify the causal interdependence and relative importance of the reactions at arbitrary time intervals with respect to the network fluxes. This allows us to construct reduced networks that have the same flux-behavior, and compare these networks, also with respect to their time series. We demonstrate our approach on an extended example based on a published ODE model of the same network, that is, Rho GTP-binding proteins, and on other models from biology and ecology. We provide a fully stochastic treatment of flux analysis. As in deterministic analysis, our method delivers the network behavior in terms of species transformations. Moreover, our stochastic analysis can be applied, not only at steady state, but at arbitrary time intervals, and used to identify the flow of specific species between specific reactions. Our case study of Rho GTP-binding proteins reveals the role played by the cyclic reverse fluxes in tuning the behavior of this network.
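To show the flavor of tracking fluxes inside a stochastic simulation, here is a bare-bones Gillespie SSA that tallies reaction firings as a crude flux proxy on a toy two-reaction network; the paper's data structures record producer-consumer links per molecule, which is richer than this sketch.

```python
import numpy as np

def ssa_with_flux(x0, reactions, t_end, rng):
    """Gillespie SSA that also counts how often each reaction fires.

    'reactions' is a list of (propensity_fn, stoichiometry) pairs; the firing
    counts serve only as a crude stand-in for the per-species flux statistics
    described in the paper.
    """
    x = np.array(x0, dtype=float)
    t, counts = 0.0, np.zeros(len(reactions))
    while t < t_end:
        a = np.array([rate(x) for rate, _ in reactions])
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1.0 / a0)          # time to next event
        j = rng.choice(len(reactions), p=a / a0)  # which reaction fires
        x += reactions[j][1]
        counts[j] += 1
    return x, counts

# toy network: A -> B (k1 = 1.0), B -> A (k2 = 0.5)
reactions = [(lambda x: 1.0 * x[0], np.array([-1.0, 1.0])),
             (lambda x: 0.5 * x[1], np.array([1.0, -1.0]))]
state, firings = ssa_with_flux([100, 0], reactions, t_end=10.0,
                               rng=np.random.default_rng(7))
print(state, firings)
```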
Li, Rongxia; Stewart, Brock; Weintraub, Eric
2016-01-01
The self-controlled case series (SCCS) and self-controlled risk interval (SCRI) designs have recently become widely used in the field of post-licensure vaccine safety monitoring to detect potential elevated risks of adverse events following vaccinations. The SCRI design can be viewed as a subset of the SCCS method in that a reduced comparison time window is used for the analysis. Compared to the SCCS method, the SCRI design has less statistical power due to fewer events occurring in the shorter control interval. In this study, we derived the asymptotic relative efficiency (ARE) between these two methods to quantify this loss in power in the SCRI design. The equation is formulated as [Formula: see text] (a: control window-length ratio between SCRI and SCCS designs; b: ratio of risk window length and control window length in the SCCS design; and [Formula: see text]: relative risk of exposed window to control window). According to this equation, the relative efficiency declines as the ratio of control-period length between SCRI and SCCS methods decreases, or with an increase in the relative risk [Formula: see text]. We provide an example utilizing data from the Vaccine Safety Datalink (VSD) to study the potential elevated risk of febrile seizure following seasonal influenza vaccine in the 2010-2011 season.
NASA Astrophysics Data System (ADS)
Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.
2017-08-01
A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecast and observation. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and Brue. Forecasts in retrospect were made and their uncertainties were estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from its concept being simple and well understood, an advantage of this method is that it is relatively easy to implement.
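A compact sketch of the instance-based idea, assuming a scikit-learn nearest-neighbour search over historical hydrometeorological predictors and empirical quantiles of the matched residuals; the feature set, k, and interval level are placeholders, not the configuration used for the Severn and Brue.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_uncertainty(hist_features, hist_errors, new_features, k=50, level=0.90):
    """Uncertainty band for a new forecast from the errors of similar past cases.

    hist_features: predictors of past forecasts, hist_errors: the matching
    forecast-minus-observation residuals, new_features: predictors for the
    current forecasts. Returns one (lower, upper) error quantile pair per row.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(hist_features)
    _, idx = nn.kneighbors(new_features)
    alpha = (1.0 - level) / 2.0
    return np.quantile(hist_errors[idx], [alpha, 1.0 - alpha], axis=1).T

rng = np.random.default_rng(8)
feats = rng.normal(size=(2000, 3))                    # e.g. recent rainfall, flow, season
errs = rng.normal(0, 5 + 2 * np.abs(feats[:, 0]))     # heteroscedastic residuals
print(knn_uncertainty(feats, errs, rng.normal(size=(2, 3))))
```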
Computerized Analysis of Digital Photographs for Evaluation of Tooth Movement.
Toodehzaeim, Mohammad Hossein; Karandish, Maryam; Karandish, Mohammad Nabi
2015-03-01
Various methods have been introduced for the evaluation of tooth movement in orthodontics. The challenge is to adopt the most accurate and most beneficial method for patients. This study was designed to introduce analysis of digital photographs with AutoCAD software as a method to evaluate tooth movement and to assess the reliability of this method. Eighteen patients were evaluated in this study. Three intraoral digital images of the buccal view were captured from each patient at half-hour intervals. All photographs were imported into AutoCAD 2011 software and calibrated, and the distance between the canine and molar hooks was measured. The data were analyzed using the intraclass correlation coefficient. The photographs were found to have a high reliability coefficient (P > 0.05). The introduced method is an accurate, efficient and reliable method for the evaluation of tooth movement.
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values at typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties and is calculated in much the same way as the previous method. Generally, a small number of arithmetic operations, which results in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward in improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and that the same accuracy was achieved by both methods.
Dietary Inflammatory Potential Score and Risk of Breast Cancer: Systematic Review and Meta-analysis.
Zahedi, Hoda; Djalalinia, Shirin; Sadeghi, Omid; Asayesh, Hamid; Noroozi, Mehdi; Gorabi, Armita Mahdavi; Mohammadi, Rasool; Qorbani, Mostafa
2018-02-07
Several studies have been conducted on the relationship between dietary inflammatory potential (DIP) and breast cancer. However, the findings are conflicting. This systematic review and meta-analysis summarizes the findings on the association between DIP and the risk of breast cancer. We used relevant keywords and searched online international electronic databases, including PubMed and NLM Gateway (for Medline), Institute for Scientific Information (ISI), and Scopus for articles published through February 2017. All cross-sectional, case-control, and cohort studies were included in this meta-analysis. Meta-analysis was performed using the random effects method to address heterogeneity among studies. Findings were analyzed statistically. Nine studies were included in the present systematic review and meta-analysis. The total sample size of these studies was 296,102, and the number of participants varied from 1453 to 122,788. The random effects meta-analysis showed a positive and significant association between DIP and the risk of breast cancer (pooled odds ratio, 1.14; 95% confidence interval, 1.01-1.27). When stratified by study design, however, the pooled effect size was not statistically significant for either cohort (pooled relative risk, 1.04; 95% confidence interval, 0.98-1.10) or case-control (pooled odds ratio, 1.63; 95% confidence interval, 0.89-2.37) studies. Overall, we found a significant and positive association between a higher DIP score and the risk of breast cancer. Modifying the inflammatory characteristics of the diet can substantially reduce the risk of breast cancer. Copyright © 2018 Elsevier Inc. All rights reserved.
Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha
2010-01-01
Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier’s AUC performance. Results: In the large U.S. data set, sample high-performance results include AUC_0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC_0.632+ = 0.87 with interval [0.817;0.906] for four LSW selected features, compared to 4D t-SNE mapping (from the original 81D feature space) giving AUC_0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Conclusions: Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space. PMID:20175497
LADABAUM, URI; FIORITTO, ANN; MITANI, AYA; DESAI, MANISHA; KIM, JANE P.; REX, DOUGLAS K.; IMPERIALE, THOMAS; GUNARATNAM, NARESH
2017-01-01
BACKGROUND & AIMS Accurate optical analysis of colorectal polyps (optical biopsy) could prevent unnecessary polypectomies or allow a “resect and discard” strategy with surveillance intervals determined based on the results of the optical biopsy; this could be less expensive than histopathologic analysis of polyps. We prospectively evaluated real-time optical biopsy analysis of polyps with narrow band imaging (NBI) by community-based gastroenterologists. METHODS We first analyzed a computerized module to train gastroenterologists (N = 13) in optical biopsy skills using photographs of polyps. Then we evaluated a practice-based learning program for these gastroenterologists (n = 12) that included real-time optical analysis of polyps in vivo, comparison of optical biopsy predictions to histopathologic analysis, and ongoing feedback on performance. RESULTS Twelve of 13 subjects identified adenomas with >90% accuracy at the end of the computer study, and 3 of 12 subjects did so with accuracy ≥90% in the in vivo study. Learning curves showed considerable variation among batches of polyps. For diminutive rectosigmoid polyps assessed with high confidence at the end of the study, adenomas were identified with mean (95% confidence interval [CI]) accuracy, sensitivity, specificity, and negative predictive values of 81% (73%–89%), 85% (74%–96%), 78% (66%–92%), and 91% (86%–97%), respectively. The adjusted odds ratio for high confidence as a predictor of accuracy was 1.8 (95% CI, 1.3–2.5). The agreement between surveillance recommendations informed by high-confidence NBI analysis of diminutive polyps and results from histopathologic analysis of all polyps was 80% (95% CI, 77%–82%). CONCLUSIONS In an evaluation of real-time optical biopsy analysis of polyps with NBI, only 25% of gastroenterologists assessed polyps with ≥90% accuracy. The negative predictive value for identification of adenomas, but not the surveillance interval agreement, met the American Society for Gastrointestinal Endoscopy–recommended thresholds for optical biopsy. Better results in community practice must be achieved before NBI-based optical biopsy methods can be used routinely to evaluate polyps; ClinicalTrials.gov number, NCT01638091. PMID:23041328
a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.
2018-05-01
In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques suffer from low measurement accuracy, complicated circuit structure and large errors, and high-precision time interval data cannot be obtained with them. In order to obtain higher-quality remote sensing cloud images based on time interval measurement, a higher-accuracy time interval measurement method is proposed. The method is based on charging a capacitor while simultaneously sampling the change of the capacitor voltage. Firstly, an approximate model of the capacitor voltage curve during the time of flight of the pulse is fitted to the sampled data. Then, the whole charging time is obtained from the fitted function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
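A rough sketch of the charge-and-fit idea under stated assumptions (an ideal RC charging law, a 1 GS/s sampler, and arbitrary component values): fit the sampled voltage curve and invert the fitted function to recover the elapsed charging time. The real front end and fitting model in the paper may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated high-speed ADC record of the timing capacitor while it charges
# during the pulse time of flight; supply level, RC constant, true interval
# and noise level are assumptions, not values from the paper.
fs, v_supply, tau_true, t_true = 1e9, 3.3, 50e-9, 37.62e-9
t = np.arange(0.0, t_true, 1.0 / fs)
rng = np.random.default_rng(9)
samples = v_supply * (1.0 - np.exp(-t / tau_true)) + rng.normal(0.0, 0.002, t.size)

def rc_curve(t, v0, tau):
    """RC charging law V(t) = V0 * (1 - exp(-t / tau))."""
    return v0 * (1.0 - np.exp(-t / tau))

(v0_hat, tau_hat), _ = curve_fit(rc_curve, t, samples, p0=[3.0, 40e-9])

# Invert the fitted curve at the last sampled voltage to estimate the elapsed
# charging time with resolution finer than the raw 1 ns sampling period.
t_est = -tau_hat * np.log(1.0 - samples[-1] / v0_hat)
print("estimated charging time (ns):", t_est * 1e9)
```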
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
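A minimal sketch of the balanced one-factor ANOVA computation the note refers to, on simulated setup errors; the patient count, fraction count, and error magnitudes are assumptions, not values from the note.

```python
import numpy as np

# Balanced one-factor random-effects model (patients x fractions), simulated.
rng = np.random.default_rng(1)
p, n = 20, 5                                  # patients, fractions per patient
sigma_sys, sigma_rand, mu = 2.0, 1.5, 0.5     # assumed true components (mm)
patient_means = rng.normal(mu, sigma_sys, p)
y = patient_means[:, None] + rng.normal(0.0, sigma_rand, (p, n))

grand_mean = y.mean()
ms_between = n * ((y.mean(axis=1) - grand_mean) ** 2).sum() / (p - 1)
ms_within = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (p * (n - 1))

sigma_sys_hat = np.sqrt(max((ms_between - ms_within) / n, 0.0))  # systematic
sigma_rand_hat = np.sqrt(ms_within)                              # random
print(f"population mean {grand_mean:.2f} mm, "
      f"systematic {sigma_sys_hat:.2f} mm, random {sigma_rand_hat:.2f} mm")
```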
Whys and Hows of the Parameterized Interval Analyses: A Guide for the Perplexed
NASA Astrophysics Data System (ADS)
Elishakoff, I.
2013-10-01
Novel elements of the parameterized interval analysis developed in [1, 2] are emphasized in this response, to Professor E.D. Popova, or possibly to others who may be perplexed by the parameterized interval analysis. It is also shown that the overwhelming majority of comments by Popova [3] are based on a misreading of our paper [1]. Partial responsibility for this misreading can be attributed to the fact that explanations provided in [1] were laconic; these could have been more extensive in view of the novelty of our approach [1, 2]. It is our duty, therefore, to reiterate, in this response, the whys and hows of parameterization of intervals, introduced in [1] to incorporate the possibly available information on dependencies between various intervals describing the problem at hand. This possibility appears to have been discarded by the standard interval analysis, which may, as a result, lead to overdesign and, possibly, to the divorce of engineers from the otherwise beautiful interval analysis.
Kwon, Younghoon; Koene, Ryan J.; Kwon, Osung; Kealhofer, Jessica V.; Adabag, Selcuk; Duval, Sue
2017-01-01
Background Patients with heart failure and reduced ejection fraction are at increased risk of malignant ventricular arrhythmias. Implantable cardioverter-defibrillator (ICD) is recommended to prevent sudden cardiac death in some of these patients. Sleep-disordered breathing (SDB) is highly prevalent in this population and may impact arrhythmogenicity. We performed a systematic review and meta-analysis of prospective studies that assessed the impact of SDB on ICD therapy. Methods and Results Relevant prospective studies were identified in the Ovid MEDLINE, EMBASE, and Google Scholar databases. Weighted risk ratios of the association between SDB and appropriate ICD therapies were estimated using random effects meta-analysis. Nine prospective cohort studies (n=1274) were included in this analysis. SDB was present in 52% of the participants. SDB was associated with a 55% higher risk of appropriate ICD therapies (45% versus 28%; risk ratio, 1.55; 95% confidence interval, 1.32–1.83). In a subgroup analysis based on the subtypes of SDB, the risk was higher in both central (risk ratio, 1.50; 95% confidence interval, 1.11–2.02) and obstructive (risk ratio, 1.43; 95% confidence interval, 1.01–2.03) sleep apnea. Conclusions SDB is associated with an increased risk of appropriate ICD therapy in patients with heart failure and reduced ejection fraction. PMID:28213507
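For readers unfamiliar with the pooling step, the sketch below applies DerSimonian-Laird random-effects pooling of log risk ratios, which is one common way such a weighted risk ratio could be obtained; the three studies' counts are invented and are not the nine cohorts in this meta-analysis.

```python
import numpy as np

# Each row: [events_SDB, n_SDB, events_noSDB, n_noSDB] -- invented counts.
events = np.array([[30, 80, 18, 78],
                   [25, 60, 14, 65],
                   [40, 120, 30, 130]])
a, n1, c, n2 = events.T.astype(float)
log_rr = np.log((a / n1) / (c / n2))
var = 1 / a - 1 / n1 + 1 / c - 1 / n2               # variance of each log RR

w = 1 / var                                          # fixed-effect weights
q = np.sum(w * (log_rr - np.sum(w * log_rr) / w.sum()) ** 2)
tau2 = max(0.0, (q - (len(a) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))

w_star = 1 / (var + tau2)                            # random-effects weights
pooled = np.sum(w_star * log_rr) / w_star.sum()
se = np.sqrt(1 / w_star.sum())
print(f"pooled RR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*se):.2f}-{np.exp(pooled + 1.96*se):.2f})")
```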
NASA Astrophysics Data System (ADS)
Lausch, Anthony; Chen, Jeff; Ward, Aaron D.; Gaede, Stewart; Lee, Ting-Yim; Wong, Eugene
2014-11-01
Parametric response map (PRM) analysis is a voxel-wise technique for predicting overall treatment outcome, which shows promise as a tool for guiding personalized locally adaptive radiotherapy (RT). However, image registration error (IRE) introduces uncertainty into this analysis which may limit its use for guiding RT. Here we extend the PRM method to include an IRE-related PRM analysis confidence interval and also incorporate multiple graded classification thresholds to facilitate visualization. A Gaussian IRE model was used to compute an expected value and confidence interval for PRM analysis. The augmented PRM (A-PRM) was evaluated using CT-perfusion functional image data from patients treated with RT for glioma and hepatocellular carcinoma. Known rigid IREs were simulated by applying one thousand different rigid transformations to each image set. PRM and A-PRM analyses of the transformed images were then compared to analyses of the original images (ground truth) in order to investigate the two methods in the presence of controlled IRE. The A-PRM was shown to help visualize and quantify IRE-related analysis uncertainty. The use of multiple graded classification thresholds also provided additional contextual information which could be useful for visually identifying adaptive RT targets (e.g. sub-volume boosts). The A-PRM should facilitate reliable PRM guided adaptive RT by allowing the user to identify if a patient’s unique IRE-related PRM analysis uncertainty has the potential to influence target delineation.
Induction treatments for acute promyelocytic leukemia: a network meta-analysis
Zhang, Qiaoxia; Lou, Jin; Cai, Yun; Chen, Weihong; Du, Xin
2016-01-01
Background Nine treatments for acute promyelocytic leukemia (APL) have been compared in many randomized controlled trials (RCTs), with inconsistent conclusions; the purpose of this study is to conduct a network meta-analysis. Results Rankings of event-free survival are ATRA+RIF (81.2%), ATRA+ATO (69.6%), and ATO (50.6%). Rankings of complete remission are ATRA+RIF (79.3%), ATRA+ATO (64.8%), RIF (60.3%), and ATO (55.9%). Rankings of avoiding differentiation syndromes are CT (84.3%), ATO (80.3%), RIF (71.6%), ATRA+RIF (49%), and ATRA+ATO (40.8%). Methods A total of 1,666 patients from 12 RCTs were enrolled. The frequentist method was used. Relative risks with 95% confidence intervals were calculated. We produced a network plot, a contribution plot, and a forest plot with predictive intervals. The inconsistency factor, the surface under the cumulative ranking curve and the publication bias were evaluated. Conclusions ATRA+ATO is eligible to be the first-line treatment for APL. ATRA+RIF is a prospective alternative to the first-line treatment. RIF or ATO should be reconsidered as another option for de novo APL. PMID:27713127
NASA Astrophysics Data System (ADS)
Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi
2018-04-01
Synchronous fluorescence spectra, combined with multivariate analysis, were used to predict flavonoid content in green tea rapidly and nondestructively. This paper presents a new and efficient spectral interval selection method called clustering-based partial least squares (CL-PLS), which selects informative wavelengths by combining a clustering concept with partial least squares (PLS) methods to improve model performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, k-means and Kohonen self-organizing map clustering algorithms were used to cluster the full spectra into several clusters, and a sub-PLS regression model was developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection PLS (VIP-PLS), selectivity ratio PLS (SR-PLS), interval PLS (iPLS) and full-spectrum PLS models were investigated and the results compared. The results showed that CL-PLS gave the best result for flavonoid prediction using synchronous fluorescence spectra.
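The following is a loose, simplified reading of the CL-PLS idea on simulated spectra (wavelength variables clustered with k-means, a sub-PLS model per cluster, clusters ranked by cross-validated correlation); the cluster count, component count, and data are assumptions, and the Kohonen-SOM branch is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_samples, n_wl = 70, 200
X = rng.normal(size=(n_samples, n_wl)).cumsum(axis=1)         # smooth fake spectra
y = X[:, 40:60].mean(axis=1) + rng.normal(0, 0.1, n_samples)  # "flavonoid" response

# Cluster wavelength variables by their profiles across samples.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X.T)

def cv_r(cols):
    # Cross-validated correlation of a sub-PLS model on the given wavelengths.
    pls = PLSRegression(n_components=min(5, len(cols)))
    y_hat = cross_val_predict(pls, X[:, cols], y, cv=5).ravel()
    return np.corrcoef(y, y_hat)[0, 1]

ranked = sorted(range(8), key=lambda k: -cv_r(np.where(labels == k)[0]))
best = np.where(np.isin(labels, ranked[:2]))[0]                # keep top clusters
print(f"selected {best.size} wavelengths, CV correlation {cv_r(best):.2f}")
```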
Brorby, G P; Sheehan, P J; Berman, D W; Bogen, K T; Holm, S E
2011-05-01
Airborne samples collected in the 1970s for drywall workers using asbestos-containing joint compounds were likely prepared and analyzed according to National Institute of Occupational Safety and Health Method P&CAM 239, the historical precursor to current Method 7400. Experimentation with a re-created, chrysotile-containing, carbonate-based joint compound suggested that analysis following sample preparation by the historical vs. current method produces different fiber counts, likely because of an interaction between the different clearing and mounting chemicals used and the carbonate-based joint compound matrix. Differences were also observed during analysis using Method 7402, depending on whether acetic acid/dimethylformamide or acetone was used during preparation to collapse the filter. Specifically, air samples of sanded chrysotile-containing joint compound prepared by the historical method yielded fiber counts significantly greater (average of 1.7-fold, 95% confidence interval: 1.5- to 2.0-fold) than those obtained by the current method. In addition, air samples prepared by Method 7402 using acetic acid/dimethylformamide yielded fiber counts that were greater (2.8-fold, 95% confidence interval: 2.5- to 3.2-fold) than those prepared by this method using acetone. These results indicated (1) there is an interaction between Method P&CAM 239 preparation chemicals and the carbonate-based joint compound matrix that reveals fibers that were previously bound in the matrix, and (2) the same appeared to be true for Method 7402 preparation chemicals acetic acid/dimethylformamide. This difference in fiber counts is the opposite of what has been reported historically for samples of relatively pure chrysotile dusts prepared using the same chemicals. This preparation artifact should be considered when interpreting historical air samples for drywall workers prepared by Method P&CAM 239. Copyright © 2011 JOEH, LLC
Green, Cynthia L.; Kligfield, Paul; George, Samuel; Gussak, Ihor; Vajdic, Branislav; Sager, Philip; Krucoff, Mitchell W.
2013-01-01
Background The Cardiac Safety Research Consortium (CSRC) provides both “learning” and blinded “testing” digital ECG datasets from thorough QT (TQT) studies annotated for submission to the US Food and Drug Administration (FDA) to developers of ECG analysis technologies. This manuscript reports the first results from a blinded “testing” dataset that examines Developer re-analysis of original Sponsor-reported core laboratory data. Methods 11,925 anonymized ECGs including both moxifloxacin and placebo arms of a parallel-group TQT in 191 subjects were blindly analyzed using a novel ECG analysis algorithm applying intelligent automation. Developer measured ECG intervals were submitted to CSRC for unblinding, temporal reconstruction of the TQT exposures, and statistical comparison to core laboratory findings previously submitted to FDA by the pharmaceutical sponsor. Primary comparisons included baseline-adjusted interval measurements, baseline- and placebo-adjusted moxifloxacin QTcF changes (ddQTcF), and associated variability measures. Results Developer and Sponsor-reported baseline-adjusted data were similar with average differences less than 1 millisecond (ms) for all intervals. Both Developer and Sponsor-reported data demonstrated assay sensitivity with similar ddQTcF changes. Average within-subject standard deviation for triplicate QTcF measurements was significantly lower for Developer than Sponsor-reported data (5.4 ms and 7.2 ms, respectively; p<0.001). Conclusion The virtually automated ECG algorithm used for this analysis produced similar yet less variable TQT results compared to the Sponsor-reported study, without the use of a manual core laboratory. These findings indicate CSRC ECG datasets can be useful for evaluating novel methods and algorithms for determining QT/QTc prolongation by drugs. While the results should not constitute endorsement of specific algorithms by either CSRC or FDA, the value of a public domain digital ECG warehouse to provide prospective, blinded comparisons of ECG technologies applied for QT/QTc measurement is illustrated. PMID:22424006
Zhang, Zhen; Shang, Haihong; Shi, Yuzhen; Huang, Long; Li, Junwen; Ge, Qun; Gong, Juwu; Liu, Aiying; Chen, Tingting; Wang, Dan; Wang, Yanling; Palanga, Koffi Kibalou; Muhammad, Jamshed; Li, Weijie; Lu, Quanwei; Deng, Xiaoying; Tan, Yunna; Song, Weiwu; Cai, Juan; Li, Pengtao; Rashid, Harun or; Gong, Wankui; Yuan, Youlu
2016-04-11
Upland Cotton (Gossypium hirsutum) is one of the most important crops worldwide; it provides natural high-quality fiber for industrial production and everyday use. Next-generation sequencing is a powerful method to identify single nucleotide polymorphism markers on a large scale for the construction of a high-density genetic map for quantitative trait loci mapping. In this research, a recombinant inbred line population developed from two upland cotton cultivars, 0-153 and sGK9708, was used to construct a high-density genetic map through the specific locus amplified fragment sequencing method. The high-density genetic map harbored 5521 single nucleotide polymorphism markers which covered a total distance of 3259.37 cM with an average marker interval of 0.78 cM and no gaps larger than 10 cM. In total, 18 quantitative trait loci of boll weight were identified as stable quantitative trait loci, detected in at least three of 11 environments and explaining 4.15-16.70% of the observed phenotypic variation. In total, 344 candidate genes were identified within the confidence intervals of these stable quantitative trait loci based on the cotton genome sequence. These genes were categorized based on their function through gene ontology analysis, Kyoto Encyclopedia of Genes and Genomes analysis and eukaryotic orthologous groups analysis. This research reported the first high-density genetic map for Upland Cotton (Gossypium hirsutum) with a recombinant inbred line population using single nucleotide polymorphism markers developed by specific locus amplified fragment sequencing. We also identified quantitative trait loci of boll weight across 11 environments and identified candidate genes within the quantitative trait loci confidence intervals. The results of this research provide useful information for next-step work including fine mapping, gene functional analysis, pyramiding breeding of functional genes as well as marker-assisted selection.
An iterative method for analysis of hadron ratios and spectra in relativistic heavy-ion collisions
NASA Astrophysics Data System (ADS)
Choi, Suk; Lee, Kang Seog
2016-04-01
A new iteration method is proposed for analyzing both the multiplicities and the transverse momentum spectra measured within a small rapidity interval with a low momentum cut-off, without assuming that the rapidity distribution is invariant under Lorentz boosts. The method is applied to the hadron data measured by the ALICE collaboration for Pb+Pb collisions at √(s_NN) = 2.76 TeV. In order to correctly consider the resonance contribution only within the small measured rapidity interval, we consider only ratios involving hadrons whose transverse momentum spectra are available. In spite of the small number of ratios considered, the quality of the fits to both the ratios and the transverse momentum spectra is excellent. Also, the calculated ratios involving strange baryons with the fitted parameters agree with the data surprisingly well.
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
NASA Astrophysics Data System (ADS)
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round-trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
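A toy numerical analogue of why additive random sampling relaxes the Nyquist limit (not the authors' φ-OTDR processing chain): a tone above half the average pulse rate is still identifiable from randomly spaced samples via a Lomb-Scargle periodogram. All rates and amplitudes are invented.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
mean_rate = 1e3                                           # average pulse rate (Hz)
t = np.cumsum(rng.uniform(0.5, 1.5, 2000) / mean_rate)    # randomly spaced samples
f_vib = 2.3e3                                             # tone above mean_rate/2
y = np.sin(2 * np.pi * f_vib * t) + 0.1 * rng.normal(size=t.size)

f_grid = np.linspace(10.0, 5e3, 2000)                     # trial frequencies (Hz)
power = lombscargle(t, y - y.mean(), 2 * np.pi * f_grid)  # needs angular freqs
print(f"peak at {f_grid[np.argmax(power)]:.0f} Hz (true {f_vib:.0f} Hz)")
```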
HYSEP: A Computer Program for Streamflow Hydrograph Separation and Analysis
Sloto, Ronald A.; Crouse, Michele Y.
1996-01-01
HYSEP is a computer program that can be used to separate a streamflow hydrograph into base-flow and surface-runoff components. The base-flow component has traditionally been associated with ground-water discharge and the surface-runoff component with precipitation that enters the stream as overland runoff. HYSEP includes three methods of hydrograph separation that are referred to in the literature as the fixed-interval, sliding-interval, and local-minimum methods. The program also describes the frequency and duration of measured streamflow and computed base flow and surface runoff. Daily mean stream discharge is used as input to the program in either an American Standard Code for Information Interchange (ASCII) or binary format. Output from the program includes tables, graphs, and data files. Graphical output may be plotted on the computer screen or output to a printer, plotter, or metafile.
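A rough sketch of the fixed-interval idea as it is commonly described for HYSEP (block width 2N* days, with N = A^0.2 for drainage area A in square miles, rounded to the nearest odd integer between 3 and 11; base flow set to each block's minimum); the streamflow series and drainage area below are synthetic, and the official program handles input, output, and the other two methods quite differently.

```python
import numpy as np

def fixed_interval_baseflow(q, area_mi2):
    # Interval width 2N*: nearest odd integer to 2 * A**0.2, clipped to [3, 11].
    n_star = int(np.clip(round(2 * area_mi2 ** 0.2) // 2 * 2 + 1, 3, 11))
    base = np.empty_like(q, dtype=float)
    for start in range(0, len(q), n_star):
        block = slice(start, start + n_star)
        base[block] = q[block].min()          # block minimum = base flow
    return np.minimum(base, q)                # base flow cannot exceed streamflow

rng = np.random.default_rng(4)
q = 50 + np.cumsum(rng.normal(0, 5, 120)).clip(min=-40) + rng.exponential(10, 120)
bf = fixed_interval_baseflow(q, area_mi2=250.0)
print(f"mean base-flow index: {bf.sum() / q.sum():.2f}")
```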
The intervals method: a new approach to analyse finite element outputs using multivariate statistics
De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep
2017-01-01
Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107
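A compact sketch of the intervals' method on fake finite-element output: per-element stresses are binned into stress intervals, the area percentage in each interval becomes a variable, and the resulting matrix is passed to a multivariate method (PCA here). Model counts, stress distributions, and interval edges are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_models, n_elems = 11, 5000
stresses = rng.gamma(shape=2.0, scale=rng.uniform(1, 3, n_models)[:, None],
                     size=(n_models, n_elems))          # fake von Mises stresses
areas = rng.uniform(0.5, 1.5, (n_models, n_elems))      # fake element areas

edges = np.linspace(0, np.percentile(stresses, 99), 26)  # 25 stress intervals
X = np.empty((n_models, len(edges) - 1))
for i in range(n_models):
    idx = np.clip(np.digitize(stresses[i], edges) - 1, 0, len(edges) - 2)
    X[i] = np.bincount(idx, weights=areas[i], minlength=len(edges) - 1)
    X[i] *= 100.0 / areas[i].sum()                       # area percentages

scores = PCA(n_components=2).fit_transform(X)            # multivariate step
print(scores.round(2))
```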
Benomar, A; Krols, L; Stevanin, G; Cancel, G; LeGuern, E; David, G; Ouhabi, H; Martin, J J; Dürr, A; Zaim, A
1995-05-01
Autosomal dominant cerebellar ataxia with pigmentary macular dystrophy (ADCA type II) is a rare neurodegenerative disorder with marked anticipation. We have mapped the ADCA type II locus to chromosome 3 by linkage analysis in a genome-wide search and found no evidence for genetic heterogeneity among four families of different geographic origins. Haplotype reconstruction initially restricted the locus to the 33 cM interval flanked by D3S1300 and D3S1276 located at 3p12-p21.1. Combined multipoint analysis, using the Zmax-1 method, further reduced the candidate interval to an 8 cM region around D3S1285. Our results show that ADCA type II is a genetically homogeneous disorder, independent of the heterogeneous group of type I cerebellar ataxias.
2017-01-05
AFRL-AFOSR-JP-TR-2017-0002: Advanced Computational Methods for Optimization of Non-Periodic Inspection Intervals for Aging Infrastructure (author Manabu..., grant FA2386...; distribution unlimited, public release).
Casey, R; Griffin, T P; Wall, D; Dennedy, M C; Bell, M; O'Shea, P M
2017-01-01
Background The Endocrine Society Clinical Practice Guideline on Phaeochromocytoma and Paraganglioma (PPGL) recommends phlebotomy for plasma-free metanephrines with patients fasted and supine, using appropriately defined reference intervals. Studies have shown higher diagnostic sensitivities using these criteria. Further, when seated-sampling protocols are used, reference intervals that do not compromise diagnostic sensitivity should be employed for result interpretation. Objective To determine the impact on diagnostic performance and financial cost of using supine reference intervals for result interpretation with our current plasma-free metanephrines fasted/seated-sampling protocol. Methods We conducted a retrospective cohort study of patients who underwent screening for PPGL using plasma-free metanephrines from 2009 to 2014 at Galway University Hospitals. Plasma-free metanephrines were measured using liquid chromatography-tandem mass spectrometry. Supine thresholds for plasma normetanephrine and metanephrine set at 610 pmol/L and 310 pmol/L, respectively, were used. Results A total of 183 patients were evaluated. Mean age of participants was 53.4 (±16.3) years. Five of 183 (2.7%) patients had histologically confirmed PPGL (males, n=4). Using seated reference intervals for plasma-free metanephrines, diagnostic sensitivity and specificity were 100% and 98.9%, respectively, with two false-positive cases. Application of reference intervals established in subjects supine and fasted to this cohort gave diagnostic sensitivity of 100% with specificity of 74.7%. Financial analysis of each pretesting strategy demonstrated cost-equivalence (€147.27/patient). Conclusion Our cost analysis, together with the evidence that fasted/supine sampling for plasma-free metanephrines offers more reliable exclusion of PPGL, mandates changing our current practice. This study highlights the important advantages of standardized diagnostic protocols for plasma-free metanephrines to ensure the highest diagnostic accuracy for investigation of PPGL.
Lianhui, Yang; Meifei, Lian; Zhongyue, Hu; Yunzhi, Feng
2017-08-01
Objective The aim of this study is to evaluate the relationship between periodontitis and hyperlipidemia risks through meta-analysis. Methods Two researchers conducted an electronic search of the PubMed, Cochrane Library, Embase, CBM, CNKI, Wanfang and VIP databases through July 2016 for observational studies on the association between periodontitis and hyperlipidemia. The language was limited to Chinese and English. After data extraction and quality evaluation of the included trials, meta-analysis was conducted using RevMan 5.3 software. GRADE 3.6 software was used to evaluate the quality level of the evidence. Results Six case-control studies and one cohort study were included. The meta-analysis showed that serum triglyceride (TG) in patients with periodontitis was significantly higher than in the periodontally healthy group (MD=50.50, 95% confidence interval=39.57-61.42, P<0.000 01), as was serum total cholesterol (TC) (MD=17.54, 95% confidence interval=10.91-24.18, P<0.000 01). Furthermore, the risks of elevated TG and TC in the serum of patients with chronic periodontitis were 4.73 times (OR=4.73, 95% confidence interval=2.74-8.17, P<0.000 01) and 3.62 times (OR=3.62, 95% confidence interval=2.18-6.03, P<0.000 01) those of periodontally healthy patients. No significant between-group differences were observed for high-density lipoprotein cholesterol (HDL-C) or low-density lipoprotein cholesterol (LDL-C). Conclusion Current evidence indicates that a correlation exists between chronic periodontitis and hyperlipidemia, and chronic periodontitis is an independent risk factor for hyperlipidemia, especially for TC and TG in serum.
Chesworth, Bert M.
2013-01-01
Background The original 20-item Upper Extremity Functional Index (UEFI) has not undergone Rasch validation. Objective The purpose of this study was to determine whether Rasch analysis supports the UEFI as a measure of a single construct (ie, upper extremity function) and whether a Rasch-validated UEFI has adequate reproducibility for individual-level patient evaluation. Design This was a secondary analysis of data from a repeated-measures study designed to evaluate the measurement properties of the UEFI over a 3-week period. Methods Patients (n=239) with musculoskeletal upper extremity disorders were recruited from 17 physical therapy clinics across 4 Canadian provinces. Rasch analysis of the UEFI measurement properties was performed. If the UEFI did not fit the Rasch model, misfitting patients were deleted, items with poor response structure were corrected, and misfitting items and redundant items were deleted. The impact of differential item functioning on the ability estimate of patients was investigated. Results A 15-item modified UEFI was derived to achieve fit to the Rasch model where the total score was supported as a measure of upper extremity function only. The resultant UEFI-15 interval-level scale (0–100, worst to best state) demonstrated excellent internal consistency (person separation index=0.94) and test-retest reliability (intraclass correlation coefficient [2,1]=.95). The minimal detectable change at the 90% confidence interval was 8.1. Limitations Patients who were ambidextrous or bilaterally affected were excluded to allow for the analysis of differential item functioning due to limb involvement and arm dominance. Conclusion Rasch analysis did not support the validity of the 20-item UEFI. However, the UEFI-15 was a valid and reliable interval-level measure of a single dimension: upper extremity function. Rasch analysis supports using the UEFI-15 in physical therapist practice to quantify upper extremity function in patients with musculoskeletal disorders of the upper extremity. PMID:23813086
Yang, Jun-Ho; Yoh, Jack J
2018-01-01
A novel technique is reported for separating overlapping latent fingerprints using chemometric approaches that combine laser-induced breakdown spectroscopy (LIBS) and multivariate analysis. The LIBS technique provides the capability of real time analysis and high frequency scanning as well as the data regarding the chemical composition of overlapping latent fingerprints. These spectra offer valuable information for the classification and reconstruction of overlapping latent fingerprints by implementing appropriate statistical multivariate analysis. The current study employs principal component analysis and partial least square methods for the classification of latent fingerprints from the LIBS spectra. This technique was successfully demonstrated through a classification study of four distinct latent fingerprints using classification methods such as soft independent modeling of class analogy (SIMCA) and partial least squares discriminant analysis (PLS-DA). The novel method yielded an accuracy of more than 85% and was proven to be sufficiently robust. Furthermore, through laser scanning analysis at a spatial interval of 125 µm, the overlapping fingerprints were reconstructed as separate two-dimensional forms.
Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection
NASA Technical Reports Server (NTRS)
Kumar, Sricharan; Srivistava, Ashok N.
2012-01-01
Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
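The generic resampling recipe behind such intervals can be sketched as follows (using a k-NN regressor as a stand-in for the paper's models; data and settings are invented): refit on bootstrap samples, combine the spread of refitted predictions with resampled residuals, and take percentiles.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 300)[:, None]
y = np.sin(x).ravel() + rng.normal(0, 0.2, 300)
x_new = np.array([[2.5], [7.0]])                  # points to get intervals for

preds, resid = [], []
for _ in range(500):                              # bootstrap resamples
    idx = rng.integers(0, len(x), len(x))
    model = KNeighborsRegressor(n_neighbors=15).fit(x[idx], y[idx])
    preds.append(model.predict(x_new))
    resid.append(y[idx] - model.predict(x[idx]))  # in-sample residuals

preds = np.array(preds)
noise = rng.choice(np.concatenate(resid), size=preds.shape)
lower, upper = np.percentile(preds + noise, [2.5, 97.5], axis=0)
print(np.c_[lower, upper].round(2))   # observations outside are flagged anomalous
```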
A rasch analysis of the Manchester foot pain and disability index
Muller, Sara; Roddy, Edward
2009-01-01
Background There is currently no interval-level measure of foot-related disability and this has hampered research in this area. The Manchester Foot Pain and Disability Index (FPDI) could potentially fill this gap. Objective To assess the fit of the three subscales (function, pain, appearance) of the FPDI to the Rasch unidimensional measurement model in order to form interval-level scores. Methods A two-stage postal survey at a general practice in the UK collected data from 149 adults aged 50 years and over with foot pain. The 17 FPDI items, in three subscales, were assessed for their fit to the Rasch model. Checks were carried out for differential item functioning by age and gender. Results The function and pain items fit the Rasch model and interval-level scores can be constructed. There were too few people without extreme scores on the appearance subscale to allow fit to the Rasch model to be tested. Conclusion The items from the FPDI function and pain subscales can be used to obtain interval level scores for these factors for use in future research studies in older adults. Further work is needed to establish the interval nature of these subscale scores in more diverse populations and to establish the measurement properties of these interval-level scores. PMID:19878536
Guo, P; Huang, G H
2010-03-01
In this study, an interval-parameter semi-infinite fuzzy-chance-constrained mixed-integer linear programming (ISIFCIP) approach is developed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing interval-parameter semi-infinite programming (ISIP) and fuzzy-chance-constrained programming (FCCP) by incorporating uncertainties expressed as dual uncertainties of functional intervals and multiple uncertainties of distributions with fuzzy-interval admissible probability of violating constraint within a general optimization framework. The binary-variable solutions represent the decisions of waste-management-facility expansion, and the continuous ones are related to decisions of waste-flow allocation. The interval solutions can help decision-makers to obtain multiple decision alternatives, as well as provide bases for further analyses of tradeoffs between waste-management cost and system-failure risk. In the application to the City of Regina, Canada, two scenarios are considered. In Scenario 1, the City's waste-management practices would be based on the existing policy over the next 25 years. The total diversion rate for the residential waste would be approximately 14%. Scenario 2 is associated with a policy for waste minimization and diversion, where 35% diversion of residential waste should be achieved within 15 years, and 50% diversion over 25 years. In this scenario, not only landfill would be expanded, but also CF and MRF would be expanded. Through the scenario analyses, useful decision support for the City's solid-waste managers and decision-makers has been generated. Three special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it is useful for tackling multiple uncertainties expressed as intervals, functional intervals, probability distributions, fuzzy sets, and their combinations; secondly, it has capability in addressing the temporal variations of the functional intervals; thirdly, it can facilitate dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period and multi-option context. Copyright 2009 Elsevier Ltd. All rights reserved.
Pateras, Konstantinos; Nikolakopoulos, Stavros; Mavridis, Dimitris; Roes, Kit C B
2018-03-01
When a meta-analysis consists of a few small trials that report zero events, accounting for heterogeneity in the (interval) estimation of the overall effect is challenging. Typically, we predefine the meta-analytical methods to be employed. In practice, the data pose restrictions that lead to deviations from the pre-planned analysis, such as the presence of zero events in at least one study arm. We aim to explore heterogeneity estimators' behaviour in estimating the overall effect across different levels of sparsity of events. We performed a simulation study consisting of two evaluations: an overall comparison of estimators unconditional on the number of observed zero cells, and an additional one conditioning on the number of observed zero cells. Estimators that were modestly robust when (interval) estimating the overall treatment effect across a range of heterogeneity assumptions were the Sidik-Jonkman, Hartung-Makambi and improved Paule-Mandel estimators. The relative performance of estimators did not materially differ between making a predefined or a data-driven choice. Our investigations confirmed that heterogeneity in such settings cannot be estimated reliably. Estimators whose performance depends strongly on the presence of heterogeneity should be avoided. The choice of estimator does not need to depend on whether or not zero cells are observed.
Computerized Analysis of Digital Photographs for Evaluation of Tooth Movement
Toodehzaeim, Mohammad Hossein; Karandish, Maryam; Karandish, Mohammad Nabi
2015-01-01
Objectives: Various methods have been introduced for evaluation of tooth movement in orthodontics. The challenge is to adopt the most accurate and most beneficial method for patients. This study was designed to introduce analysis of digital photographs with AutoCAD software as a method to evaluate tooth movement and to assess the reliability of this method. Materials and Methods: Eighteen patients were evaluated in this study. Three intraoral digital images from the buccal view were captured from each patient within a half-hour interval. All the photos were imported into AutoCAD 2011 software, calibrated, and the distance between the canine and molar hooks was measured. The data were analyzed using the intraclass correlation coefficient. Results: Photographs were found to have a high reliability coefficient (P > 0.05). Conclusion: The introduced method is an accurate, efficient and reliable method for evaluation of tooth movement. PMID:26622272
Waltemeyer, Scott D.
2008-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges and culverts, for open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to frequency analysis of 140 of the 293 gaging stations; this application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for temporal and spatial sampling errors, were then applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent (mean value 62, median value 59) for the 100-year flood. The standard error of prediction in the 1996 investigation ranged from 41 to 96 percent (mean value 67, median value 68) for the 100-year flood analyzed by using generalized least-squares regression analysis. Overall, the equations based on generalized least-squares regression techniques are more reliable than those in the 1996 report because of the increased length of record and an improved geographic information system (GIS) method to determine basin and climatic characteristics. Flood-frequency estimates can be made for ungaged sites upstream or downstream from gaging stations by using a method that transfers flood-frequency data at the gaging station to the ungaged site with a drainage-area ratio adjustment equation; the peak discharge for a given recurrence interval at the gaging station, the drainage-area ratio, and the drainage-area exponent from the regional regression equation of the respective region are used to transfer the peak discharge for the recurrence interval to the ungaged site. Maximum observed peak discharge as related to drainage area was determined for New Mexico. Extreme events are commonly used in the design and appraisal of bridge crossings and other structures. Bridge-scour evaluations are commonly made by using the 500-year peak discharge for these appraisals.
Peak-discharge data collected at 293 gaging stations and 367 miscellaneous sites were used to develop a maximum peak-discharge relation as an alternative method of estimating peak discharge of an extreme event such as a maximum probable flood.
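The drainage-area ratio transfer described in the preceding entry reduces to a one-line formula; the sketch below uses placeholder discharge, areas, and exponent rather than values from the report.

```python
def transfer_peak(q_gage, area_gage, area_ungaged, exponent):
    """Transfer a T-year peak discharge from a gaging station to an ungaged
    site on the same stream using the drainage-area ratio adjustment."""
    return q_gage * (area_ungaged / area_gage) ** exponent

q100_gage = 12000.0     # hypothetical 100-year peak at the gage (ft^3/s)
print(transfer_peak(q100_gage, area_gage=350.0, area_ungaged=410.0,
                    exponent=0.55))    # exponent taken from a regional equation
```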
Characterization of a thermally imidized soluble polyimide film
NASA Technical Reports Server (NTRS)
Young, Philip R.; Davis, Judith R. J.; Chang, A. C.; Richardson, John N.
1989-01-01
A soluble aromatic poly(amic acid) film was converted to a soluble polyimide by staging at 25 °C intervals to 325 °C and characterized at each interval by several analytical methods. The behavior observed was consistent with an interpretation that a reduction occurred in the molecular weight of the poly(amic acid) during the initial stages of cure before the ultimate molecular weight was achieved as a polyimide. This interpretation was supported by the results of solution viscosity, gel permeation chromatography, low-angle laser light scattering photometry and infrared spectroscopy analysis. The results serve to increase the fundamental understanding of how polyimides are thermally formed from poly(amic acids).
Anselmi, Nicola; Salucci, Marco; Rocca, Paolo; Massa, Andrea
2016-01-01
The sensitivity of the power pattern radiated by a linear array to both calibration errors and mutual coupling effects is addressed. Starting from the knowledge of the nominal excitations of the array elements and the maximum uncertainty on their amplitudes, the bounds of the pattern deviations from the ideal one are analytically derived by exploiting the Circular Interval Analysis (CIA). A set of representative numerical results is reported and discussed to assess the effectiveness and the reliability of the proposed approach, also in comparison with state-of-the-art methods and full-wave simulations. PMID:27258274
Machine learning approaches for estimation of prediction interval for the model output.
Shrestha, Durga L; Solomatine, Dimitri P
2006-03-01
A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of the empirical distribution of the errors associated with all instances belonging to the cluster under consideration, and is propagated from each cluster to the examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using the computed prediction limits as targets, and finally, this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods for estimating the prediction interval. A new method for evaluating the performance of prediction interval estimation is proposed as well.
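A loose sketch of the clustering idea, with hard k-means standing in for fuzzy c-means and without membership-grade weighting or the final regression on prediction limits: cluster the inputs, take empirical error quantiles per cluster, and attach them to new predictions. Data and cluster count are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 500)[:, None]
y = 2 * x.ravel() + rng.normal(0, 0.2 + 0.3 * x.ravel(), 500)  # heteroscedastic

model = LinearRegression().fit(x, y)
errors = y - model.predict(x)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(x)
limits = {k: np.percentile(errors[km.labels_ == k], [5, 95]) for k in range(4)}

x_new = np.array([[1.0], [9.0]])
for xi, k, yhat in zip(x_new.ravel(), km.predict(x_new), model.predict(x_new)):
    lo, hi = limits[k]                     # cluster-specific error quantiles
    print(f"x={xi:.1f}: 90% PI [{yhat + lo:.2f}, {yhat + hi:.2f}]")
```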
Lui, Kung-Jong; Chang, Kuang-Chao
2016-10-01
When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in the contingency table analysis. We also demonstrate that both interval estimators based on the WLS method and interval estimators based on Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to the average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.
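For orientation only (this is not the authors' WLS or Mantel-Haenszel estimators), a minimal Wald interval for the ratio of two Poisson mean frequencies looks like this; the counts and exposure times are invented.

```python
import numpy as np

events_a, time_a = 42, 120.0    # exacerbations, patient-periods on treatment A
events_b, time_b = 30, 118.0    # exacerbations, patient-periods on treatment B

log_ratio = np.log((events_a / time_a) / (events_b / time_b))
se = np.sqrt(1 / events_a + 1 / events_b)         # SE of the log rate ratio
ci = np.exp(log_ratio + np.array([-1.96, 1.96]) * se)
print(f"rate ratio {np.exp(log_ratio):.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```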
The use of the DInSAR method in the monitoring of road damage caused by mining activities
NASA Astrophysics Data System (ADS)
Murdzek, Radosław; Malik, Hubert; Leśniak, Andrzej
2018-04-01
This paper reviews existing remote sensing methods of road damage detection and demonstrates the possibility of using the DInSAR (Differential Interferometry SAR) method to identify endangered road sections. In this study, two radar images collected by the Sentinel-1 satellite were used. The images were acquired at a 24-day interval in 2015. The analysis allowed us to estimate the scale of the post-mining deformation that occurred in Upper Silesia and to indicate areas where road infrastructure is particularly vulnerable to damage.
Multifractal analysis of mobile social networks
NASA Astrophysics Data System (ADS)
Zheng, Wei; Zhang, Zifeng; Deng, Yufan
2017-09-01
As Wireless Fidelity (Wi-Fi)-enabled handheld devices have come into widespread use, mobile social networks (MSNs) have been attracting extensive attention. Fractal approaches have also been widely applied to characterize natural networks, as useful tools to depict their spatial distribution and scaling properties. Moreover, when the complexity of the spatial distribution of MSNs cannot be properly characterized by a single fractal dimension, multifractal analysis is required. For further research, we introduce a multifractal analysis method based on a box-covering algorithm to describe the structure of MSNs. Using this method, we find that the networks are multifractal at different time intervals. The simulation results demonstrate that the proposed method is efficient for analyzing the multifractal characteristics of MSNs, which provides a distribution of singularities adequately describing both the heterogeneity of fractal patterns and the statistics of measurements across spatial scales in MSNs.
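A box-counting sketch of a multifractal spectrum for a 2-D point set standing in for node positions in one network snapshot; the paper's box-covering operates on network topology, so this is only an analogy, and the point pattern, box sizes, and q values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
pts = rng.random((4000, 2)) ** np.array([1.0, 3.0])   # deliberately non-uniform

qs = np.array([-2.0, 0.0, 2.0, 4.0])                  # moment orders (q = 1 needs
sizes = np.array([1/4, 1/8, 1/16, 1/32, 1/64])        #  a separate formula)
log_z = np.empty((qs.size, sizes.size))
for j, eps in enumerate(sizes):
    idx = np.floor(pts / eps).astype(int)             # box index per point
    _, counts = np.unique(idx[:, 0] * 10_000 + idx[:, 1], return_counts=True)
    p = counts / counts.sum()                         # measure per occupied box
    for i, q in enumerate(qs):
        log_z[i, j] = np.log((p ** q).sum())          # partition function

for i, q in enumerate(qs):
    tau = np.polyfit(np.log(sizes), log_z[i], 1)[0]   # tau(q) from the slope
    print(f"q={q:+.0f}: D_q ~ {tau / (q - 1):.2f}")   # generalized dimension
```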
Determining association constants from titration experiments in supramolecular chemistry.
Thordarson, Pall
2011-03-01
The most common approach for quantifying interactions in supramolecular chemistry is a titration of the guest into a solution of the host, noting the changes in some physical property through NMR, UV-Vis, fluorescence or other techniques. Despite the apparent simplicity of this approach, there are several issues that need to be carefully addressed to ensure that the final results are reliable. These include the use of non-linear rather than linear regression methods, careful choice of the stoichiometric binding model, the choice of method (e.g., NMR vs. UV-Vis) and concentration of host, the application of advanced data analysis methods such as global analysis and, finally, the estimation of uncertainties and confidence intervals for the results obtained. This tutorial review gives a systematic overview of all these issues, highlighting some of the key messages herein with simulated data analysis examples.
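As a hedged example of the non-linear regression the review advocates, the sketch below fits a 1:1 host-guest binding isotherm to simulated titration data to recover the association constant Ka; the concentrations, true Ka, and noise level are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

H0 = 1e-4                                    # fixed host concentration (M)

def isotherm(G0, ka, d_eps):
    """Observed signal change for 1:1 binding at total guest concentration G0."""
    b = G0 + H0 + 1.0 / ka
    hg = 0.5 * (b - np.sqrt(b ** 2 - 4.0 * G0 * H0))   # complex concentration
    return d_eps * hg

rng = np.random.default_rng(9)
G0 = np.linspace(0, 1e-3, 15)                # titration points
ka_true, d_eps_true = 5e3, 1.2e4
signal = isotherm(G0, ka_true, d_eps_true) + rng.normal(0, 2e-3, G0.size)

(ka_fit, d_eps_fit), cov = curve_fit(isotherm, G0, signal, p0=[1e3, 1e4])
ka_err = 1.96 * np.sqrt(cov[0, 0])           # rough 95% uncertainty on Ka
print(f"Ka = {ka_fit:.0f} +/- {ka_err:.0f} M^-1 (true {ka_true:.0f})")
```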
Advances in the meta-analysis of heterogeneous clinical trials II: The quality effects model.
Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M
2015-11-01
This article examines the performance of the updated quality effects (QE) estimator for meta-analysis of heterogeneous studies. It is shown that this approach leads to a decreased mean squared error (MSE) of the estimator while maintaining the nominal level of coverage probability of the confidence interval. Extensive simulation studies confirm that this approach leads to the maintenance of the correct coverage probability of the confidence interval, regardless of the level of heterogeneity, as well as a lower observed variance compared to the random effects (RE) model. The QE model is robust to subjectivity in quality assessment down to completely random entry, in which case its MSE equals that of the RE estimator. When the proposed QE method is applied to a meta-analysis of magnesium for myocardial infarction data, the pooled mortality odds ratio (OR) becomes 0.81 (95% CI 0.61-1.08) which favors the larger studies but also reflects the increased uncertainty around the pooled estimate. In comparison, under the RE model, the pooled mortality OR is 0.71 (95% CI 0.57-0.89) which is less conservative than that of the QE results. The new estimation method has been implemented into the free meta-analysis software MetaXL which allows comparison of alternative estimators and can be downloaded from www.epigear.com. Copyright © 2015 Elsevier Inc. All rights reserved.
Steerable dyadic wavelet transform and interval wavelets for enhancement of digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Koren, Iztok; Yang, Wuhai; Taylor, Fred J.
1995-04-01
This paper describes two approaches for accomplishing interactive feature analysis by overcomplete multiresolution representations. We show quantitatively that transform coefficients, modified by an adaptive non-linear operator, can make unseen or barely seen features of a mammogram more obvious without requiring additional radiation. Our results are compared with traditional image enhancement techniques by measuring the local contrast of known mammographic features. We design a filter bank representing a steerable dyadic wavelet transform that can be used for multiresolution analysis along arbitrary orientations. Digital mammograms are enhanced by orientation analysis performed by a steerable dyadic wavelet transform. Arbitrary regions of interest (ROI) are enhanced by Deslauriers-Dubuc interpolation representations on an interval. We demonstrate that our methods can provide radiologists with an interactive capability to support localized processing of selected (suspicious) areas (lesions). Features extracted from multiscale representations can provide an adaptive mechanism for accomplishing local contrast enhancement. Improving the visualization of breast pathology can improve the chances of early detection while requiring less time to evaluate mammograms for most patients.
Vascular Disease, ESRD, and Death: Interpreting Competing Risk Analyses
Coresh, Josef; Segev, Dorry L.; Kucirka, Lauren M.; Tighiouart, Hocine; Sarnak, Mark J.
2012-01-01
Summary Background and objectives Vascular disease, a common condition in CKD, is a risk factor for mortality and ESRD. Optimal patient care requires accurate estimation and ordering of these competing risks. Design, setting, participants, & measurements This is a prospective cohort study of screened (n=885) and randomized participants (n=837) in the Modification of Diet in Renal Disease study (original study enrollment, 1989–1992), evaluating the association of vascular disease with ESRD and pre-ESRD mortality using standard survival analysis and competing risk regression. Results The method of analysis resulted in markedly different estimates. Cumulative incidence by standard analysis (censoring at the competing event) implied that, with vascular disease, the 15-year incidence was 66% and 51% for ESRD and pre-ESRD death, respectively. A more accurate representation of absolute risk was estimated with competing risk regression: 15-year incidence was 54% and 29% for ESRD and pre-ESRD death, respectively. For the association of vascular disease with pre-ESRD death, estimates of relative risk by the two methods were similar (standard survival analysis adjusted hazard ratio, 1.63; 95% confidence interval, 1.20–2.20; competing risk regression adjusted subhazard ratio, 1.57; 95% confidence interval, 1.15–2.14). In contrast, the hazard and subhazard ratios differed substantially for other associations, such as GFR and pre-ESRD mortality. Conclusions When competing events exist, absolute risk is better estimated using competing risk regression, but etiologic associations by this method must be carefully interpreted. The presence of vascular disease in CKD decreases the likelihood of survival to ESRD, independent of age and other risk factors. PMID:22859747
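The abstract's central point can be reproduced numerically with simulated data: censoring at the competing event (a naive 1 − Kaplan-Meier estimate) overstates the cumulative incidence of ESRD relative to a proper competing-risk (Aalen-Johansen) estimate. The hazards, censoring, and sample size below are assumptions, not the study data.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 2000
t_esrd = rng.exponential(12.0, n)          # latent time to ESRD (years)
t_death = rng.exponential(15.0, n)         # latent time to pre-ESRD death
t_cens = rng.uniform(5.0, 20.0, n)         # administrative censoring
time = np.minimum.reduce([t_esrd, t_death, t_cens])
event = np.select([t_esrd == time, t_death == time], [1, 2], default=0)

def naive_one_minus_km(time, event, cause, horizon):
    surv, order = 1.0, np.argsort(time)
    for t, e in zip(time[order], event[order]):
        if t > horizon:
            break
        if e == cause:                     # competing events treated as censored
            surv *= 1.0 - 1.0 / np.sum(time >= t)
    return 1.0 - surv

def aalen_johansen(time, event, cause, horizon):
    cif, surv, order = 0.0, 1.0, np.argsort(time)
    for t, e in zip(time[order], event[order]):
        if t > horizon:
            break
        at_risk = np.sum(time >= t)
        if e == cause:
            cif += surv / at_risk          # uses all-cause survival just before t
        if e != 0:
            surv *= 1.0 - 1.0 / at_risk    # all-cause Kaplan-Meier update
    return cif

print(f"15-year ESRD risk, naive 1-KM: {naive_one_minus_km(time, event, 1, 15):.2f}")
print(f"15-year ESRD risk, Aalen-Johansen: {aalen_johansen(time, event, 1, 15):.2f}")
```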
NASA Astrophysics Data System (ADS)
Krepper, Gabriela; Romeo, Florencia; Fernandes, David Douglas de Sousa; Diniz, Paulo Henrique Gonçalves Dias; de Araújo, Mário César Ugulino; Di Nezio, María Susana; Pistonesi, Marcelo Fabián; Centurión, María Eugenia
2018-01-01
Determining fat content in hamburgers is very important to minimize or control the negative effects of fat on human health, effects such as cardiovascular diseases and obesity, which are caused by the high consumption of saturated fatty acids and cholesterol. This study proposed an alternative analytical method based on Near Infrared Spectroscopy (NIR) and the Successive Projections Algorithm for interval selection in Partial Least Squares regression (iSPA-PLS) for fat content determination in commercial chicken hamburgers. For this, 70 hamburger samples with a fat content ranging from 14.27 to 32.12 mg kg-1 were prepared based on the upper limit recommended by the Argentinean Food Codex, which is 20% (w w-1). NIR spectra were recorded and then preprocessed by applying different approaches: baseline correction, SNV, MSC, and Savitzky-Golay smoothing. For comparison, full-spectrum PLS and interval PLS were also used. The best performance for the prediction set was obtained for the first-derivative Savitzky-Golay smoothing with a second-order polynomial and window size of 19 points, achieving a coefficient of correlation of 0.94, RMSEP of 1.59 mg kg-1, REP of 7.69% and RPD of 3.02. The proposed methodology represents an excellent alternative to the conventional Soxhlet extraction method, since waste generation is avoided, yet without the use of either chemical reagents or solvents, which follows the primary principles of Green Chemistry. The new method was successfully applied to chicken hamburger analysis, and the results agreed with reference values at a 95% confidence level, making it very attractive for routine analysis.
Cieslak, Wendy; Pap, Kathleen; Bunch, Dustin R; Reineks, Edmunds; Jackson, Raymond; Steinle, Roxanne; Wang, Sihe
2013-02-01
Chromium (Cr), a trace metal element, is implicated in diabetes and cardiovascular disease. A hypochromic state has been associated with poor blood glucose control and unfavorable lipid metabolism. Sensitive and accurate measurement of blood chromium is very important to assess the chromium nutritional status. However, interferents in biological matrices and contamination make the sensitive analysis challenging. The primary goal of this study was to develop a highly sensitive method for quantification of total Cr in whole blood by inductively coupled plasma mass spectrometry (ICP-MS) and to validate the reference interval in a local healthy population. This method was developed on an ICP-MS with a collision/reaction cell. Interference was minimized using both kinetic energy discrimination between the quadrupole and hexapole and a selective collision gas (helium). Reference interval was validated in whole blood samples (n=51) collected in trace element free EDTA tubes from healthy adults (12 males, 39 females), aged 19-64 years (38.8±12.6), after a minimum of 8 h fasting. Blood samples were aliquoted into cryogenic vials and stored at -70 °C until analysis. The assay linearity was 3.42 to 1446.59 nmol/L with an accuracy of 87.7 to 99.8%. The high sensitivity was achieved by minimization of interference through selective kinetic energy discrimination and selective collision using helium. The reference interval for total Cr using a non-parametric method was verified to be 3.92 to 7.48 nmol/L. This validated ICP-MS methodology is highly sensitive and selective for measuring total Cr in whole blood. Copyright © 2012 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Krepper, Gabriela; Romeo, Florencia; Fernandes, David Douglas de Sousa; Diniz, Paulo Henrique Gonçalves Dias; de Araújo, Mário César Ugulino; Di Nezio, María Susana; Pistonesi, Marcelo Fabián; Centurión, María Eugenia
2018-01-15
Determining fat content in hamburgers is very important to minimize or control the negative effects of fat on human health, such as cardiovascular diseases and obesity, which are caused by the high consumption of saturated fatty acids and cholesterol. This study proposed an alternative analytical method based on Near Infrared Spectroscopy (NIR) and the Successive Projections Algorithm for interval selection in Partial Least Squares regression (iSPA-PLS) for fat content determination in commercial chicken hamburgers. For this, 70 hamburger samples with a fat content ranging from 14.27 to 32.12 mg kg-1 were prepared based on the upper limit recommended by the Argentinean Food Codex, which is 20% (w w-1). NIR spectra were recorded and then preprocessed by applying different approaches: baseline correction, SNV, MSC, and Savitzky-Golay smoothing. For comparison, full-spectrum PLS and interval PLS were also used. The best performance for the prediction set was obtained with first-derivative Savitzky-Golay smoothing using a second-order polynomial and a window size of 19 points, achieving a coefficient of correlation of 0.94, RMSEP of 1.59 mg kg-1, REP of 7.69% and RPD of 3.02. The proposed methodology represents an excellent alternative to the conventional Soxhlet extraction method: waste generation is avoided and no chemical reagents or solvents are used, in keeping with the primary principles of Green Chemistry. The new method was successfully applied to chicken hamburger analysis, and the results agreed with the reference values at a 95% confidence level, making it very attractive for routine analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S
2017-12-01
The objective was to statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best suited to the distributional characteristics of a given data set. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed best in most scenarios. The hierarchy of the performances of the three methods was only affected by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
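As a concrete point of reference for the parametric and non-parametric approaches compared above, the following minimal sketch (not the authors' code; the simulated data and sample size are illustrative) computes both flavours of 95% reference interval on a Gaussian sample with NumPy and SciPy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=5.0, scale=1.2, size=120)   # simulated healthy-reference analyte values

# Parametric estimate: assume a Gaussian and take the central 95% range (mean +/- 1.96 SD)
mean, sd = x.mean(), x.std(ddof=1)
z = stats.norm.ppf(0.975)
parametric = (mean - z * sd, mean + z * sd)

# Non-parametric estimate: empirical 2.5th and 97.5th percentiles
nonparametric = tuple(np.percentile(x, [2.5, 97.5]))

print(f"parametric 95% reference interval:     {parametric[0]:.2f} to {parametric[1]:.2f}")
print(f"non-parametric 95% reference interval: {nonparametric[0]:.2f} to {nonparametric[1]:.2f}")
```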
A Bayesian bird's eye view of ‘Replications of important results in social psychology’
Schönbrodt, Felix D.; Yao, Yuling; Gelman, Andrew; Wagenmakers, Eric-Jan
2017-01-01
We applied three Bayesian methods to reanalyse the preregistered contributions to the Social Psychology special issue ‘Replications of Important Results in Social Psychology’ (Nosek & Lakens 2014 Registered reports: a method to increase the credibility of published results. Soc. Psychol. 45, 137–141. (doi:10.1027/1864-9335/a000192)). First, individual-experiment Bayesian parameter estimation revealed that for directed effect size measures, only three out of 44 central 95% credible intervals did not overlap with zero and fell in the expected direction. For undirected effect size measures, only four out of 59 credible intervals contained values greater than 0.10 (10% of variance explained) and only 19 intervals contained values larger than 0.05. Second, a Bayesian random-effects meta-analysis for all 38 t-tests showed that only one out of the 38 hierarchically estimated credible intervals did not overlap with zero and fell in the expected direction. Third, a Bayes factor hypothesis test was used to quantify the evidence for the null hypothesis against a default one-sided alternative. Only seven out of 60 Bayes factors indicated non-anecdotal support in favour of the alternative hypothesis (BF10 > 3), whereas 51 Bayes factors indicated at least some support for the null hypothesis. We hope that future analyses of replication success will embrace a more inclusive statistical approach by adopting a wider range of complementary techniques. PMID:28280547
NASA Astrophysics Data System (ADS)
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu; Zhu, Feng
2017-10-01
Accurate material parameters are critical for constructing high-biofidelity finite element (FE) models. However, it is difficult to obtain brain tissue parameters accurately because of irregular geometry and uncertain boundary conditions. Considering the complexity of material testing and the uncertainty in the friction coefficient, a computational inverse method for identifying the viscoelastic material parameters of brain tissue is presented based on interval analysis. First, intervals are used to quantify the friction coefficient in the boundary condition. The inverse problem of material parameter identification under an uncertain friction coefficient is then transformed into two deterministic inverse problems. Finally, an intelligent optimization algorithm is used to solve the two deterministic inverse problems quickly and accurately, so that the range of the material parameters can be obtained without requiring a large number of samples. The efficiency and convergence of this method are demonstrated by identifying the material parameters of the thalamus. The proposed method provides a potentially effective tool for building high-biofidelity human finite element models for the study of traffic accident injury.
Varmazyar, Mohsen; Dehghanbaghi, Maryam; Afkhami, Mehdi
2016-10-01
The Balanced Scorecard (BSC) is a strategic evaluation tool that uses both financial and non-financial indicators to determine the business performance of organizations or companies. In this paper, a new integrated approach based on the BSC and multi-criteria decision making (MCDM) methods is proposed to evaluate the performance of the research centers of a research and technology organization (RTO) in Iran. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is employed to reflect the interdependencies among BSC perspectives. Then, the Analytic Network Process (ANP) is utilized to weight the indices influencing the considered problem. In the next step, we apply four MCDM methods, including Additive Ratio Assessment (ARAS), Complex Proportional Assessment (COPRAS), Multi-Objective Optimization by Ratio Analysis (MOORA), and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), for ranking the alternatives. Finally, the utility interval technique is applied to combine the ranking results of the MCDM methods. Weighted utility intervals are computed by constructing a correlation matrix between the ranking methods. A real case is presented to show the efficacy of the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
Timing of Radiotherapy and Outcome in Patients Receiving Adjuvant Endocrine Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karlsson, Per, E-mail: per.karlsson@oncology.gu.s; Cole, Bernard F.; International Breast Cancer Study Group Statistical Center, Department of Biostatistics and Computational Biology, Dana-Farber Cancer Institute, Boston, MA
2011-06-01
Purpose: To evaluate the association between the interval from breast-conserving surgery (BCS) to radiotherapy (RT) and the clinical outcome among patients treated with adjuvant endocrine therapy. Patients and Methods: Patient information was obtained from three International Breast Cancer Study Group trials. The analysis was restricted to 964 patients treated with BCS and adjuvant endocrine therapy. The patients were divided into two groups according to the median number of days between BCS and RT and into four groups according to the quartile of time between BCS and RT. The endpoints were the interval to local recurrence, disease-free survival, and overall survival. Proportional hazards regression analysis was used to perform comparisons after adjustment for baseline factors. Results: The median interval between BCS and RT was 77 days. RT timing was significantly associated with age, menopausal status, and estrogen receptor status. After adjustment for these factors, no significant effect of an RT delay ≤20 weeks was found. The adjusted hazard ratio for RT within 77 days vs. after 77 days was 0.94 (95% confidence interval [CI], 0.47-1.87) for the interval to local recurrence, 1.05 (95% CI, 0.82-1.34) for disease-free survival, and 1.07 (95% CI, 0.77-1.49) for overall survival. For the interval to local recurrence the adjusted hazard ratio for ≤48, 49-77, and 78-112 days was 0.90 (95% CI, 0.34-2.37), 0.86 (95% CI, 0.33-2.25), and 0.89 (95% CI, 0.33-2.41), respectively, relative to ≥113 days. Conclusion: An RT delay of ≤20 weeks was significantly associated with baseline factors such as age, menopausal status, and estrogen-receptor status. After adjustment for these factors, the timing of RT was not significantly associated with the interval to local recurrence, disease-free survival, or overall survival.
Expansion of Microbial Forensics
Schmedes, Sarah E.; Sajantila, Antti
2016-01-01
Microbial forensics has been defined as the discipline of applying scientific methods to the analysis of evidence related to bioterrorism, biocrimes, hoaxes, or the accidental release of a biological agent or toxin for attribution purposes. Over the past 15 years, technology, particularly massively parallel sequencing, and bioinformatics advances now allow the characterization of microorganisms for a variety of human forensic applications, such as human identification, body fluid characterization, postmortem interval estimation, and biocrimes involving tracking of infectious agents. Thus, microbial forensics should be more broadly described as the discipline of applying scientific methods to the analysis of microbial evidence in criminal and civil cases for investigative purposes. PMID:26912746
Use of thermal neutron reflection method for chemical analysis of bulk samples
NASA Astrophysics Data System (ADS)
Papp, A.; Csikai, J.
2014-09-01
Microscopic, σβ, and macroscopic, Σβ, reflection cross-sections of thermal neutrons averaged over bulk samples as a function of thickness (z) are given. The σβ values are additive even for bulk samples in the z=0.5-8 cm interval and so the σβmol(z) function could be given for hydrogenous substances, including some illicit drugs, explosives and hiding materials of ~1000 cm3 dimensions. The calculated excess counts agree with the measured R(z) values. For the identification of concealed objects and chemical analysis of bulky samples, different neutron methods need to be used simultaneously.
Speckle correlation method used to measure object's in-plane velocity.
Smíd, Petr; Horváth, Pavel; Hrabovský, Miroslav
2007-06-20
We present a measurement of an object's in-plane velocity in one direction by use of the speckle correlation method. Numerical correlations of speckle patterns recorded periodically during motion of the object under investigation give information used to evaluate the object's in-plane velocity. The proposed optical setup uses a detection plane in the image field and enables one to detect the object's velocity within the interval 10-150 µm s⁻¹. Simulation analysis shows a way of controlling the measuring range. The presented theory, simulation analysis, and setup are verified through an experiment measuring the velocity profile of an object.
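The displacement estimation underlying speckle correlation can be illustrated with a short sketch: an FFT-based cross-correlation of two frames locates the in-plane shift, which divided by the frame interval gives a velocity. This is a generic sketch rather than the authors' setup; the frame interval, pixel pitch and synthetic speckle pattern are assumptions chosen so the result falls inside the 10-150 µm s⁻¹ range quoted above.

```python
import numpy as np

def shift_estimate(frame_a, frame_b):
    """Integer-pixel in-plane shift between two speckle frames, from the peak
    of their FFT-based circular cross-correlation."""
    fa = np.fft.fft2(frame_a - frame_a.mean())
    fb = np.fft.fft2(frame_b - frame_b.mean())
    xcorr = np.fft.ifft2(np.conj(fa) * fb).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Indices above N/2 correspond to negative shifts
    return np.array([p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)])

rng = np.random.default_rng(0)
speckle = rng.random((256, 256))
moved = np.roll(speckle, shift=(0, 3), axis=(0, 1))   # simulate a 3-pixel in-plane motion

dt = 0.5              # s between recorded frames (assumed)
pixel_pitch = 10e-6   # m per pixel in the detection plane (assumed)
shift_px = shift_estimate(speckle, moved)
velocity = shift_px * pixel_pitch / dt                # here 60 um/s along one axis
print("estimated shift [px]:", shift_px, "-> velocity [m/s]:", velocity)
```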
Neuronal and network computation in the brain
NASA Astrophysics Data System (ADS)
Babloyantz, A.
1999-03-01
The concepts and methods of non-linear dynamics have been a powerful tool for studying some aspects of brain dynamics. In this paper we show how, from time series analysis of electroencephalograms in sick and healthy subjects, the chaotic nature of brain activity can be unveiled. This finding gave rise to the concept of spatiotemporal cortical chaotic networks, which in turn was the foundation for a simple brain-like device that is able to become attentive and perform pattern recognition and motion detection. A new method of time series analysis is also proposed which demonstrates for the first time the existence of a neuronal code in the interspike intervals of cochlear cells.
Unmanned aerial vehicle-based structure from motion biomass inventory estimates
NASA Astrophysics Data System (ADS)
Bedell, Emily; Leslie, Monique; Fankhauser, Katie; Burnett, Jonathan; Wing, Michael G.; Thomas, Evan A.
2017-04-01
Riparian vegetation restoration efforts require cost-effective, accurate, and replicable impact assessments. We present a method to use an unmanned aerial vehicle (UAV) equipped with a GoPro digital camera to collect photogrammetric data of a 0.8-ha riparian restoration. A three-dimensional point cloud was created from the photos using "structure from motion" techniques. The point cloud was analyzed and compared to traditional, ground-based monitoring techniques. Ground-truth data were collected on 6.3% of the study site and averaged across the entire site to report stem counts (stems/ha) in three height classes. The project site was divided into four analysis sections, one for derivation of parameters used in the UAV data analysis and the remaining three sections reserved for method validation. Comparing the ground-truth data to the UAV-generated data produced an overall error of 21.6% and indicated an R2 value of 0.98. A Bland-Altman analysis indicated a 95% probability that the UAV stems/section result will be within 61 stems/section of the ground-truth data. The ground-truth data are reported with an 80% confidence interval of ±1032 stems/ha; thus, the UAV was able to estimate stems well within this confidence interval.
Properties of Asymmetric Detrended Fluctuation Analysis in the time series of RR intervals
NASA Astrophysics Data System (ADS)
Piskorski, J.; Kosmider, M.; Mieszkowski, D.; Krauze, T.; Wykretowicz, A.; Guzik, P.
2018-02-01
Heart rate asymmetry is a phenomenon by which the accelerations and decelerations of heart rate behave differently, and this difference is consistent and unidirectional, i.e. in most of the analyzed recordings the inequalities have the same directions. So far, it has been established for variance and runs based types of descriptors of RR intervals time series. In this paper we apply the newly developed method of Asymmetric Detrended Fluctuation Analysis, which so far has mainly been used with economic time series, to the set of 420 stationary 30 min time series of RR intervals from young, healthy individuals aged between 20 and 40. This asymmetric approach introduces separate scaling exponents for rising and falling trends. We systematically study the presence of asymmetry in both global and local versions of this method. In this study global means "applying to the whole time series" and local means "applying to windows jumping along the recording". It is found that the correlation structure of the fluctuations left over after detrending in physiological time series shows strong asymmetric features in both magnitude, with α+ <α-, where α+ is related to heart rate decelerations and α- to heart rate accelerations, and the proportion of the signal in which the above inequality holds. A very similar effect is observed if asymmetric noise is added to a symmetric self-affine function. No such phenomena are observed in the same physiological data after shuffling or with a group of symmetric synthetic time series.
Jithesh, C.; Venkataramana, V.; Penumatsa, Narendravarma; Reddy, S. N.; Poornima, K. Y.; Rajasigamani, K.
2015-01-01
Objectives: To determine and compare potential differences in nickel release from three different orthodontic brackets in artificial saliva of different pH at different time intervals. Materials and Methods: Twenty-seven samples of three different orthodontic brackets were selected and grouped as 1, 2, and 3. Each group was divided into three subgroups depending on the type of orthodontic bracket, salivary pH, and time interval. The nickel release from each subgroup was analyzed using an inductively coupled plasma-atomic emission spectrophotometer (Perkin Elmer, Optima 2100 DV, USA). Quantitative analysis of nickel was performed three times, and the mean value was used as the result. ANOVA (F-test) was used to test the significance of differences among the groups at the 0.05 level of significance (P < 0.05). Descriptive statistics were used to calculate the mean, standard deviation, minimum, and maximum. SPSS 18 software (SPSS Ltd., Quarry Bay, Hong Kong; PASW Statistics 18) was used to analyze the study. Result: The analysis shows a significant difference between the three groups. Nickel release from the recycled stainless steel brackets was highest at pH 4.2 at all time intervals except 120 h. Conclusion: The study results show that nickel release from recycled stainless steel brackets is the highest, while metal-slot ceramic brackets release significantly less nickel. Recycled stainless steel brackets should therefore not be used for nickel-allergic patients; metal-slot ceramic brackets are advisable. PMID:26538924
Bae, Jong-Myon
2016-01-01
A common method for conducting a quantitative systematic review (QSR) of observational studies in nutritional epidemiology is the "highest versus lowest intake" method (HLM), in which only the effect size (ES) of the highest intake category of a food item relative to its lowest category is collected. In the interval collapsing method (ICM), suggested to enable maximum utilization of all available information, ES information is instead collected by collapsing all categories into a single category. This study aimed to compare the ES and summary effect size (SES) between the HLM and ICM. A QSR evaluating citrus fruit intake and the risk of pancreatic cancer, with the SES calculated using the HLM, was selected. The ES and SES were estimated by performing a meta-analysis using the fixed-effect model. The directionality and statistical significance of the ES and SES were used as criteria for determining the concordance between the HLM and ICM outcomes. No significant differences were observed in the directionality of the SES extracted by using the HLM or ICM. The application of the ICM, which uses a broader information base, yielded more consistent ES and SES, and narrower confidence intervals, than the HLM. The ICM is advantageous over the HLM owing to its higher statistical accuracy in extracting information for QSRs in nutritional epidemiology. The application of the ICM should hence be recommended for future studies.
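The fixed-effect meta-analysis step mentioned above amounts to inverse-variance pooling of per-study effect sizes. The sketch below (illustrative log relative risks and standard errors, not data from the cited QSR) shows an SES and its 95% confidence interval computed this way.

```python
import numpy as np
from scipy import stats

# Illustrative per-study effect sizes: log relative risks and their standard errors
log_rr = np.array([-0.22, -0.10, -0.35, 0.05])
se = np.array([0.12, 0.18, 0.20, 0.15])

# Fixed-effect (inverse-variance) summary effect size and 95% confidence interval
w = 1.0 / se**2
ses = np.sum(w * log_rr) / np.sum(w)
ses_se = np.sqrt(1.0 / np.sum(w))
z = stats.norm.ppf(0.975)
lo, hi = ses - z * ses_se, ses + z * ses_se

print(f"summary RR = {np.exp(ses):.3f}, 95% CI {np.exp(lo):.3f} to {np.exp(hi):.3f}")
```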
Ryu, Ehri; Cheong, Jeewon
2017-01-01
In this article, we evaluated the performance of statistical methods in single-group and multi-group analysis approaches for testing group difference in indirect effects and for testing simple indirect effects in each group. We also investigated whether the performance of the methods in the single-group approach was affected when the assumption of equal variance was not satisfied. The assumption was critical for the performance of the two methods in the single-group analysis: the method using a product term for testing the group difference in a single path coefficient, and the Wald test for testing the group difference in the indirect effect. Bootstrap confidence intervals in the single-group approach and all methods in the multi-group approach were not affected by the violation of the assumption. We compared the performance of the methods and provided recommendations. PMID:28553248
Multiscale analysis of heart rate dynamics: entropy and time irreversibility measures.
Costa, Madalena D; Peng, Chung-Kang; Goldberger, Ary L
2008-06-01
Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and non-equilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques, is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools--multiscale entropy and multiscale time irreversibility--are able to extract information from cardiac interbeat interval time series not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs.
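A minimal sketch of the multiscale entropy idea follows: the series is coarse-grained at several scales and a simplified, unoptimized sample entropy is computed at each scale. The surrogate RR series, the parameters m = 2 and r = 0.15 SD, and the chosen scales are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Simplified SampEn(m, r): negative log of the ratio of (m+1)-point to
    m-point template matches under the Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    def matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (the multiscale step)."""
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(1)
rr = 0.8 + 0.05 * rng.standard_normal(3000)   # surrogate interbeat (RR) interval series, in s
r = 0.15 * rr.std()                           # tolerance fixed from the original series
for scale in (1, 2, 5, 10):
    print(scale, round(sample_entropy(coarse_grain(rr, scale), m=2, r=r), 3))
```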
Confidence Intervals for Error Rates Observed in Coded Communications Systems
NASA Astrophysics Data System (ADS)
Hamkins, J.
2015-05-01
We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
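One of the standard exact methods reviewed above is the Clopper-Pearson binomial interval, sketched below for a simulated CWER; the error counts and trial lengths are illustrative. Note this is not the paper's new moment-based BER interval, which accounts for dependence between bit errors within a codeword.

```python
from scipy import stats

def clopper_pearson(k, n, conf=0.95):
    """Exact two-sided binomial confidence interval for k errors in n trials."""
    alpha = 1.0 - conf
    lo = 0.0 if k == 0 else stats.beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else stats.beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# E.g. 3 codeword errors observed in 10 million simulated codewords
k, n = 3, 10_000_000
lo, hi = clopper_pearson(k, n)
print(f"CWER estimate {k/n:.2e}, 95% CI [{lo:.2e}, {hi:.2e}]")

# An error-free run: the upper bound shows what CWER the simulation length can certify
_, hi0 = clopper_pearson(0, n)
print(f"zero errors in {n} trials -> 95% upper bound {hi0:.2e}")
```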
Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.
Lo, Y C; Armbruster, David A
2012-04-01
Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers and testing samples from a sufficient number of healthy reference individuals. Intervals cited from the historical literature are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP Evaluator software program, which is based on the CLSI/IFCC C28-A guideline and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes in a Chinese ethnic population. The intervals were defined for each gender separately and for the genders combined, and gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to successfully adapt the intervals for their facilities using the reference interval transference technique based on a method comparison study.
Rahman, Nafisur; Kashif, Mohammad
2010-03-01
Point and interval hypothesis tests performed to validate two simple and economical kinetic spectrophotometric methods for the assay of lansoprazole are described. The methods are based on the formation of chelate complexes of the drug with Fe(III) and Zn(II). The reaction is followed spectrophotometrically by measuring the rate of change of absorbance of the coloured chelates of the drug with Fe(III) and Zn(II) at 445 and 510 nm, respectively. The stoichiometric ratios of lansoprazole to Fe(III) and Zn(II) in the complexes were found to be 1:1 and 2:1, respectively. The initial-rate and fixed-time methods are adopted for determination of drug concentrations. The calibration graphs are linear in the ranges 50-200 µg ml⁻¹ (initial-rate method) and 20-180 µg ml⁻¹ (fixed-time method) for the lansoprazole-Fe(III) complex, and 120-300 µg ml⁻¹ (initial-rate method) and 90-210 µg ml⁻¹ (fixed-time method) for the lansoprazole-Zn(II) complex. The inter-day and intra-day precision data showed good accuracy and precision of the proposed procedure for the analysis of lansoprazole. The point and interval hypothesis tests indicate that the proposed procedures are not biased. Copyright © 2010 John Wiley & Sons, Ltd.
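Both the initial-rate and fixed-time procedures reduce to a linear calibration of a measured response against concentration. The following sketch (synthetic data; the slope, noise level and "unknown" reading are assumptions) shows such a calibration and the back-calculation of an unknown concentration with SciPy.

```python
import numpy as np
from scipy import stats

# Synthetic initial-rate calibration: rate of absorbance change versus concentration
conc = np.array([50, 75, 100, 125, 150, 175, 200], dtype=float)          # ug/mL
rate = 2.1e-4 * conc + np.random.default_rng(3).normal(0, 5e-4, conc.size)

fit = stats.linregress(conc, rate)
print(f"slope {fit.slope:.3e}, intercept {fit.intercept:.3e}, r = {fit.rvalue:.4f}")

# Back-calculate an unknown sample from its measured initial rate
unknown_rate = 0.025
print("estimated concentration (ug/mL):", (unknown_rate - fit.intercept) / fit.slope)
```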
Rock classification based on resistivity patterns in electrical borehole wall images
NASA Astrophysics Data System (ADS)
Linek, Margarete; Jungmann, Matthias; Berlage, Thomas; Pechnig, Renate; Clauser, Christoph
2007-06-01
Electrical borehole wall images represent grey-level-coded micro-resistivity measurements at the borehole wall. Different scientific methods have been implemented to transform image data into quantitative log curves. We introduce a pattern recognition technique applying texture analysis, which uses second-order statistics based on studying the occurrence of pixel pairs. We calculate so-called Haralick texture features such as contrast, energy, entropy and homogeneity. The supervised classification method is used for assigning characteristic texture features to different rock classes and assessing the discriminative power of these image features. We use classifiers obtained from training intervals to characterize the entire image data set recovered in ODP hole 1203A. This yields a synthetic lithology profile based on computed texture data. We show that Haralick features accurately classify 89.9% of the training intervals. We obtained misclassification for vesicular basaltic rocks. Hence, further image analysis tools are used to improve the classification reliability. We decompose the 2D image signal by the application of wavelet transformation in order to enhance image objects horizontally, diagonally and vertically. The resulting filtered images are used for further texture analysis. This combined classification based on Haralick features and wavelet transformation improved our classification up to a level of 98%. The application of wavelet transformation increases the consistency between standard logging profiles and texture-derived lithology. Texture analysis of borehole wall images offers the potential to facilitate objective analysis of multiple boreholes with the same lithology.
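A short sketch of the Haralick-feature step is given below using scikit-image: a grey-level co-occurrence matrix is built for a synthetic image patch and the contrast, energy and homogeneity properties are read off, with entropy computed directly from the normalized matrix. The function names graycomatrix/graycoprops follow recent scikit-image releases (older releases spell them greycomatrix/greycoprops), and the patch, distances and angles are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(7)
patch = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)  # stand-in for an image patch

# Grey-level co-occurrence matrix: pixel pairs at distance 1 in four directions
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=64, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "energy", "homogeneity")}
p = glcm.mean(axis=(2, 3))                    # average normalized GLCM over distance/angle
features["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print(features)
```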
A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes
ERIC Educational Resources Information Center
Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.
2008-01-01
Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…
Marinaccio, Christian; Giudice, Giuseppe; Nacchiero, Eleonora; Robusto, Fabio; Opinto, Giuseppina; Lastilla, Gaetano; Maiorano, Eugenio; Ribatti, Domenico
2016-08-01
The presence of interval sentinel lymph nodes in melanoma is documented in several studies, but controversies still exist about the management of these lymph nodes. In this study, an immunohistochemical evaluation of tumor cell proliferation and neo-angiogenesis was performed with the aim of establishing a correlation between these two parameters in positive and negative interval sentinel lymph nodes. This retrospective study reviewed data from 23 patients diagnosed with melanoma. Biopsy specimens of interval sentinel lymph nodes were retrieved, and immunohistochemical reactions on tissue sections were performed using Ki67 as a marker of proliferation and CD31 as a blood vessel marker for the study of angiogenesis. The entire stained tissue sections for each case were digitized using the Aperio Scanscope Cs whole-slide scanning platform and stored as high-resolution images. Image analysis was carried out on three selected fields of equal area using IHC Nuclear and Microvessel analysis algorithms to determine positive Ki67 nuclei and vessel number. Patients were divided into positive and negative interval sentinel lymph node groups, and the positive interval sentinel lymph node group was further divided into micrometastasis and macrometastasis subgroups. The analysis revealed a significant difference between positive and negative interval sentinel lymph nodes in the percentage of Ki67-positive nuclei and mean vessel number, suggestive of increased cellular proliferation and angiogenesis in positive interval sentinel lymph nodes. Further analysis in the interval-positive lymph node group showed a significant difference between the micro- and macrometastasis subgroups in the percentage of Ki67-positive nuclei and mean vessel number. The percentage of Ki67-positive nuclei was increased in the macrometastasis subgroup, while mean vessel number was increased in the micrometastasis subgroup. The results of this study suggest that the correlation between tumor cell proliferation and neo-angiogenesis in interval sentinel lymph nodes in melanoma could be used as a good predictive marker to distinguish interval-positive sentinel lymph nodes with micrometastasis from those with macrometastasis.
Overgaard, Martin; Pedersen, Susanne Møller
2017-10-26
Hyperprolactinemia diagnosis and treatment is often compromised by the presence of biologically inactive and clinically irrelevant higher-molecular-weight complexes of prolactin, macroprolactin. The objective of this study was to evaluate the performance of two macroprolactin screening regimes across commonly used automated immunoassay platforms. Parametric total and monomeric gender-specific reference intervals were determined for six immunoassay methods using female (n=96) and male sera (n=127) from healthy donors. The reference intervals were validated using 27 hyperprolactinemic and macroprolactinemic sera, whose presence of monomeric and macroforms of prolactin were determined using gel filtration chromatography (GFC). Normative data for six prolactin assays included the range of values (2.5th-97.5th percentiles). Validation sera (hyperprolactinemic and macroprolactinemic; n=27) showed higher discordant classification [mean=2.8; 95% confidence interval (CI) 1.2-4.4] for the monomer reference interval method compared to the post-polyethylene glycol (PEG) recovery cutoff method (mean=1.8; 95% CI 0.8-2.8). The two monomer/macroprolactin discrimination methods did not differ significantly (p=0.089). Among macroprolactinemic sera evaluated by both discrimination methods, the Cobas and Architect/Kryptor prolactin assays showed the lowest and the highest number of misclassifications, respectively. Current automated immunoassays for prolactin testing require macroprolactin screening methods based on PEG precipitation in order to discriminate truly from falsely elevated serum prolactin. While the recovery cutoff and monomeric reference interval macroprolactin screening methods demonstrate similar discriminative ability, the latter method also provides the clinician with an easy interpretable monomeric prolactin concentration along with a monomeric reference interval.
[New method of mixed gas infrared spectrum analysis based on SVM].
Bai, Peng; Xie, Wen-Jun; Liu, Jun-Hua
2007-07-01
A new method of infrared spectrum analysis based on the support vector machine (SVM) was proposed for gas mixtures. The kernel function in the SVM was used to map the seriously overlapping absorption spectra into a high-dimensional space while the computations remain in the original space; on this basis a regression calibration model was established and then applied to analyze the concentration of each component gas. It was also shown that the SVM regression calibration model could be used for component recognition of the gas mixture. The method was applied to the analysis of different data samples. Factors that affect the model, such as the scan interval, wavelength range, kernel function and penalty coefficient C, are discussed. Experimental results show that the maximal mean absolute error of the component concentrations is 0.132%, and the component recognition accuracy is higher than 94%. The problems of overlapping absorption spectra, of using the same method for qualitative and quantitative analysis, and of a limited number of training samples were addressed. The method could be used in other infrared spectrum analyses of gas mixtures and has promising theoretical and practical value.
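The kernel-regression idea described above can be illustrated with support vector regression on synthetic, heavily overlapping two-component spectra. This is a generic scikit-learn sketch, not the paper's calibration; the band positions, noise level and SVR hyperparameters are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(11)
wavenumber = np.linspace(0.0, 1.0, 200)

def band(center, width):
    """Gaussian-shaped absorption band on the normalized wavenumber axis."""
    return np.exp(-0.5 * ((wavenumber - center) / width) ** 2)

# Synthetic two-component mixtures with heavily overlapping bands plus noise
conc = rng.uniform(0, 1, size=(300, 2))
spectra = (conc[:, [0]] * band(0.45, 0.08) + conc[:, [1]] * band(0.55, 0.08)
           + 0.01 * rng.standard_normal((300, 200)))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc[:, 0], random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_tr, y_tr)
print("mean absolute error, component 1:", np.abs(model.predict(X_te) - y_te).mean())
```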
NASA Astrophysics Data System (ADS)
Shen, Wei; Li, Dongsheng; Zhang, Shuaifang; Ou, Jinping
2017-07-01
This paper presents a hybrid method that combines the B-spline wavelet on the interval (BSWI) finite element method and spectral analysis based on fast Fourier transform (FFT) to study wave propagation in One-Dimensional (1D) structures. BSWI scaling functions are utilized to approximate the theoretical wave solution in the spatial domain and construct a high-accuracy dynamic stiffness matrix. Dynamic reduction on element level is applied to eliminate the interior degrees of freedom of BSWI elements and substantially reduce the size of the system matrix. The dynamic equations of the system are then transformed and solved in the frequency domain through FFT-based spectral analysis which is especially suitable for parallel computation. A comparative analysis of four different finite element methods is conducted to demonstrate the validity and efficiency of the proposed method when utilized in high-frequency wave problems. Other numerical examples are utilized to simulate the influence of crack and delamination on wave propagation in 1D rods and beams. Finally, the errors caused by FFT and their corresponding solutions are presented.
Integrating Behavioral Health in Primary Care Using Lean Workflow Analysis: A Case Study.
van Eeghen, Constance; Littenberg, Benjamin; Holman, Melissa D; Kessler, Rodger
2016-01-01
Primary care offices are integrating behavioral health (BH) clinicians into their practices. Implementing such a change is complex, difficult, and time consuming. Lean workflow analysis may be an efficient, effective, and acceptable method for use during integration. The objectives of this study were to observe BH integration into primary care and to measure its impact. This was a prospective, mixed-methods case study in a primary care practice that served 8,426 patients over a 17-month period, with 652 patients referred to BH services. Secondary measures included primary care visits resulting in BH referrals, referrals resulting in scheduled appointments, time from referral to the scheduled appointment, and time from the referral to the first visit. Providers and staff were surveyed on the Lean method. Referrals increased from 23 to 37 per 1000 visits (P < .001). Referrals resulted in more scheduled (60% to 74%; P < .001) and arrived visits (44% to 53%; P = .025). Time from referral to the first scheduled visit decreased (hazard ratio, 1.60; 95% confidence interval, 1.37-1.88) as did time to first arrived visit (hazard ratio, 1.36; 95% confidence interval, 1.14-1.62). Survey responses and comments were positive. This pilot integration of BH showed significant improvements in treatment initiation and other measures. Strengths of Lean analysis included workflow improvement, system perspective, and project success. Further evaluation is indicated. © Copyright 2016 by the American Board of Family Medicine.
Equivalent statistics and data interpretation.
Francis, Gregory
2017-08-01
Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
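The equivalence described above is easy to make concrete for the two-sample t test: given t and the sample sizes, the two-sided p value and Cohen's d follow deterministically (and d plus the sample sizes likewise determine a default Bayes factor, not computed here). A minimal sketch, with illustrative numbers:

```python
import numpy as np
from scipy import stats

def equivalent_statistics(t, n1, n2):
    """Two-sample t test: map the t statistic and sample sizes to the two-sided
    p value and Cohen's d; all three summaries carry the same information."""
    df = n1 + n2 - 2
    p = 2 * stats.t.sf(abs(t), df)
    d = t * np.sqrt(1 / n1 + 1 / n2)
    return p, d

t, n1, n2 = 2.30, 40, 40
p, d = equivalent_statistics(t, n1, n2)
print(f"t = {t}, p = {p:.4f}, Cohen's d = {d:.3f}")
```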
Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.
Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo
2017-12-01
The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgements of DMs in the typical analytic hierarchy process could be consistent. However, since the uncertainty in articulating the opinions of DMs is unavoidable, the interval number judgements are associated with the limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of interval multiplicative reciprocal comparison matrices, it is observed that the interval number judgements are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed. The exchange method is designed to generate all the permutations. A novel method of determining the interval weight vector is proposed under the consideration of randomness in comparing alternatives, and a vector of interval weights is determined. A new algorithm of solving decision making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples are carried out to illustrate the proposed approach and offer a comparison with the methods available in the literature.
NASA Astrophysics Data System (ADS)
Liu, Lisheng; Zhang, Heyong; Guo, Jin; Zhao, Shuai; Wang, Tingfeng
2012-08-01
In this paper, we report a mathematical derivation of the probability density function (PDF) of the time interval between two successive photoelectrons of a laser heterodyne signal, and confirm the theoretical result by both numerical simulation and experiment. The PDF curve of the beat signal displays a series of fluctuations whose period and amplitude are determined by the beat frequency and the mixing efficiency, respectively. The beat frequency can accordingly be derived from the frequency of the fluctuations once the PDF curve is measured. This frequency measurement method still works when we detect an 80 MHz beat signal at a photon count rate of 8 Mcps (counts per second), a condition under which the traditional Fast Fourier Transform (FFT) algorithm hardly recovers the correct peak of the beat frequency; this indicates an advantage of the PDF method.
Cardiopulmonary resuscitation quality: Widespread variation in data intervals used for analysis.
Talikowska, Milena; Tohira, Hideo; Bailey, Paul; Finn, Judith
2016-05-01
There is a growing body of evidence for a relationship between CPR quality and survival in cardiac arrest patients. We sought to describe the characteristics of the analysis intervals used across studies. Relevant papers were selected as described in our recent systematic review. From these papers we collected information about (1) the time interval used for analysis; (2) the event that marked the beginning of the analysis interval; and (3) the minimum amount of CPR quality data required for a case to be included in the analysed cohort. We then compared these data across papers. Twenty-one studies reported on the association between CPR quality and cardiac arrest patient survival. In two-thirds of studies, data from the start of the resuscitation episode were analysed, in particular the first 5 min. Commencement of the analysis interval was marked by various events, including ECG pad placement and first chest compression. Nine studies specified a minimum amount of data that had to have been collected for an individual case to be included in the analysis, most commonly 1 min of data. The use of shorter intervals allowed the inclusion of more cases, because cases without a complete dataset could still contribute. To facilitate comparisons across studies, a standardised definition of the data analysis interval should be developed; one that maximises the number of cases available without compromising the data's representativeness of the resuscitation effort. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Perry, Charles A.
2008-01-01
Precipitation-frequency and discharge-frequency relations for small drainage basins with areas less than 32 square miles in Kansas were evaluated to reduce the uncertainty of discharge-frequency estimates. Gaged-discharge records were used to develop discharge-frequency equations for the ratio of discharge to drainage area (Q/A) using data from basins with variable soil permeability, channel slope, and mean annual precipitation. Soil permeability and mean annual precipitation are the dominant basin characteristics in the multiple linear regression analyses. In addition, 28 discharge measurements at ungaged sites by indirect surveying methods and by velocity meters also were used in this analysis to relate precipitation-recurrence interval to discharge-recurrence interval. The precipitation-recurrence interval for each of these discharge measurements was estimated from weather-radar estimates of precipitation and from nearby raingages. The time of concentration for each basin at each of the ungaged sites was computed and used to determine the precipitation-recurrence interval based on precipitation depth and duration. The discharge/drainage area (Q/A) value for each event was then assigned to that precipitation-recurrence interval. The relation between the ratio of discharge to drainage area (Q/A) and precipitation-recurrence interval for all 28 measured events resulted in a correlation coefficient of 0.79. Using only basins less than 5.4 mi2, the correlation decreases to 0.74; however, for basins greater than 5.4 and less than 32 mi2 the relation improves to a correlation coefficient of 0.95. There were a sufficient number of discharge and radar-measured precipitation events for both the 5-year (8 events) and the 100-year (11 events) recurrence intervals to examine the effect of basin characteristics on the Q/A values for basins less than 32 mi2. At the 5-year precipitation-/discharge-recurrence interval, channel slope was a significant predictor (r=0.99) of Q/A. Permeability (r=0.68) also had a significant effect on Q/A values for the 5-year recurrence interval. At the 100-year recurrence interval, permeability, channel slope, and mean annual precipitation did not have a significant effect on Q/A; however, time of concentration was a significant factor in determining Q/A for the 100-year events, with greater times of concentration resulting in lower Q/A values. Additional high-recurrence-interval (5-, 10-, 25-, 50-, and 100-year) precipitation/discharge data are needed to confirm the relations suggested above. Discharge data with attendant basin-wide precipitation data from precipitation-radar estimates provide a unique opportunity to study the effects of basin characteristics on the relation between precipitation-recurrence interval and discharge-recurrence interval. Discharge-frequency values from the Q/A equations, the rational method, and the Kansas discharge-frequency equations (KFFE) were compared to 28 measured weather-radar precipitation-/discharge-frequency values. The association between precipitation frequency from weather-radar estimates and the frequency of the resulting discharge was shown in these comparisons. The measured and Q/A-equation-computed discharges displayed the best equality from low to high discharges of the three methods; here the slope of the line was nearly 1:1 (y = 0.9844x^0.9677).
Comparisons with the rational method produced a slope greater than 1:1 (y = 0.0722x^1.235), and the KFFE equations produced a slope less than 1:1 (y = 5.9103x^0.7475). The Q/A equation standard error of prediction averaged 0.1346 log units for the 5.4-to-32-square-mile group and 0.0944 log units for the less-than-5.4-square-mile group. The KFFE standard error averaged 0.2107 log units for the less-than-30-square-mile equations. Using the Q/A equations to determine discharge-frequency values for ungaged sites thus appears to be a good alternative to the other two methods because of this smaller standard error of prediction.
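The reported comparisons are power-law relations of the form y = a·x^b, which are typically fitted by ordinary least squares in log space. A small sketch with illustrative paired values (not the study's data):

```python
import numpy as np
from scipy import stats

# Illustrative paired values: discharge per unit area from an equation (x)
# versus the corresponding measured value (y), in consistent units
x = np.array([12.0, 35.0, 80.0, 150.0, 320.0, 610.0, 900.0])
y = np.array([10.5, 38.0, 74.0, 160.0, 300.0, 640.0, 870.0])

fit = stats.linregress(np.log10(x), np.log10(y))
a, b = 10 ** fit.intercept, fit.slope
print(f"fitted relation: y = {a:.4f} * x^{b:.4f}  (slope near 1 means near 1:1 agreement)")
```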
A contracting-interval program for the Danilewski method. Ph.D. Thesis - Va. Univ.
NASA Technical Reports Server (NTRS)
Harris, J. D.
1971-01-01
The concept of contracting-interval programs is applied to finding the eigenvalues of a matrix. The development is a three-step process in which (1) a program is developed for the reduction of a matrix to Hessenberg form, (2) a program is developed for the reduction of a Hessenberg matrix to colleague form, and (3) the characteristic polynomial with interval coefficients is readily obtained from the interval of colleague matrices. This interval polynomial is then factored into quadratic factors so that the eigenvalues may be obtained. To develop a contracting-interval program for factoring this polynomial with interval coefficients it is necessary to have an iteration method which converges even in the presence of controlled rounding errors. A theorem is stated giving sufficient conditions for the convergence of Newton's method when both the function and its Jacobian cannot be evaluated exactly but errors can be made proportional to the square of the norm of the difference between the previous two iterates. This theorem is applied to prove the convergence of the generalization of the Newton-Bairstow method that is used to obtain quadratic factors of the characteristic polynomial.
Li, Yongping; Huang, Guohe
2009-03-01
In this study, a dynamic analysis approach based on an inexact multistage integer programming (IMIP) model is developed for supporting municipal solid waste (MSW) management under uncertainty. Techniques of interval-parameter programming and multistage stochastic programming are incorporated within an integer-programming framework. The developed IMIP can deal with uncertainties expressed as probability distributions and interval numbers, and can reflect the dynamics in terms of decisions for waste-flow allocation and facility-capacity expansion over a multistage context. Moreover, the IMIP can be used for analyzing various policy scenarios that are associated with different levels of economic consequences. The developed method is applied to a case study of long-term waste-management planning. The results indicate that reasonable solutions have been generated for binary and continuous variables. They can help generate desired decisions of system-capacity expansion and waste-flow allocation with a minimized system cost and maximized system reliability.
Estimation of treatment effect in a subpopulation: An empirical Bayes approach.
Shen, Changyu; Li, Xiaochun; Jeong, Jaesik
2016-01-01
It is well recognized that the benefit of a medical intervention may not be distributed evenly in the target population due to patient heterogeneity, and conclusions based on conventional randomized clinical trials may not apply to every person. Given the increasing cost of randomized trials and difficulties in recruiting patients, there is a strong need to develop analytical approaches to estimate treatment effect in subpopulations. In particular, due to limited sample size for subpopulations and the need for multiple comparisons, standard analysis tends to yield wide confidence intervals of the treatment effect that are often noninformative. We propose an empirical Bayes approach to combine both information embedded in a target subpopulation and information from other subjects to construct confidence intervals of the treatment effect. The method is appealing in its simplicity and tangibility in characterizing the uncertainty about the true treatment effect. Simulation studies and a real data analysis are presented.
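A generic illustration of the shrinkage idea behind such empirical Bayes estimates is sketched below: each subgroup estimate is pulled toward the overall mean in proportion to its sampling variance, with the between-subgroup variance estimated from the data by a crude method of moments. This is an assumed normal-normal sketch, not the authors' estimator, and all numbers are illustrative.

```python
import numpy as np

# Illustrative subgroup treatment-effect estimates and their squared standard errors
effect = np.array([0.60, 0.05, 0.80, -0.10, 0.30])
var = np.array([0.04, 0.04, 0.09, 0.04, 0.02])

overall = np.average(effect, weights=1.0 / var)
# Crude method-of-moments estimate of the between-subgroup variance (floored at zero)
tau2 = max(0.0, np.var(effect, ddof=1) - var.mean())

# Shrinkage: weight each subgroup estimate by its reliability tau2 / (tau2 + var)
shrink = tau2 / (tau2 + var)
eb_effect = overall + shrink * (effect - overall)
eb_sd = np.sqrt(shrink * var)   # approximate posterior SD, ignoring uncertainty in tau2
for raw, eb, sd in zip(effect, eb_effect, eb_sd):
    print(f"raw {raw:+.2f} -> empirical Bayes {eb:+.2f} (posterior SD {sd:.2f})")
```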
Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel
2011-02-20
A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.
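A rough sketch of the calibration-plus-bootstrap idea follows: a PLS model is fitted to synthetic spectra, its residuals are resampled to build pseudo-responses, and the spread of refitted predictions gives a percentile confidence interval for a new prediction. The spectra, component count and number of bootstrap replicates are assumptions, and scikit-learn's PLSRegression stands in for whatever software the authors used.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n_samples, n_wavelengths = 80, 150
X = rng.standard_normal((n_samples, n_wavelengths))        # stand-in NIR spectra
coef = np.zeros(n_wavelengths)
coef[40:60] = 0.3
y = X @ coef + 0.2 * rng.standard_normal(n_samples)        # stand-in reference potencies

pls = PLSRegression(n_components=5).fit(X, y)
fitted = pls.predict(X).ravel()
residuals = y - fitted

# Bootstrap of PLS residuals: refit on resampled pseudo-responses, collect predictions
x_new = X[:1]                                              # spectrum of a "new" tablet
boot = []
for _ in range(500):
    y_boot = fitted + rng.choice(residuals, size=n_samples, replace=True)
    boot.append(PLSRegression(n_components=5).fit(X, y_boot).predict(x_new).item())
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"prediction {pls.predict(x_new).item():.3f}, 95% bootstrap interval [{lo:.3f}, {hi:.3f}]")
```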
Xie, Y L; Li, Y P; Huang, G H; Li, Y F; Chen, L R
2011-04-15
In this study, an inexact-chance-constrained water quality management (ICC-WQM) model is developed for planning regional environmental management under uncertainty. This method is based on an integration of interval linear programming (ILP) and chance-constrained programming (CCP) techniques. ICC-WQM allows uncertainties presented as both probability distributions and interval values to be incorporated within a general optimization framework. Complexities in environmental management systems can be systematically reflected, thus applicability of the modeling process can be highly enhanced. The developed method is applied to planning chemical-industry development in Binhai New Area of Tianjin, China. Interval solutions associated with different risk levels of constraint violation have been obtained. They can be used for generating decision alternatives and thus help decision makers identify desired policies under various system-reliability constraints of water environmental capacity of pollutant. Tradeoffs between system benefits and constraint-violation risks can also be tackled. They are helpful for supporting (a) decision of wastewater discharge and government investment, (b) formulation of local policies regarding water consumption, economic development and industry structure, and (c) analysis of interactions among economic benefits, system reliability and pollutant discharges. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, C. Z.; Huang, G. H.; Yan, X. P.; Cai, Y. P.; Li, Y. P.
2010-05-01
Large crowds are increasingly common at political, social, economic, cultural and sports events in urban areas. This has led to attention on the management of evacuations under such situations. In this study, we optimise an approximation method for vehicle allocation and route planning in case of an evacuation. This method, based on an interval-parameter multi-objective optimisation model, has potential for use in a flexible decision support system for evacuation management. The modeling solutions are obtained by sequentially solving two sub-models corresponding to lower- and upper-bounds for the desired objective function value. The interval solutions are feasible and stable in the given decision space, and this may reduce the negative effects of uncertainty, thereby improving decision makers' estimates under different conditions. The resulting model can be used for a systematic analysis of the complex relationships among evacuation time, cost and environmental considerations. The results of a case study used to validate the proposed model show that the model does generate useful solutions for planning evacuation management and practices. Furthermore, these results are useful for evacuation planners, not only in making vehicle allocation decisions but also for providing insight into the tradeoffs among evacuation time, environmental considerations and economic objectives.
Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B
2006-08-01
Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans to increase confidence in objectivity would be a benefit. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method by using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.
Deep Learning for Classification of Colorectal Polyps on Whole-slide Images.
Korbar, Bruno; Olofson, Andrea M; Miraflor, Allen P; Nicka, Catherine M; Suriawinata, Matthew A; Torresani, Lorenzo; Suriawinata, Arief A; Hassanpour, Saeed
2017-01-01
Histopathological characterization of colorectal polyps is critical for determining the risk of colorectal cancer and future rates of surveillance for patients. However, this characterization is a challenging task and suffers from significant inter- and intra-observer variability. We built an automatic image analysis method that can accurately classify different types of colorectal polyps on whole-slide images to help pathologists with this characterization and diagnosis. Our method is based on deep-learning techniques, which rely on numerous levels of abstraction for data representation and have shown state-of-the-art results for various image analysis tasks. Our method covers five common types of polyps (i.e., hyperplastic, sessile serrated, traditional serrated, tubular, and tubulovillous/villous) that are included in the US Multisociety Task Force guidelines for colorectal cancer risk assessment and surveillance. We developed multiple deep-learning approaches by leveraging a dataset of 2074 crop images, which were annotated by multiple domain expert pathologists as reference standards. We evaluated our method on an independent test set of 239 whole-slide images and measured standard machine-learning evaluation metrics of accuracy, precision, recall, and F1 score and their 95% confidence intervals. Our evaluation shows that our method with residual network architecture achieves the best performance for classification of colorectal polyps on whole-slide images (overall accuracy: 93.0%, 95% confidence interval: 89.0%-95.9%). Our method can reduce the cognitive burden on pathologists and improve their efficacy in histopathological characterization of colorectal polyps and in subsequent risk assessment and follow-up recommendations.
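As a quick check of the kind of interval reported above, the following sketch computes a Wilson score 95% confidence interval for an accuracy of 93.0% on 239 whole-slide images; the abstract does not state which CI method the authors used, so the Wilson interval is an assumption.

```python
# Wilson score interval for a binomial proportion (classification accuracy).
import math

def wilson_ci(p_hat, n, z=1.96):
    """Wilson score 95% interval for an observed proportion p_hat out of n."""
    denom = 1 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

low, high = wilson_ci(0.93, 239)
print(f"93.0% accuracy on n=239: 95% CI ~ {100 * low:.1f}%-{100 * high:.1f}%")
# Close to, though not identical with, the 89.0%-95.9% reported in the abstract.
```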
Highly comparative time-series analysis: the empirical structure of time series and their methods.
Fulcher, Ben D; Little, Max A; Jones, Nick S
2013-06-06
The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
Frequency analysis via the method of moment functionals
NASA Technical Reports Server (NTRS)
Pearson, A. E.; Pan, J. Q.
1990-01-01
Several variants are presented of a linear-in-parameters least squares formulation for determining the transfer function of a stable linear system at specified frequencies given a finite set of Fourier series coefficients calculated from transient nonstationary input-output data. The basis of the technique is Shinbrot's classical method of moment functionals using complex Fourier based modulating functions to convert a differential equation model on a finite time interval into an algebraic equation which depends linearly on frequency-related parameters.
The use of rational functions in numerical quadrature
NASA Astrophysics Data System (ADS)
Gautschi, Walter
2001-08-01
Quadrature problems involving functions that have poles outside the interval of integration can profitably be solved by methods that are exact not only for polynomials of appropriate degree, but also for rational functions having the same (or the most important) poles as the function to be integrated. Constructive and computational tools for accomplishing this are described and illustrated in a number of quadrature contexts. The superiority of such rational/polynomial methods is shown by an analysis of the remainder term and documented by numerical examples.
Relaxation Estimation of RMSD in Molecular Dynamics Immunosimulations
Schreiner, Wolfgang; Karch, Rudolf; Knapp, Bernhard; Ilieva, Nevena
2012-01-01
Molecular dynamics simulations have to be sufficiently long to draw reliable conclusions. However, no method exists to prove that a simulation has converged. We suggest the method of “lagged RMSD-analysis” as a tool to judge if an MD simulation has not yet run long enough. The analysis is based on RMSD values between pairs of configurations separated by variable time intervals Δt. Unless RMSD(Δt) has reached a stationary shape, the simulation has not yet converged. PMID:23019425
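A minimal sketch of the lagged RMSD idea follows: the mean RMSD between configuration pairs separated by a lag of dt frames is computed for increasing lags. The random-walk trajectory and the omission of structural superposition are simplifying assumptions.

```python
# Lagged RMSD: mean RMSD between frame pairs (t, t+dt), as a function of dt.
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_atoms = 1000, 50
traj = np.cumsum(rng.normal(scale=0.01, size=(n_frames, n_atoms, 3)), axis=0)  # toy MD coordinates

def lagged_rmsd(traj, dt):
    """Mean RMSD over all frame pairs separated by dt frames (no superposition)."""
    diff = traj[dt:] - traj[:-dt]                         # atomic displacements
    rmsd = np.sqrt((diff ** 2).sum(axis=2).mean(axis=1))  # per-pair RMSD
    return rmsd.mean()

for dt in (1, 10, 50, 100, 250, 500):
    print(dt, round(lagged_rmsd(traj, dt), 3))
# A curve that keeps rising with dt instead of flattening suggests the run has not converged.
```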
Araki, Tooru; Kodera, Aya; Kitada, Kunimi; Fujiwara, Michimasa; Muraoka, Michiko; Abe, Yoshiko; Ikeda, Masanori; Tsukahara, Hirokazu
2018-04-01
Objective The present study was performed to identify factors associated with a Bacille Calmette-Guérin (BCG) inoculation site change in patients with Kawasaki disease (KD). Methods Among patients who had received BCG vaccination and treatment for KD at our hospital from 2005 through 2016, 177 patients born in 2005 through 2016 were enrolled. The patients were divided into those with (n = 83, change group) and without (n = 94, no-change group) a BCG site change, and the patient demographics, clinical severity, blood examination results, and echocardiographic findings were compared between the two groups. Results The change group was younger at onset and had a shorter interval from vaccination to onset. A BCG site change was observed in patients who developed KD symptoms from 31 to 806 days after BCG vaccination. Multivariate analysis showed that the interval from vaccination was closely associated with the BCG site change (hazard ratio = 0.995, 95% confidence interval = 0.993-0.997). Conclusion A BCG site change in patients with KD is most closely associated with the interval from BCG vaccination to onset.
Response of Autonomic Nervous System to Body Positions:
NASA Astrophysics Data System (ADS)
Xu, Aiguo; Gonnella, G.; Federici, A.; Stramaglia, S.; Simone, F.; Zenzola, A.; Santostasi, R.
Two mathematical methods, the Fourier and wavelet transforms, were used to study the short-term cardiovascular control system. Time series lasting 6 minutes, extracted from the electrocardiogram and arterial blood pressure, were analyzed in the supine position (SUP), during the first (HD1) and second (HD2) parts of 90° head-down tilt, and during recovery (REC). The wavelet transform was performed using the Haar function of period T=2^j (j=1,2,...,6) to obtain wavelet coefficients. Power spectral components were analyzed within three bands, VLF (0.003-0.04), LF (0.04-0.15) and HF (0.15-0.4), with the frequency unit cycle/interval. The wavelet transform demonstrated a higher discrimination among all analyzed periods than the Fourier transform. For the Fourier analysis, the LF of R-R intervals and the VLF of systolic blood pressure showed more evident differences between body positions. For the wavelet analysis, the systolic blood pressures showed much more evident differences than the R-R intervals. This study suggests a difference in the response of the vessels and the heart to different body positions. The partial dissociation between VLF and LF results is a physiologically relevant finding of this work.
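An illustrative sketch of the Haar decomposition step follows (not the authors' exact pipeline): the energy carried by each dyadic scale 2^j, j = 1..6, of an R-R interval series, using the PyWavelets package. The toy series and the mapping of scales onto frequency bands are assumptions.

```python
# Haar wavelet decomposition of an R-R interval series and per-scale energy.
import numpy as np
import pywt  # PyWavelets, assumed to be installed

rng = np.random.default_rng(2)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(512)) + 0.02 * rng.normal(size=512)

coeffs = pywt.wavedec(rr, 'haar', level=6)   # returns [cA6, cD6, cD5, ..., cD1]
details = coeffs[1:][::-1]                   # reorder to cD1 ... cD6 (j = 1 ... 6)
for j, d in enumerate(details, start=1):
    energy = float(np.sum(d ** 2))
    # Detail level j covers roughly 1/2**(j+1) .. 1/2**j cycles per interval.
    print(f"scale 2^{j}: band ~({1/2**(j+1):.4f}, {1/2**j:.4f}) cycle/interval, energy {energy:.4f}")
```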
Seo, Eun Hee; Kim, Tae Oh; Park, Min Jae; Joo, Hee Rin; Heo, Nae Yun; Park, Jongha; Park, Seung Ha; Yang, Sung Yeon; Moon, Young Soo
2012-03-01
Several factors influence bowel preparation quality. Recent studies have indicated that the time interval between bowel preparation and the start of colonoscopy is also important in determining bowel preparation quality. To evaluate the influence of the preparation-to-colonoscopy (PC) interval (the interval of time between the last polyethylene glycol dose ingestion and the start of the colonoscopy) on bowel preparation quality in the split-dose method for colonoscopy. Prospective observational study. University medical center. A total of 366 consecutive outpatients undergoing colonoscopy. Split-dose bowel preparation and colonoscopy. The quality of bowel preparation was assessed by using the Ottawa Bowel Preparation Scale according to the PC interval, and other factors that might influence bowel preparation quality were analyzed. Colonoscopies with a PC interval of 3 to 5 hours had the best bowel preparation quality score in the whole, right, mid, and rectosigmoid colon according to the Ottawa Bowel Preparation Scale. In multivariate analysis, the PC interval (odds ratio [OR] 1.85; 95% CI, 1.18-2.86), the amount of PEG ingested (OR 4.34; 95% CI, 1.08-16.66), and compliance with diet instructions (OR 2.22; 95% CI, 1.33-3.70) were significant contributors to satisfactory bowel preparation. Nonrandomized controlled, single-center trial. The optimal time interval between the last dose of the agent and the start of colonoscopy is one of the important factors to determine satisfactory bowel preparation quality in split-dose polyethylene glycol bowel preparation. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Krotkus, Arunas; Molis, Gediminas
2010-10-01
The SDA (Spectral Dynamics Analysis) method - the analysis of THz spectrum dynamics across the THz frequency range - is used for the detection and identification of substances with similar THz Fourier spectra (such substances are usually called simulants) in a two- or three-component medium. This method allows us to obtain the unique 2D THz signature of a substance - the spectrogram - and to analyze the dynamics of many spectral lines of the THz signal, passed through or reflected from the substance, from one set of integral measurements simultaneously, even when the measurements are made on short time intervals (less than 20 ps). For long intervals (100 ps and more) the SDA method gives an opportunity to determine the relaxation time of excited energy levels of the molecules. This information gives a new opportunity to identify the substance, because the relaxation time differs between molecules of different substances. The restoration of the signal from its integral values is based on the SVD (Singular Value Decomposition) technique. We consider three examples of PTFE mixed with a small content of L-Tartaric Acid and Sucrose in pellets. The concentration of these substances is about 5%-10%. Our investigations show that the spectrograms and the dynamics of spectral lines of a THz pulse passed through pure PTFE differ from the spectrograms of the compound medium containing PTFE and L-Tartaric Acid or Sucrose or both substances together. So, it is possible to detect the presence of a small amount of additional substances in the sample even when their THz Fourier spectra are practically identical. Therefore, the SDA method can be very effective for defense and security applications and for quality control in the pharmaceutical industry. We also show that, in the case of substance simulants, the use of auto- and correlation functions has much worse resolvability in comparison with the SDA method.
Fetterhoff, Dustin; Opris, Ioan; Simpson, Sean L.; Deadwyler, Sam A.; Hampson, Robert E.; Kraft, Robert A.
2014-01-01
Background Multifractal analysis quantifies the time-scale-invariant properties in data by describing the structure of variability over time. By applying this analysis to hippocampal interspike interval sequences recorded during performance of a working memory task, a measure of long-range temporal correlations and multifractal dynamics can reveal single neuron correlates of information processing. New method Wavelet leaders-based multifractal analysis (WLMA) was applied to hippocampal interspike intervals recorded during a working memory task. WLMA can be used to identify neurons likely to exhibit information processing relevant to operation of brain–computer interfaces and nonlinear neuronal models. Results Neurons involved in memory processing (“Functional Cell Types” or FCTs) showed a greater degree of multifractal firing properties than neurons without task-relevant firing characteristics. In addition, previously unidentified FCTs were revealed because multifractal analysis suggested further functional classification. The cannabinoid-type 1 receptor partial agonist, tetrahydrocannabinol (THC), selectively reduced multifractal dynamics in FCT neurons compared to non-FCT neurons. Comparison with existing methods WLMA is an objective tool for quantifying the memory-correlated complexity represented by FCTs that reveals additional information compared to classification of FCTs using traditional z-scores to identify neuronal correlates of behavioral events. Conclusion z-Score-based FCT classification provides limited information about the dynamical range of neuronal activity characterized by WLMA. Increased complexity, as measured with multifractal analysis, may be a marker of functional involvement in memory processing. The level of multifractal attributes can be used to differentially emphasize neural signals to improve computational models and algorithms underlying brain–computer interfaces. PMID:25086297
1989-01-01
intervals over a 60 minute period at flow rates of 100, 250, 500, 750, and 1,000 ml/hr. Analysis of variance showed a highly significant group effect with a...significant difference between all groups except Group 3 and Group 4. Analysis of variance also showed a highly significant flow rate effect on...as effective as the conventional method of delivering warmed fluids. Also, within the range of flow rates studied, faster flow rates tended to yield a
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
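The sketch below shows the kind of computation an Excel NORMINV/NORMDIST approach performs, assuming Gaussian reference limits at ±1.96 SD and expressing bias and analytical imprecision in units of the between-subject SD; how these map onto the 4.4% figure reported above is only an assumption.

```python
# Fraction of reference individuals falling outside the original reference limits
# for a given normalized bias and imprecision inflation.
from scipy.stats import norm

def fraction_outside(bias, imprecision_ratio):
    lower, upper = -1.96, 1.96                       # Gaussian reference limits (assumed)
    sd = (1 + imprecision_ratio ** 2) ** 0.5         # combined SD after adding analytical variation
    below = norm.cdf((lower - bias) / sd)
    above = 1 - norm.cdf((upper - bias) / sd)
    return below + above

for bias, imp in [(0.0, 0.0), (0.25, 0.0), (0.0, 0.25), (0.2, 0.2)]:
    print(f"bias={bias:.2f}, imprecision={imp:.2f} -> {100 * fraction_outside(bias, imp):.2f}% outside")
# The paper derives 4.4% as the constant acceptable percentage; the exact way bias and
# imprecision enter that criterion is the paper's, and this mapping is only illustrative.
```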
2014-01-01
Background Recurrent events data analysis is common in biomedicine. Literature review indicates that most statistical models used for such data are often based on time to the first event or consider events within a subject as independent. Even when taking into account the non-independence of recurrent events within subjects, data analyses are mostly done with continuous risk interval models, which may not be appropriate for treatments with sustained effects (e.g., drug treatments of malaria patients). Furthermore, results can be biased in cases of a confounding factor implying different risk exposure, e.g. in malaria transmission: if subjects are located at zones showing different environmental factors implying different risk exposures. Methods This work aimed to compare four different approaches by analysing recurrent malaria episodes from a clinical trial assessing the effectiveness of three malaria treatments [artesunate + amodiaquine (AS + AQ), artesunate + sulphadoxine-pyrimethamine (AS + SP) or artemether-lumefantrine (AL)], with continuous and discontinuous risk intervals: Andersen-Gill counting process (AG-CP), Prentice-Williams-Peterson counting process (PWP-CP), a shared gamma frailty model, and Generalized Estimating Equations model (GEE) using Poisson distribution. Simulations were also made to analyse the impact of the addition of a confounding factor on malaria recurrent episodes. Results Using the discontinuous interval analysis, AG-CP and Shared gamma frailty models provided similar estimations of treatment effect on malaria recurrent episodes when adjusted on age category. The patients had significant decreased risk of recurrent malaria episodes when treated with AS + AQ or AS + SP arms compared to AL arm; Relative Risks were: 0.75 (95% CI (Confidence Interval): 0.62-0.89), 0.74 (95% CI: 0.62-0.88) respectively for AG-CP model and 0.76 (95% CI: 0.64-0.89), 0.74 (95% CI: 0.62-0.87) for the Shared gamma frailty model. With both discontinuous and continuous risk intervals analysis, GEE Poisson distribution models failed to detect the effect of AS + AQ arm compared to AL arm when adjusted for age category. The discontinuous risk interval analysis was found to be the more appropriate approach. Conclusion Repeated event in infectious diseases such as malaria can be analysed with appropriate existing models that account for the correlation between multiple events within subjects with common statistical software packages, after properly setting up the data structures. PMID:25073652
Circulating tocopherols and risk of coronary artery disease: A systematic review and meta-analysis.
Li, Guangxiao; Li, Ying; Chen, Xin; Sun, Hao; Hou, Xiaowen; Shi, Jingpu
2016-05-01
Circulating levels of tocopherols have been suggested to be associated with the risk of coronary artery disease. However, the results from previous studies remain controversial. Therefore, we conducted a meta-analysis based on observational studies to evaluate the association between circulating tocopherols and coronary artery disease risk for the first time. Meta-analysis. PubMed, Embase and Cochrane databases were searched to retrieve articles published between January 1995 and May 2015. Articles were included if they provided sufficient information to calculate the weighted mean difference and its corresponding 95% confidence interval. The circulating level of total tocopherols was significantly lower in coronary artery disease patients than in controls (weighted mean difference -4.33 μmol/l, 95% confidence interval -6.74 to -1.91, P < 0.01). However, circulating α-tocopherol alone was not significantly associated with coronary artery disease risk. Results from subgroup analyses showed that a lower level of circulating total tocopherols was associated with higher coronary artery disease risk only in studies with a higher sex ratio in cases (<2, weighted mean difference -0.07 μmol/l, 95% confidence interval -1.15 to 1.00, P = 0.90; ≥ 2, weighted mean difference -6.00 μmol/l, 95% confidence interval -9.76 to -2.22, P < 0.01). Similarly, a lower level of circulating total tocopherols was associated with early onset coronary artery disease rather than late onset coronary artery disease (<60 years, weighted mean difference -5.40 μmol/l, 95% confidence interval -9.22 to -1.57, P < 0.01; ≥ 60 years, weighted mean difference -1.37 μmol/l, 95% confidence interval -3.48 to 0.74, P = 0.20). We also found some discrepancies in circulating total tocopherols when the studies were stratified by matching status and assay methods. Our findings suggest that a deficiency in circulating total tocopherols might be associated with higher coronary artery disease risk, whereas circulating α-tocopherol alone showed no such association. Further prospective studies are warranted to confirm our findings. © The European Society of Cardiology 2015.
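For readers unfamiliar with the pooling step, the following sketch shows a fixed-effect inverse-variance meta-analysis of weighted mean differences; the three studies are made up, and only the method, not the numbers, reflects the abstract (a random-effects model would be used in practice when heterogeneity is present).

```python
# Fixed-effect inverse-variance pooling of weighted mean differences (WMD).
import numpy as np
from scipy.stats import norm

md = np.array([-5.2, -3.1, -4.8])   # per-study mean differences (cases - controls), umol/L
se = np.array([1.9, 1.2, 2.4])      # their standard errors

w = 1 / se ** 2                      # inverse-variance weights
pooled = np.sum(w * md) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
p = 2 * (1 - norm.cdf(abs(pooled / pooled_se)))
print(f"pooled WMD {pooled:.2f} umol/L, 95% CI ({ci_low:.2f}, {ci_high:.2f}), P = {p:.3f}")
```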
Safikhani, Zhaleh; Sadeghi, Mehdi; Pezeshk, Hamid; Eslahchi, Changiz
2013-01-01
Recent advances in the sequencing technologies have provided a handful of RNA-seq datasets for transcriptome analysis. However, reconstruction of full-length isoforms and estimation of the expression level of transcripts with a low cost are challenging tasks. We propose a novel de novo method named SSP that incorporates interval integer linear programming to resolve alternatively spliced isoforms and reconstruct the whole transcriptome from short reads. Experimental results show that SSP is fast and precise in determining different alternatively spliced isoforms along with the estimation of reconstructed transcript abundances. The SSP software package is available at http://www.bioinf.cs.ipm.ir/software/ssp. © 2013.
Song, Qiankun; Yu, Qinqin; Zhao, Zhenjiang; Liu, Yurong; Alsaadi, Fuad E
2018-07-01
In this paper, the boundedness and robust stability of a class of delayed complex-valued neural networks with interval parameter uncertainties are investigated. By using the homomorphic mapping theorem, the Lyapunov method and inequality techniques, a sufficient condition guaranteeing the boundedness of the networks and the existence, uniqueness and global robust stability of the equilibrium point is derived for the considered uncertain neural networks. The obtained robust stability criterion is expressed as a complex-valued LMI, which can be checked numerically using YALMIP with the SDPT3 solver in MATLAB. An example with simulations is supplied to show the applicability and advantages of the obtained result. Copyright © 2018 Elsevier Ltd. All rights reserved.
Analysis of spreadable cheese by Raman spectroscopy and chemometric tools.
Oliveira, Kamila de Sá; Callegaro, Layce de Souza; Stephani, Rodrigo; Almeida, Mariana Ramos; de Oliveira, Luiz Fernando Cappa
2016-03-01
In this work, FT-Raman spectroscopy was explored to evaluate spreadable cheese samples. A partial least squares discriminant analysis was employed to identify the spreadable cheese samples containing starch. To build the models, two types of samples were used: commercial samples and samples manufactured in local industries. The method of supervised classification PLS-DA was employed to classify the samples as adulterated or without starch. Multivariate regression was performed using the partial least squares method to quantify the starch in the spreadable cheese. The limit of detection obtained for the model was 0.34% (w/w) and the limit of quantification was 1.14% (w/w). The reliability of the models was evaluated by determining the confidence interval, which was calculated using the bootstrap re-sampling technique. The results show that the classification models can be used to complement classical analysis and as screening methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR
Mobli, Mehdi; Hoch, Jeffrey C.
2017-01-01
Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. PMID:25456315
Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-01-01
Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant by using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probability of failure occurrence for the crushing subsystem, the conveyor system, and the crushing and mixing bed hall department is 73, 64, and 95 percent, respectively, and the conveyor belt subsystem was found to be the most failure-prone. Finally, maintenance is proposed as a method to control and prevent the occurrence of failures. PMID:26779433
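As a back-of-the-envelope illustration of this kind of calculation, the sketch below turns assumed constant failure rates into 200 h failure probabilities and combines them through an OR gate; the rates and the gate structure are hypothetical, and the department-level figure in the abstract presumably includes basic events not modeled here.

```python
# Fault-tree-style calculation: basic-event probabilities over 200 h, combined by an OR gate.
import math

t = 200.0                                 # operating interval, hours
rates = {                                 # assumed constant failure rates (per hour)
    "crusher": 0.00655,                   # 1 - exp(-rate*t) ~ 0.73
    "conveyor": 0.00511,                  # ~ 0.64
}

probs = {name: 1 - math.exp(-lam * t) for name, lam in rates.items()}
top = 1 - math.prod(1 - p for p in probs.values())   # OR gate: any subsystem failure
for name, p in probs.items():
    print(f"P(failure of {name} in {t:.0f} h) = {p:.2f}")
print(f"P(top event, OR of modeled subsystems) = {top:.2f}")
```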
Estimation of postmortem interval through albumin in CSF by simple dye binding method.
Parmar, Ankita K; Menon, Shobhana K
2015-12-01
Estimation of the postmortem interval is a very important question in some medicolegal investigations. For the precise estimation of the postmortem interval, there is a need for a method that can give accurate estimates. Bromocresol green (BCG) dye binding is a simple method widely used in routine practice. Application of this method in forensic practice may bring revolutionary changes. In this study, cerebrospinal fluid was aspirated by cisternal puncture in 100 autopsies. The concentration of albumin was studied with respect to the postmortem interval. After death, albumin present in CSF undergoes changes: by 72 h after death the concentration had fallen to 0.012 mM, and this decrease was linear from 2 h to 72 h. An important relationship was found between albumin concentration and postmortem interval, with an error of ± 1-4 h. The study concludes that CSF albumin can be a useful and significant parameter in the estimation of the postmortem interval. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
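A hypothetical sketch of the inverse-calibration step follows: fit a straight line to (postmortem interval, CSF albumin) calibration data and invert it to estimate the PMI of a new case. The synthetic calibration points (a linear fall to 0.012 mM at 72 h, with an assumed starting level) are chosen only to mimic the trend described above.

```python
# Linear calibration of CSF albumin vs postmortem interval, and its inversion.
import numpy as np

rng = np.random.default_rng(3)
pmi_h = np.linspace(2, 72, 30)                          # known postmortem intervals (h)
albumin_mM = 0.20 - (0.20 - 0.012) * (pmi_h - 2) / 70   # assumed linear decline
albumin_mM += rng.normal(scale=0.004, size=pmi_h.size)  # measurement noise

slope, intercept = np.polyfit(pmi_h, albumin_mM, 1)     # albumin = slope*PMI + intercept

def estimate_pmi(albumin):
    """Invert the calibration line to obtain a PMI estimate in hours."""
    return (albumin - intercept) / slope

print(f"slope {slope:.5f} mM/h, intercept {intercept:.3f} mM")
print(f"measured 0.10 mM -> estimated PMI ~ {estimate_pmi(0.10):.1f} h (reported error about +/- 1-4 h)")
```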
Clinical Validation of Heart Rate Apps: Mixed-Methods Evaluation Study
Stans, Jelle; Mortelmans, Christophe; Van Haelst, Ruth; Van Schelvergem, Gertjan; Pelckmans, Caroline; Smeets, Christophe JP; Lanssens, Dorien; De Cannière, Hélène; Storms, Valerie; Thijs, Inge M; Vaes, Bert; Vandervoort, Pieter M
2017-01-01
Background Photoplethysmography (PPG) is a proven way to measure heart rate (HR). This technology is already available in smartphones, which allows measuring HR only by using the smartphone. Given the widespread availability of smartphones, this creates a scalable way to enable mobile HR monitoring. An essential precondition is that these technologies are as reliable and accurate as the current clinical (gold) standards. At this moment, there is no consensus on a gold standard method for the validation of HR apps. This results in different validation processes that do not always reflect the true outcome of the comparison. Objective The aim of this paper was to investigate and describe the necessary elements in validating and comparing HR apps versus standard technology. Methods The FibriCheck (Qompium) app was used in two separate prospective nonrandomized studies. In the first study, the HR of the FibriCheck app was consecutively compared with 2 different Food and Drug Administration (FDA)-cleared HR devices: the Nonin oximeter and the AliveCor Mobile ECG. In the second study, a next step in validation was performed by comparing the beat-to-beat intervals of the FibriCheck app to a synchronized ECG recording. Results In the first study, the HR (BPM, beats per minute) of 88 random subjects consecutively measured with the 3 devices showed a correlation coefficient of .834 between FibriCheck and Nonin, .88 between FibriCheck and AliveCor, and .897 between Nonin and AliveCor. A one-way analysis of variance (ANOVA; P=.61) was executed to test the hypothesis that there were no significant differences between the HRs as measured by the 3 devices. In the second study, 20,298 R-R interval (RRI)-peak-to-peak interval (PPI) pairs (in ms) from 229 subjects were analyzed. This resulted in a positive correlation (rs=.993, root mean square deviation [RMSE]=23.04 ms, and normalized root mean square error [NRMSE]=0.012) between the PPI from FibriCheck and the RRI from the wearable ECG. There was no significant difference (P=.92) between these intervals. Conclusions Our findings suggest that the most suitable method for the validation of an HR app is a simultaneous measurement of the HR by the smartphone app and an ECG system, compared on the basis of beat-to-beat analysis. This approach could lead to more correct assessments of the accuracy of HR apps. PMID:28842392
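The sketch below illustrates the beat-to-beat comparison described above, computing the Spearman correlation, RMSE and a normalized RMSE between smartphone peak-to-peak intervals and reference ECG R-R intervals; the two synthetic interval series and the range-based NRMSE normalization are assumptions.

```python
# Agreement metrics between app-derived PPI and reference ECG RRI series.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
rri = rng.normal(loc=850, scale=60, size=500)      # reference ECG intervals, ms
ppi = rri + rng.normal(scale=20, size=500)         # app intervals with sensor noise

rho, p = spearmanr(ppi, rri)
rmse = np.sqrt(np.mean((ppi - rri) ** 2))
nrmse = rmse / (rri.max() - rri.min())             # one common normalization choice
print(f"r_s = {rho:.3f} (P = {p:.3g}), RMSE = {rmse:.1f} ms, NRMSE = {nrmse:.3f}")
```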
Coefficient Alpha Bootstrap Confidence Interval under Nonnormality
ERIC Educational Resources Information Center
Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew
2012-01-01
Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…
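A minimal sketch of one of the approaches under study, a percentile bootstrap confidence interval for coefficient alpha, is given below; the 6-item synthetic data set and the 2,000 resamples are arbitrary illustrative choices.

```python
# Percentile bootstrap CI for Cronbach's coefficient alpha.
import numpy as np

rng = np.random.default_rng(5)
n_persons, n_items = 200, 6
true_score = rng.normal(size=(n_persons, 1))
items = true_score + rng.normal(scale=1.0, size=(n_persons, n_items))   # toy item responses

def cronbach_alpha(x):
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n_persons, size=n_persons)   # resample persons with replacement
    boot.append(cronbach_alpha(items[idx]))

low, high = np.percentile(boot, [2.5, 97.5])
print(f"alpha = {cronbach_alpha(items):.3f}, 95% percentile bootstrap CI [{low:.3f}, {high:.3f}]")
```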
Rocca, Corinne H; Thompson, Kirsten M J; Goodman, Suzan; Westhoff, Carolyn L; Harper, Cynthia C
2016-06-01
Almost one-half of women having an abortion in the United States have had a previous procedure, which highlights a failure to provide adequate preventive care. Provision of intrauterine devices and implants, which have high upfront costs, can be uniquely challenging in the abortion care setting. We conducted a study of a clinic-wide training intervention on long-acting reversible contraception and examined the effect of the intervention, insurance coverage, and funding policies on the use of long-acting contraceptives after an abortion. This subanalysis of a cluster, randomized trial examines data from the 648 patients who had undergone an abortion who were recruited from 17 reproductive health centers across the United States. The trial followed participants 18-25 years old who did not desire pregnancy for a year. We measured the effect of the intervention, health insurance, and funding policies on contraceptive outcomes, which included intrauterine device and implant counseling and selection at the abortion visit, with the use of logistic regression with generalized estimating equations for clustering. We used survival analysis to model the actual initiation of these methods over 1 year. Women who obtained abortion care at intervention sites were more likely to report intrauterine device and implant counseling (70% vs 41%; adjusted odds ratio, 3.83; 95% confidence interval, 2.37-6.19) and the selection of these methods (36% vs 21%; adjusted odds ratio, 2.11; 95% confidence interval, 1.39-3.21). However, the actual initiation of methods was similar between study arms (22/100 woman-years each; adjusted hazard ratio, 0.88; 95% confidence interval, 0.51-1.51). Health insurance and funding policies were important for the initiation of intrauterine devices and implants. Compared with uninsured women, those women with public health insurance had a far higher initiation rate (adjusted hazard ratio, 2.18; 95% confidence interval, 1.31-3.62). Women at sites that provide state Medicaid enrollees abortion coverage also had a higher initiation rate (adjusted hazard ratio, 1.73; 95% confidence interval, 1.04-2.88), as did those at sites with state mandates for private health insurance to cover contraception (adjusted hazard ratio, 1.80; 95% confidence interval, 1.06-3.07). Few of the women with private insurance used it to pay for the abortion (28%), but those who did initiated long-acting contraceptive methods at almost twice the rate as women who paid for it themselves or with donated funds (adjusted hazard ratio, 1.94; 95% confidence interval, 1.10-3.43). The clinic-wide training increased long-acting reversible contraceptive counseling and selection but did not change initiation for abortion patients. Long-acting method use after abortion was associated strongly with funding. Restrictions on the coverage of abortion and contraceptives in abortion settings prevent the initiation of desired long-acting methods. Copyright © 2015 Elsevier Inc. All rights reserved.
Graphing within-subjects confidence intervals using SPSS and S-Plus.
Wright, Daniel B
2007-02-01
Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.
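The following sketch computes within-subjects intervals in the spirit of Loftus and Masson (1994), but via the Cousineau normalization with Morey's correction rather than the original ANOVA-based formula; the 3-condition data set is synthetic.

```python
# Within-subjects 95% CIs via subject-mean normalization (Cousineau-Morey style).
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(6)
n_subj, n_cond = 20, 3
subject_effect = rng.normal(scale=2.0, size=(n_subj, 1))          # large between-subject noise
cond_means = np.array([10.0, 10.5, 11.2])
data = cond_means + subject_effect + rng.normal(scale=0.5, size=(n_subj, n_cond))

# Remove each subject's mean, add back the grand mean, then correct the variance.
normalized = data - data.mean(axis=1, keepdims=True) + data.mean()
correction = n_cond / (n_cond - 1)                                 # Morey (2008) factor
sem = np.sqrt(correction * normalized.var(axis=0, ddof=1) / n_subj)
half_width = t.ppf(0.975, df=n_subj - 1) * sem

for m, hw in zip(data.mean(axis=0), half_width):
    print(f"condition mean {m:.2f} +/- {hw:.2f} (95% within-subjects CI)")
```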
Meta-Analysis of Drainage Versus No Drainage After Laparoscopic Cholecystectomy
Lucarelli, Pierino; Di Filippo, Annalisa; De Angelis, Francesco; Stipa, Francesco; Spaziani, Erasmo
2014-01-01
Background and Objectives: Routine drainage after laparoscopic cholecystectomy is still controversial. This meta-analysis was performed to assess the role of drains in reducing complications in laparoscopic cholecystectomy. Methods: An electronic search of Medline, Science Citation Index Expanded, Scopus, and the Cochrane Library database from January 1990 to June 2013 was performed to identify randomized clinical trials that compare prophylactic drainage with no drainage in laparoscopic cholecystectomy. The odds ratio for qualitative variables and standardized mean difference for continuous variables were calculated. Results: Twelve randomized controlled trials were included in the meta-analysis, involving 1939 patients randomized to a drain (960) versus no drain (979). The morbidity rate was lower in the no drain group (odds ratio, 1.97; 95% confidence interval, 1.26 to 3.10; P = .003). The wound infection rate was lower in the no drain group (odds ratio, 2.35; 95% confidence interval, 1.22 to 4.51; P = .01). Abdominal pain 24 hours after surgery was less severe in the no drain group (standardized mean difference, 2.30; 95% confidence interval, 1.27 to 3.34; P < .0001). No significant difference was present with respect to the presence and quantity of subhepatic fluid collection, shoulder tip pain, parenteral ketorolac consumption, nausea, vomiting, and hospital stay. Conclusion: This study was unable to prove that drains were useful in reducing complications in laparoscopic cholecystectomy. PMID:25516708
Pickering, Ethan M; Hossain, Mohammad A; Mousseau, Jack P; Swanson, Rachel A; French, Roger H; Abramson, Alexis R
2017-01-01
Current approaches to building efficiency diagnoses include conventional energy audit techniques that can be expensive and time consuming. In contrast, virtual energy audits of readily available 15-minute-interval building electricity consumption are being explored to provide quick, inexpensive, and useful insights into building operation characteristics. A cross-sectional analysis of six buildings in two different climate zones provides methods for data cleaning, population-based building comparisons, and relationships (correlations) of weather and electricity consumption. Data cleaning methods have been developed to categorize and appropriately filter or correct anomalous data including outliers, missing data, and erroneous values (resulting in < 0.5% anomalies). The utility of a cross-sectional analysis of a sample set of buildings' electricity consumption is found through comparisons of baseload, daily consumption variance, and energy use intensity. Correlations of weather and electricity consumption 15-minute interval datasets show important relationships for the heating and cooling seasons using computed correlations of a Time-Specific-Averaged-Ordered Variable (exterior temperature) and corresponding averaged variables (electricity consumption) (TSAOV method). The TSAOV method is unique as it introduces time of day as a third variable while also minimizing randomness in both correlated variables through averaging. This study found that many of the pair-wise linear correlation analyses lacked strong relationships, prompting the development of the new TSAOV method to uncover the causal relationship between electricity and weather. We conclude that a combination of varied HVAC system operations, building thermal mass, plug load use, and building set point temperatures are likely responsible for the poor correlations in the prior studies, while the correlation of time-specific-averaged-ordered temperature and corresponding averaged variables method developed herein adequately accounts for these issues and enables discovery of strong linear pair-wise correlation R values. TSAOV correlations lay the foundation for a new approach to building studies that mitigates plug load interference and yields more accurate insights into the weather-energy relationship for all building types. Over all six buildings analyzed, the TSAOV method reported highly significant average correlations per building of 0.82 to 0.94 in magnitude. Our rigorous statistics-based methods applied to 15-minute-interval electricity data further enable virtual energy audits of buildings to quickly and inexpensively inform energy savings measures.
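The sketch below is one reading of a TSAOV-style correlation: within each 15-minute time-of-day slot, observations are ordered by exterior temperature, temperature and consumption are averaged within temperature bins, and the two averaged series are correlated. The binning choices and the synthetic cooling-driven load data are assumptions, not the authors' implementation.

```python
# TSAOV-style correlation of bin-averaged temperature and electricity use per time-of-day slot.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
idx = pd.date_range("2016-06-01", "2016-08-31 23:45", freq="15min")
temp = 25 + 8 * np.sin(2 * np.pi * (idx.hour * 60 + idx.minute) / 1440) + rng.normal(scale=2, size=len(idx))
kwh = 40 + 1.5 * np.clip(temp - 22, 0, None) + rng.normal(scale=5, size=len(idx))  # cooling-driven load

df = pd.DataFrame({"temp": temp, "kwh": kwh, "tod": idx.hour * 60 + idx.minute})

def tsaov_r(group, n_bins=10):
    """Correlation of bin-averaged temperature vs bin-averaged consumption within one slot."""
    ranked = group.sort_values("temp").reset_index(drop=True)
    bin_id = np.floor(np.arange(len(ranked)) * n_bins / len(ranked)).astype(int)
    binned = ranked.groupby(bin_id)[["temp", "kwh"]].mean()
    return np.corrcoef(binned["temp"], binned["kwh"])[0, 1]

results = [tsaov_r(g) for _, g in df.groupby("tod")]
print(f"mean TSAOV-style correlation across time-of-day slots: {np.mean(results):.2f}")
```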
ERIC Educational Resources Information Center
Rindskopf, David
2012-01-01
Muthen and Asparouhov (2012) made a strong case for the advantages of Bayesian methodology in factor analysis and structural equation models. I show additional extensions and adaptations of their methods and show how non-Bayesians can take advantage of many (though not all) of these advantages by using interval restrictions on parameters. By…
Bootstrap evaluation of a young Douglas-fir height growth model for the Pacific Northwest
Nicholas R. Vaughn; Eric C. Turnblom; Martin W. Ritchie
2010-01-01
We evaluated the stability of a complex regression model developed to predict the annual height growth of young Douglas-fir. This model is highly nonlinear and is fit in an iterative manner for annual growth coefficients from data with multiple periodic remeasurement intervals. The traditional methods for such a sensitivity analysis either involve laborious math or...
Louis R. Iverson; Paul G. Risser
1987-01-01
Geographic information systems and remote sensing techniques are powerful tools in the analysis of long-term changes in vegetation and land use, especially because spatial information from two or more time intervals can be compared more readily than by manual methods. A primary restriction is the paucity of data that has been digitized from earlier periods. The...
Cost-effectiveness Analysis with Influence Diagrams.
Arias, M; Díez, F J
2015-01-01
Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. To develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay - separated by cost-effectiveness thresholds - and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
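The toy sketch below is not the influence-diagram evaluation algorithm itself; it only illustrates the kind of output described, namely intervals of willingness to pay separated by cost-effectiveness thresholds, each with an optimal intervention, found here by sweeping net monetary benefit over hypothetical costs and effectiveness values.

```python
# Cost-effectiveness thresholds from net monetary benefit (NMB = effectiveness * WTP - cost).
interventions = {                       # name: (cost in currency units, effectiveness in QALYs)
    "no treatment": (0.0, 1.00),
    "drug A": (4000.0, 1.20),
    "drug B": (15000.0, 1.45),
}

def best_option(wtp):
    """Intervention maximizing net monetary benefit at a given willingness to pay."""
    return max(interventions, key=lambda k: interventions[k][1] * wtp - interventions[k][0])

previous = None
for wtp in range(0, 100001, 500):       # sweep the willingness-to-pay axis
    current = best_option(wtp)
    if current != previous:             # a change marks a cost-effectiveness threshold
        print(f"from WTP ~ {wtp:>6d}: optimal intervention = {current}")
        previous = current
```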
Roentgen stereophotogrammetric analysis of metal-backed hemispherical cups without attached markers.
Valstar, E R; Spoor, C W; Nelissen, R G; Rozing, P M
1997-11-01
A method for the detection of micromotion of a metal-backed hemispherical acetabular cup is presented and tested. Unlike in conventional roentgen stereophotogrammetric analysis, the cup does not have to be marked with tantalum markers; the micromotion is calculated from the contours of the hemispherical part and the base circle of the cup. In this way, two rotations (tilt and anteversion) and the translations along the three cardinal axes are obtained. In a phantom study, the maximum error in the position of the cup's centre was 0.04 mm. The mean error in the orientation of the cup was 0.41 degree, with a 95% confidence interval of 0.28-0.54 degree. The in vivo accuracy was tested by repeated measurement of 21 radiographs from seven patients. The upper bound of the 95% tolerance interval for the translations along the transversal, longitudinal, and sagittal axes was 0.09, 0.07, and 0.34 mm, respectively; for the rotation, this upper bound was 0.39 degree. These results show that the new method, in which the position and orientation of a metal-backed hemispherical cup are calculated from its projected contours, is a simple and accurate alternative to attaching markers to the cup.
A comprehensive prediction and evaluation method of pilot workload.
Feng, Chuanyan; Wanyan, Xiaoru; Yang, Kun; Zhuang, Damin; Wu, Xu
2018-01-01
The prediction and evaluation of pilot workload is a key problem in the human-factors airworthiness of the cockpit. A traffic-pattern flight task was designed in a flight simulation environment to carry out pilot workload prediction and improve the evaluation method. Workload predictions for typical flight subtasks and dynamic phases (cruise, approach, and landing) were built on multiple resource theory, and their validity was supported by correlation analysis between sensitive physiological data and the predicted values. Statistical analysis indicated that eye-movement indices (fixation frequency, mean fixation time, saccade frequency, mean saccade time, and mean pupil diameter), electrocardiogram indices (mean normal-to-normal interval and the ratio of low frequency to the sum of low and high frequency), and electrodermal activity indices (mean tonic and mean phasic levels) were all sensitive to the typical workloads experienced by subjects. A multinomial logistic regression model based on a combination of physiological indices (fixation frequency, mean normal-to-normal interval, the ratio of low frequency to the sum of low and high frequency, and mean tonic level) was constructed, and its discrimination accuracy was satisfactory at 84.85%.
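As a rough illustration of the classification step described above (not the authors' implementation), the following sketch fits a multinomial logistic regression to a synthetic table of the four physiological indices named in the abstract; the feature values, labels, and column layout are hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 120
    # Hypothetical physiological features: fixation frequency, mean NN interval,
    # LF/(LF+HF) ratio, and mean tonic electrodermal level.
    X = np.column_stack([
        rng.normal(3.0, 0.8, n),    # fixation frequency (1/s)
        rng.normal(800, 80, n),     # mean NN interval (ms)
        rng.uniform(0.4, 0.9, n),   # LF/(LF+HF)
        rng.normal(5.0, 1.5, n),    # mean tonic EDA (microsiemens)
    ])
    y = rng.integers(0, 3, n)       # workload class: 0=cruise, 1=approach, 2=landing

    clf = LogisticRegression(max_iter=1000)   # multinomial by default for multi-class targets
    acc = cross_val_score(clf, X, y, cv=5).mean()
    # With random labels this is near chance; with real labelled data the
    # cross-validated accuracy would be the quantity of interest.
    print(f"cross-validated accuracy: {acc:.2%}")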
Hashimoto, Tetsuo; Sanada, Yukihisa; Uezu, Yasuhiro
2004-05-01
A delayed coincidence method, time-interval analysis (TIA), has been applied to successive alpha-alpha decay events on the millisecond time-scale. Such decay events are part of the (220)Rn → (216)Po (T1/2 = 145 ms) (Th-series) and (219)Rn → (215)Po (T1/2 = 1.78 ms) (Ac-series) decay chains. By using TIA in addition to measurement of (226)Ra (U-series) from alpha-spectrometry by liquid scintillation counting (LSC), two natural decay series could be identified and separated. The TIA detection efficiency was improved by using the pulse-shape discrimination technique (PSD) to reject beta-pulses, by solvent extraction of Ra combined with simple chemical separation, and by purging the scintillation solution with dry N2 gas. The U- and Th-series together with the Ac-series were determined, respectively, from alpha spectra and TIA carried out immediately after Ra-extraction. Using the (221)Fr → (217)At (T1/2 = 32.3 ms) decay process as a tracer, overall yields were estimated from application of TIA to the (225)Ra (Np-decay series) at the time of maximum growth. The present method has proven useful for simultaneous determination of three radioactive decay series in environmental samples.
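A minimal sketch of the delayed-coincidence idea behind TIA, under the assumption that candidate (220)Rn-(216)Po pairs are simply successive alpha events separated by no more than a few (216)Po half-lives; the timestamps are synthetic and the window choice is illustrative only.

    import numpy as np

    HALF_LIFE_PO216 = 0.145          # s
    WINDOW = 5 * HALF_LIFE_PO216     # accept pairs separated by less than ~5 half-lives

    rng = np.random.default_rng(1)
    event_times = np.sort(rng.uniform(0.0, 600.0, 2000))   # hypothetical alpha-event times (s)

    dt = np.diff(event_times)                  # interval between successive alpha events
    pairs = np.flatnonzero(dt < WINDOW)        # candidate alpha-alpha coincidences
    print(f"{pairs.size} candidate pairs within {WINDOW * 1e3:.0f} ms")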
Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.
Lee, Sunbok; Lei, Man-Kit; Brody, Gene H
2015-06-01
Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to be able to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
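For the usual interaction model y = b0 + b1*x + b2*z + b3*x*z with a binary moderator z, the two simple regression lines cross at x = -b2/b3. The sketch below, run on synthetic data rather than the study's simulation design, computes the crossover point and one of the six interval types compared above (the percentile bootstrap).

    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    x = rng.normal(0, 1, n)
    z = rng.integers(0, 2, n)
    y = 1.0 + 0.5 * x - 0.4 * z + 0.8 * x * z + rng.normal(0, 1, n)

    def crossover(x, z, y):
        X = np.column_stack([np.ones_like(x), x, z, x * z])
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        return -b[2] / b[3]              # lines cross where b2 + b3*x = 0

    point = crossover(x, z, y)
    boot = np.array([crossover(x[i], z[i], y[i])
                     for i in (rng.integers(0, n, n) for _ in range(2000))])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"crossover = {point:.3f}, 95% percentile bootstrap CI [{lo:.3f}, {hi:.3f}]")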
NASA Astrophysics Data System (ADS)
Barengoltz, Jack
2016-07-01
Monte Carlo (MC) is a common method to estimate probability, effectively by simulation. For planetary protection, it may be used to estimate the probability of impact P_I by a launch vehicle (upper stage) on a protected planet. The object of the analysis is to provide a value for P_I with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P_I. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired; if more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P_I might exceed the specification value.) After the analysis is conducted, the actual number of hits is, of course, the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability for some interval in the number of hits is calculable; before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean of a Poisson distribution. F. Garwood (1936, "Fiducial limits for the Poisson distribution," Biometrika 28, 437-442) published an appropriate method that uses the chi-squared function, actually its inverse (the integral chi-squared function would yield the probability α as a function of the mean μ and an actual value n), despite the notation used. Garwood's formula gives the upper and lower limits of the mean μ with two-tailed probability 1-α as a function of the LOC α and an estimated value of the number of "successes" n. In a MC analysis for planetary protection, only the upper limit is of interest, i.e., the single-tailed distribution (a smaller actual P_I is no problem). One advantage of this method is that the function is available in EXCEL; note that care must be taken with the definition of the CHIINV function (the inverse of the integral chi-squared distribution). The equivalent inequality in EXCEL is μ < CHIINV[1-α, 2(n+1)]. In practice, one calculates this upper limit for a specified LOC α and a guess of how many hits n will be found after the MC analysis. The estimate of the number of histories required is then this upper limit divided by the specification for the allowed P_I (rounded up). However, if the number of hits actually exceeds the guess, the P_I requirement will be met only with a smaller LOC. A disadvantage is that the intervals about the mean are "in general too wide, yielding coverage probabilities much greater than 1-α" (G. Casella and C. Robert, 1988, Purdue University Technical Report #88-7 or Cornell University Technical Report BU-903-M). For planetary protection, this technical issue means that the upper limit of the interval and the probability associated with the interval (i.e., the LOC) are conservative.
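The sizing calculation can be sketched with the standard Garwood one-sided upper limit for a Poisson mean (the textbook form carries a factor of 1/2, and scipy's chi2.ppf takes the left-tail probability, so the call is written differently from the EXCEL expression quoted above). The specification value and guessed hit count below are hypothetical.

    from math import ceil
    from scipy.stats import chi2

    LOC = 0.999        # required level of confidence (hypothetical)
    P_I_SPEC = 1e-4    # allowed probability of impact (hypothetical)
    n_guess = 3        # guessed number of hits in the MC run

    # Garwood one-sided upper limit on the Poisson mean given n observed hits.
    mu_upper = chi2.ppf(LOC, 2 * (n_guess + 1)) / 2.0

    # Histories needed so that, even with n_guess hits, the estimated P_I
    # stays below the specification at the chosen LOC.
    n_histories = ceil(mu_upper / P_I_SPEC)
    print(f"upper limit on mean hits = {mu_upper:.2f}, required histories = {n_histories}")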
Zhang, Chuanbao; Guo, Wei; Huang, Hengjian; Ma, Yueyun; Zhuang, Junhua; Zhang, Jie
2013-01-01
Background: Reference intervals of liver function tests are very important for the screening, diagnosis, treatment, and monitoring of liver diseases. We aim to establish common reference intervals of liver function tests specifically for the Chinese adult population. Methods: A total of 3210 individuals (20–79 years) were enrolled from six representative geographical regions in China. ALT, AST, GGT, ALP, total protein, albumin and total bilirubin were measured using three analytical systems mainly used in China. The newly established reference intervals were based on the results of traceability or multiple systems, and were then validated in 21 large hospitals nationwide that are qualified by the National External Quality Assessment (EQA) scheme of China. Results: We established reference intervals for the seven liver function tests for the Chinese adult population and found apparent differences in reference values across the partitioning variables of gender (ALT, GGT, total bilirubin), age (ALP, albumin) and region (total protein). More than 86% of the 21 laboratories passed the validation in all subgroups of reference intervals, and overall about 95.3% to 98.8% of the 1220 validation results fell within the new reference intervals for all liver function tests. In comparison with the currently recommended reference intervals in China, the observed single-sided proportions of out-of-range values in our study deviated significantly from the nominal 2.5% for most of the tests, such as total bilirubin (15.2%), ALP (0.2%) and albumin (0.0%). Most of the reference intervals in our study also differed clearly from those reported for other ethnic groups. Conclusion: The currently used reference intervals are no longer applicable to the Chinese population. We have established common reference intervals of liver function tests that are defined specifically for the Chinese population and can be used universally among EQA-approved laboratories across China. PMID:24058449
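A minimal sketch of the nonparametric reference-interval calculation (2.5th and 97.5th percentiles), here for a single analyte partitioned by sex; the values are simulated and do not reproduce the study's data or analytical systems.

    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical ALT results (U/L) for male and female reference samples.
    alt = {"male": rng.lognormal(3.1, 0.35, 1600),
           "female": rng.lognormal(2.9, 0.35, 1600)}

    for sex, values in alt.items():
        lo, hi = np.percentile(values, [2.5, 97.5])
        print(f"{sex}: reference interval {lo:.0f}-{hi:.0f} U/L (n = {values.size})")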
An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.
ERIC Educational Resources Information Center
Capraro, Mary Margaret
This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…
Francq, Bernard G; Govaerts, Bernadette
2016-06-30
Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated)-errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
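For orientation, the sketch below computes the classical Bland-Altman bias and 95% limits of agreement that the paper contrasts with the regression-based tolerance and predictive intervals; the paired measurements are simulated.

    import numpy as np

    rng = np.random.default_rng(4)
    true = rng.normal(10, 2, 100)
    method_x = true + rng.normal(0, 0.5, 100)
    method_y = true + 0.3 + rng.normal(0, 0.5, 100)   # method Y reads slightly high

    diff = method_y - method_x
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    print(f"bias = {bias:.2f}, 95% limits of agreement = [{loa[0]:.2f}, {loa[1]:.2f}]")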
Reassessment of the Access Testosterone chemiluminescence assay and comparison with LC-MS method.
Dittadi, Ruggero; Matteucci, Mara; Meneghetti, Elisa; Ndreu, Rudina
2018-03-01
To reassess the imprecision and Limit of Quantitation, to evaluate the cross-reaction with dehydroepiandrosterone-sulfate (DHEAS), the accuracy toward liquid chromatography-mass spectrometry (LC-MS) and the reference interval of the Access Testosterone method, performed on the DxI immunoassay platform (Beckman Coulter). Imprecision was evaluated by testing six pool samples assayed in 20 different runs using two reagent lots. The cross-reaction with DHEAS was studied both by a displacement curve and by spiking DHEAS standard into two serum samples with known amounts of testosterone. The comparison with LC-MS was evaluated by Passing-Bablok analysis in 21 routine serum samples and 19 control samples from an External Quality Assurance (EQA) scheme. The reference interval was verified by an indirect estimation on 2445 male and 2838 female outpatients. The imprecision study showed a coefficient of variation (CV) between 2.7% and 34.7% for serum pools ranging from 16.3 to 0.27 nmol/L. The Limit of Quantitation at 20% CV was 0.53 nmol/L. DHEAS showed a cross-reaction of 0.0074%. The comparison with LC-MS showed a trend toward a slight underestimation by the immunoassay vs LC-MS (Passing-Bablok equations: DxI = -0.24 + 0.906 LCMS in serum samples and DxI = -0.299 + 0.981 LCMS in EQA samples). The verification of the reference interval showed a 2.5th-97.5th percentile distribution of 6.6-24.3 nmol/L for males over 14 years and <0.5-2.78 nmol/L for female subjects, in accord with the reference intervals reported by the manufacturer. The Access Testosterone method could be considered an adequately reliable tool for testosterone measurement. © 2017 Wiley Periodicals, Inc.
Interval mapping for red/green skin color in Asian pears using a modified QTL-seq method
Xue, Huabai; Shi, Ting; Wang, Fangfang; Zhou, Huangkai; Yang, Jian; Wang, Long; Wang, Suke; Su, Yanli; Zhang, Zhen; Qiao, Yushan; Li, Xiugen
2017-01-01
Pears with red skin are attractive to consumers and provide additional health benefits. Identification of the gene(s) responsible for skin coloration can benefit cultivar selection and breeding. The use of QTL-seq, a bulked segregant analysis method, can be problematic when heterozygous parents are involved. The present study modified the QTL-seq method by introducing a |Δ(SNP-index)| parameter to improve the accuracy of mapping the red skin trait in a group of highly heterozygous Asian pears. The analyses were based on mixed DNA pools composed of 28 red-skinned and 27 green-skinned pear lines derived from a cross between the ‘Mantianhong’ and ‘Hongxiangsu’ red-skinned cultivars. The ‘Dangshansuli’ cultivar genome was used as reference for sequence alignment. An average single-nucleotide polymorphism (SNP) index was calculated using a sliding window approach (200-kb windows, 20-kb increments). Nine scaffolds within the candidate QTL interval were in the fifth linkage group from 111.9 to 177.1 cM. There was a significant linkage between the insertions/deletions and simple sequence repeat markers designed from the candidate intervals and the red/green skin (R/G) locus, which was in a 582.5-kb candidate interval that contained 81 predicted protein-coding gene models and was composed of two subintervals at the bottom of the fifth chromosome. The ZFRI 130-16, In2130-12 and In2130-16 markers located near the R/G locus could potentially be used to identify the red skin trait in Asian pear populations. This study provides new insights into the genetics controlling the red skin phenotype in this fruit. PMID:29118994
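A schematic of the sliding-window |Δ(SNP-index)| computation described above (200-kb windows, 20-kb steps); the per-SNP indices and positions are simulated, and this is not the authors' pipeline.

    import numpy as np

    rng = np.random.default_rng(5)
    n_snps = 5000
    pos = np.sort(rng.uniform(0, 30e6, n_snps))    # SNP positions on one scaffold (bp)
    idx_red = rng.uniform(0.2, 0.8, n_snps)        # SNP-index in the red-skinned pool
    idx_green = rng.uniform(0.2, 0.8, n_snps)      # SNP-index in the green-skinned pool
    delta = np.abs(idx_red - idx_green)            # |delta(SNP-index)| per SNP

    WIN, STEP = 200_000, 20_000
    starts = np.arange(0, pos.max() - WIN, STEP)
    window_delta = np.array([delta[(pos >= s) & (pos < s + WIN)].mean() for s in starts])
    peak = starts[np.nanargmax(window_delta)]
    print(f"window with the highest mean |delta(SNP-index)| starts at {peak / 1e6:.2f} Mb")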
NASA Astrophysics Data System (ADS)
Ham, Yoo-Geun; Song, Hyo-Jong; Jung, Jaehee; Lim, Gyu-Ho
2017-04-01
This study introduces an altered version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU) method, to enhance the assimilation accuracy of the IAU while retaining the continuity of the analysis. Like the IAU, the NIAU is designed to add analysis increments at every model time step to improve continuity in intermittent data assimilation. Unlike the IAU, however, the NIAU applies time-evolved forcing, constructed with the forward operator, as corrections to the model. In terms of the accuracy of the analysis field, the NIAU solution is better than that of the IAU, whose analysis is performed at the start of the time window over which the IAU forcing is added; this is because, in linear systems, the NIAU solution equals that of an intermittent data assimilation method at the end of the assimilation interval. To retain the filtering property in the NIAU, the forward operator used to propagate the increment is reconstructed from only the dominant singular vectors. These advantages of the NIAU are illustrated using the simple 40-variable Lorenz model.
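A toy illustration of the incremental-analysis-update idea that the NIAU builds on: a fixed fraction of the analysis increment is added at every model step across the assimilation window (the nonstationary variant would instead evolve the increment with the forward operator). The model, state, and increment here are made up.

    import numpy as np

    def step(x, dt=0.05):
        # One step of a toy damped model, standing in for the forecast model.
        return x + dt * (-0.1 * x)

    x = np.array([1.0, -0.5])            # model state
    increment = np.array([0.2, 0.1])     # analysis increment valid for this window
    n_steps = 20                         # window length in model steps

    for _ in range(n_steps):
        x = step(x) + increment / n_steps    # IAU: add a slice of the increment each step

    print("state after the assimilation window:", x)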
NUMERICAL METHODS FOR SOLVING THE MULTI-TERM TIME-FRACTIONAL WAVE-DIFFUSION EQUATION.
Liu, F; Meerschaert, M M; McGough, R J; Zhuang, P; Liu, Q
2013-03-01
In this paper, the multi-term time-fractional wave-diffusion equations are considered. The multi-term time fractional derivatives are defined in the Caputo sense, whose orders belong to the intervals [0,1], [1,2), [0,2), [0,3), [2,3) and [2,4), respectively. Some computationally effective numerical methods are proposed for simulating the multi-term time-fractional wave-diffusion equations. The numerical results demonstrate the effectiveness of theoretical analysis. These methods and techniques can also be extended to other kinds of the multi-term fractional time-space models with fractional Laplacian.
Analysing attitude data through ridit schemes.
El-rouby, M G
1994-12-02
The attitudes of individuals and populations on various issues are usually assessed through sample surveys. Responses to survey questions are then scaled and combined into a meaningful whole which defines the measured attitude. The applied scales may be of nominal, ordinal, interval, or ratio nature depending upon the degree of sophistication the researcher wants to introduce into the measurement. This paper discusses methods of analysis for categorical variables of the type used in attitude and human behavior research, and recommends adoption of ridit analysis, a technique which has been successfully applied to epidemiological, clinical investigation, laboratory, and microbiological data. The ridit methodology is described after reviewing some general attitude scaling methods and problems of analysis related to them. The ridit method is then applied to a recent study conducted to assess health care service quality in North Carolina. This technique is conceptually and computationally simpler than other conventional statistical methods, and is also distribution-free. Basic requirements and limitations on its use are indicated.
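A minimal sketch of ridit scoring: ridits are computed from a reference distribution over the ordered response categories, and the mean ridit of a comparison group estimates the probability that a randomly chosen member of that group falls above a randomly chosen member of the reference group. The category counts below are invented.

    import numpy as np

    # Ordered rating categories (e.g. "very poor" ... "excellent"); hypothetical counts.
    reference = np.array([10, 40, 120, 200, 130])
    comparison = np.array([5, 20, 80, 180, 215])

    p_ref = reference / reference.sum()
    # Ridit of category j: proportion below j plus half the proportion in j.
    ridits = np.cumsum(p_ref) - 0.5 * p_ref

    mean_ridit = np.sum(ridits * comparison / comparison.sum())
    print(f"mean ridit of the comparison group = {mean_ridit:.3f} (0.5 = no difference)")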
Garway-Heath, David F; Quartilho, Ana; Prah, Philip; Crabb, David P; Cheng, Qian; Zhu, Haogang
2017-08-01
To evaluate the ability of various visual field (VF) analysis methods to discriminate treatment groups in glaucoma clinical trials and establish the value of time-domain optical coherence tomography (TD OCT) imaging as an additional outcome. VFs and retinal nerve fibre layer thickness (RNFLT) measurements (acquired by TD OCT) from 373 glaucoma patients in the UK Glaucoma Treatment Study (UKGTS) at up to 11 scheduled visits over a 2 year interval formed the cohort to assess the sensitivity of progression analysis methods. Specificity was assessed in 78 glaucoma patients with up to 11 repeated VF and OCT RNFLT measurements over a 3 month interval. Growth curve models assessed the difference in VF and RNFLT rate of change between treatment groups. Incident progression was identified by 3 VF-based methods: Guided Progression Analysis (GPA), 'ANSWERS' and 'PoPLR', and one based on VFs and RNFLT: 'sANSWERS'. Sensitivity, specificity and discrimination between treatment groups were evaluated. The rate of VF change was significantly faster in the placebo, compared to active treatment, group (-0.29 vs +0.03 dB/year, P <.001); the rate of RNFLT change was not different (-1.7 vs -1.1 dB/year, P =.14). After 18 months and at 95% specificity, the sensitivity of ANSWERS and PoPLR was similar (35%); sANSWERS achieved a sensitivity of 70%. GPA, ANSWERS and PoPLR discriminated treatment groups with similar statistical significance; sANSWERS did not discriminate treatment groups. Although the VF progression-detection method including VF and RNFLT measurements is more sensitive, it does not improve discrimination between treatment arms.
Liu, J; Li, Y P; Huang, G H; Zeng, X T; Nie, S
2016-01-01
In this study, an interval-stochastic-based risk analysis (RSRA) method is developed for supporting river water quality management in a rural system under uncertainty (i.e., uncertainties exist in a number of system components as well as their interrelationships). The RSRA method is effective in risk management and policy analysis, particularly when the inputs (such as allowable pollutant discharge and pollutant discharge rate) are expressed as probability distributions and interval values. Moreover, decision-makers' attitudes towards system risk can be reflected using a restricted resource measure by controlling the variability of the recourse cost. The RSRA method is then applied to a real case of water quality management in the Heshui River Basin (a rural area of China), where chemical oxygen demand (COD), total nitrogen (TN), total phosphorus (TP), and soil loss are selected as major indicators to identify the water pollution control strategies. Results reveal that uncertainties and risk attitudes have significant effects on both pollutant discharge and system benefit. A high risk measure level can lead to a reduced system benefit; however, this reduction also corresponds to raised system reliability. Results also disclose that (a) agriculture is the dominant contributor to soil loss, TN, and TP loads, and abatement actions should be mainly carried out for paddy and dry farms; (b) livestock husbandry is the main COD discharger, and abatement measures should be mainly conducted for poultry farms; (c) fishery accounts for a high percentage of TN, TP, and COD discharges but has a low percentage of the overall net benefit, and it may be beneficial to cease fishery activities in the basin. The findings can facilitate the local authority in identifying desired pollution control strategies with the tradeoff between socioeconomic development and environmental sustainability.
O'Leary, D D; Lin, D C; Hughson, R L
1999-09-01
The heart rate component of the arterial baroreflex gain (BRG) was determined with auto-regressive moving-average (ARMA) analysis during each of spontaneous (SB) and random breathing (RB) protocols. Ten healthy subjects completed each breathing pattern on two different days in each of two different body positions, supine (SUP) and head-up tilt (HUT). The R-R interval, systolic arterial pressure (SAP) and instantaneous lung volume were recorded continuously. BRG was estimated from the ARMA impulse response relationship of R-R interval to SAP and from the spontaneous sequence method. The results indicated that both the ARMA and spontaneous sequence methods were reproducible (r = 0.76 and r = 0.85, respectively). As expected, BRG was significantly less in the HUT compared to SUP position for both ARMA (mean +/- SEM; 3.5 +/- 0.3 versus 11.2 +/- 1.4 ms mmHg-1; P < 0.01) and spontaneous sequence analysis (10.3 +/- 0.8 versus 31.5 +/- 2.3 ms mmHg-1; P < 0.001). However, no significant difference was found between BRG during RB and SB protocols for either ARMA (7.9 +/- 1.4 versus 6.7 +/- 0.8 ms mmHg-1; P = 0.27) or spontaneous sequence methods (21.8 +/- 2.7 versus 20.0 +/- 2.1 ms mmHg-1; P = 0.24). BRG was correlated during RB and SB protocols (r = 0.80; P < 0.0001). ARMA and spontaneous BRG estimates were correlated (r = 0.79; P < 0.0001), with spontaneous sequence values being consistently larger (P < 0.0001). In conclusion, we have shown that ARMA-derived BRG values are reproducible and that they can be determined during SB conditions, making the ARMA method appropriate for use in a wider range of patients.
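A simplified version of the spontaneous sequence method used for comparison above: find runs of at least three beats in which systolic pressure and the R-R interval both rise or both fall, regress RR on SAP within each run, and average the slopes. The data are synthetic, and the zero-beat lag used here is only one of several conventions in the literature.

    import numpy as np

    def sequence_brg(sap, rr, min_len=3):
        # Baroreflex gain (ms/mmHg) from spontaneous concordant SAP/RR sequences.
        d_sap, d_rr = np.sign(np.diff(sap)), np.sign(np.diff(rr))
        concordant = (d_sap == d_rr) & (d_sap != 0)
        slopes, start = [], None
        for i, ok in enumerate(np.append(concordant, False)):
            if ok and start is None:
                start = i
            elif not ok and start is not None:
                if i - start + 1 >= min_len:          # run length counted in beats
                    seg = slice(start, i + 1)
                    slopes.append(np.polyfit(sap[seg], rr[seg], 1)[0])
                start = None
        return np.mean(slopes) if slopes else np.nan

    rng = np.random.default_rng(6)
    sap = 120 + np.cumsum(rng.normal(0, 1.5, 300))                    # mmHg, beat-to-beat SAP
    rr = 900 + 8 * (sap - sap.mean()) + rng.normal(0, 10, 300)        # ms, RR loosely tracking SAP
    print(f"sequence-method BRG ~ {sequence_brg(sap, rr):.1f} ms/mmHg")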
Heritability of and Mortality Prediction With a Longevity Phenotype: The Healthy Aging Index
2014-01-01
Background. Longevity-associated genes may modulate risk for age-related diseases and survival. The Healthy Aging Index (HAI) may be a subphenotype of longevity, which can be constructed in many studies for genetic analysis. We investigated the HAI’s association with survival in the Cardiovascular Health Study and heritability in the Long Life Family Study. Methods. The HAI includes systolic blood pressure, pulmonary vital capacity, creatinine, fasting glucose, and Modified Mini-Mental Status Examination score, each scored 0, 1, or 2 using approximate tertiles and summed from 0 (healthy) to 10 (unhealthy). In Cardiovascular Health Study, the association with mortality and accuracy predicting death were determined with Cox proportional hazards analysis and c-statistics, respectively. In Long Life Family Study, heritability was determined with a variance component–based family analysis using a polygenic model. Results. Cardiovascular Health Study participants with unhealthier index scores (7–10) had 2.62-fold (95% confidence interval: 2.22, 3.10) greater mortality than participants with healthier scores (0–2). The HAI alone predicted death moderately well (c-statistic = 0.643, 95% confidence interval: 0.626, 0.661, p < .0001) and slightly worse than age alone (c-statistic = 0.700, 95% confidence interval: 0.684, 0.717, p < .0001; p < .0001 for comparison of c-statistics). Prediction increased significantly with adjustment for demographics, health behaviors, and clinical comorbidities (c-statistic = 0.780, 95% confidence interval: 0.765, 0.794, p < .0001). In Long Life Family Study, the heritability of the HAI was 0.295 (p < .0001) overall, 0.387 (p < .0001) in probands, and 0.238 (p = .0004) in offspring. Conclusion. The HAI should be investigated further as a candidate phenotype for uncovering longevity-associated genes in humans. PMID:23913930
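A schematic of the index construction: each component is cut at approximate tertiles, scored 0, 1 or 2, and the scores are summed. The component values, tertile handling, and scoring directions below are illustrative rather than the study's exact definitions.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)
    df = pd.DataFrame({
        "sbp": rng.normal(135, 18, 500),               # systolic blood pressure
        "vital_capacity": rng.normal(3.0, 0.7, 500),
        "creatinine": rng.normal(1.0, 0.25, 500),
        "glucose": rng.normal(100, 15, 500),
        "mmse": rng.normal(92, 6, 500).clip(0, 100),
    })

    # Components for which higher values are healthier get reversed scoring.
    higher_is_better = {"vital_capacity", "mmse"}

    def tertile_score(series, reverse=False):
        score = pd.qcut(series, 3, labels=[0, 1, 2]).astype(int)
        return 2 - score if reverse else score

    hai = sum(tertile_score(df[c], reverse=(c in higher_is_better)) for c in df.columns)
    print(hai.describe())   # ranges from 0 (healthiest) to 10 (unhealthiest)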
Helfer, Bartosz; Prosser, Aaron; Samara, Myrto T; Geddes, John R; Cipriani, Andrea; Davis, John M; Mavridis, Dimitris; Salanti, Georgia; Leucht, Stefan
2015-04-14
As the number of systematic reviews is growing rapidly, we systematically investigate whether meta-analyses published in leading medical journals present an outline of available evidence by referring to previous meta-analyses and systematic reviews. We searched PubMed for recent meta-analyses of pharmacological treatments published in high impact factor journals. Previous systematic reviews and meta-analyses were identified with electronic searches of keywords and by searching reference sections. We analyzed the number of meta-analyses and systematic reviews that were cited, described and discussed in each recent meta-analysis. Moreover, we investigated publication characteristics that potentially influence the referencing practices. We identified 52 recent meta-analyses and 242 previous meta-analyses on the same topics. Of these, 66% of identified previous meta-analyses were cited, 36% described, and only 20% discussed by recent meta-analyses. The probability of citing a previous meta-analysis was positively associated with its publication in a journal with a higher impact factor (odds ratio, 1.49; 95% confidence interval, 1.06 to 2.10) and more recent publication year (odds ratio, 1.19; 95% confidence interval 1.03 to 1.37). Additionally, the probability of a previous study being described by the recent meta-analysis was inversely associated with the concordance of results (odds ratio, 0.38; 95% confidence interval, 0.17 to 0.88), and the probability of being discussed was increased for previous studies that employed meta-analytic methods (odds ratio, 32.36; 95% confidence interval, 2.00 to 522.85). Meta-analyses on pharmacological treatments do not consistently refer to and discuss findings of previous meta-analyses on the same topic. Such neglect can lead to research waste and be confusing for readers. Journals should make the discussion of related meta-analyses mandatory.
van Gorp, Freek; Duffull, Stephen; Hackett, L Peter; Isbister, Geoffrey K
2012-01-01
AIMS To describe the pharmacokinetics and pharmacodynamics (PKPD) of escitalopram in overdose and its effect on QT prolongation, including the effectiveness of single dose activated charcoal (SDAC). METHODS The data set included 78 escitalopram overdose events (median dose, 140 mg [10–560 mg]). SDAC was administered 1.0 to 2.6 h after 12 overdoses (15%). A fully Bayesian analysis was undertaken in WinBUGS 1.4.3, first for a population pharmacokinetic (PK) analysis followed by a PKPD analysis. The developed PKPD model was used to predict the probability of having an abnormal QT as a surrogate for torsade de pointes. RESULTS A one compartment model with first order input and first-order elimination described the PK data, including uncertainty in dose and a baseline concentration for patients taking escitalopram therapeutically. SDAC reduced the fraction absorbed by 31% and reduced the individual predicted area under the curve adjusted for dose (AUCi/dose). The absolute QT interval was related to the observed heart rate with an estimated individual heart rate correction factor (α = 0.35). The heart rate corrected QT interval (QTc) was linearly dependent on predicted escitalopram concentration [slope = 87 ms/(mg l–1)], using a hypothetical effect-compartment (half-life of effect-delay, 1.0h). Administration of SDAC significantly reduced QT prolongation and was shown to reduce the risk of having an abnormal QT by approximately 35% for escitalopram doses above 200 mg. CONCLUSIONS There was a dose-related lengthening of the QT interval that lagged the increase in drug concentration. SDAC resulted in a moderate reduction in fraction of escitalopram absorbed and reduced the risk of the QT interval being abnormal. PMID:21883384
NASA Astrophysics Data System (ADS)
Chan, J. H.; Richardson, I. S.; Strayer, L. M.; Catchings, R.; McEvilly, A.; Goldman, M.; Criley, C.; Sickler, R. R.
2017-12-01
The Hayward Fault Zone (HFZ) includes the Hayward fault (HF), as well as several named and unnamed subparallel, subsidiary faults to the east, among them the Quaternary-active Chabot Fault (CF), the Miller Creek Fault (MCF), and a heretofore unnamed fault, the Redwood Thrust Fault (RTF). With an ≥M6.0 recurrence interval of 130 y for the HF and the last major earthquake in 1868, the HFZ is a major seismic hazard in the San Francisco Bay Area, exacerbated by the many unknown and potentially active secondary faults of the HFZ. In 2016, researchers from California State University, East Bay, working in concert with the United States Geological Survey, conducted the East Bay Seismic Investigation (EBSI). We deployed 296 RefTek RT125 (Texan) seismographs along a 15-km-long linear seismic profile across the HF, extending from the bay in San Leandro to the hills in Castro Valley. Two-channel seismographs were deployed at 100 m intervals to record P- and S-waves, and additional single-channel seismographs were deployed at 20 m intervals where the seismic line crossed mapped faults. The active-source survey consisted of 16 buried explosive shots located at approximately 1-km intervals along the seismic line. We used the Multichannel Analysis of Surface Waves (MASW) method to develop 2-D shear-wave velocity models across the CF, MCF, and RTF. Preliminary MASW analysis shows areas of anomalously low S-wave velocities, indicating zones of reduced shear modulus, coincident with these three mapped faults; additional velocity anomalies coincide with unmapped faults within the HFZ. Such compliant zones likely correspond to heavily fractured rock surrounding the faults, where the shear modulus is expected to be low compared to the undeformed host rock.
Expansion of Microbial Forensics.
Schmedes, Sarah E; Sajantila, Antti; Budowle, Bruce
2016-08-01
Microbial forensics has been defined as the discipline of applying scientific methods to the analysis of evidence related to bioterrorism, biocrimes, hoaxes, or the accidental release of a biological agent or toxin for attribution purposes. Over the past 15 years, technology, particularly massively parallel sequencing, and bioinformatics advances now allow the characterization of microorganisms for a variety of human forensic applications, such as human identification, body fluid characterization, postmortem interval estimation, and biocrimes involving tracking of infectious agents. Thus, microbial forensics should be more broadly described as the discipline of applying scientific methods to the analysis of microbial evidence in criminal and civil cases for investigative purposes. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
NASA Technical Reports Server (NTRS)
Calkins, D. S.
1998-01-01
When the dependent (or response) variable in an experiment has direction and magnitude, one approach that has been used for statistical analysis involves splitting magnitude and direction and applying univariate statistical techniques to the components. However, such treatment of quantities with direction and magnitude is not justifiable mathematically and can lead to incorrect conclusions about relationships among variables and, as a result, to flawed interpretations. This note discusses a problem with that practice and recommends mathematically correct procedures to be used with dependent variables that have direction and magnitude for 1) computation of mean values, 2) statistical contrasts of and confidence intervals for means, and 3) correlation methods.
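A small example of the recommended practice: average quantities that have magnitude and direction as vector components and convert back, rather than averaging magnitudes and angles separately. The sample values are arbitrary.

    import numpy as np

    magnitude = np.array([2.0, 2.0, 2.0, 2.0])
    direction_deg = np.array([350.0, 10.0, 355.0, 5.0])

    theta = np.deg2rad(direction_deg)
    u, v = magnitude * np.cos(theta), magnitude * np.sin(theta)

    mean_magnitude = np.hypot(u.mean(), v.mean())
    mean_direction = np.rad2deg(np.arctan2(v.mean(), u.mean())) % 360

    # Naively averaging the angles gives 180 degrees here, which points the wrong way.
    print(f"vector mean: {mean_magnitude:.2f} at {mean_direction:.1f} deg; "
          f"naive angle mean: {direction_deg.mean():.1f} deg")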
Correlation between X-ray flux and rotational acceleration in Vela X-1
NASA Technical Reports Server (NTRS)
Deeter, J. E.; Boynton, P. E.; Shibazaki, N.; Hayakawa, S.; Nagase, F.
1989-01-01
The results of a search for correlations between X-ray flux and angular acceleration for the accreting binary pulsar Vela X-1 are presented. Results are based on data obtained with the Hakucho satellite during the interval 1982 to 1984. In undertaking this correlation analysis, it was necessary to modify the usual statistical method to deal with conditions imposed by generally unavoidable satellite observing constraints, most notably a mismatch in sampling between the two variables. The results are suggestive of a correlation between flux and the absolute value of the angular acceleration, at a significance level of 96 percent. The implications of the methods and results for future observations and analysis are discussed.
Volkmar, Fred R.; Bloch, Michael H.
2012-01-01
OBJECTIVE: The goal of this study was to examine the efficacy of serotonin receptor inhibitors (SRIs) for the treatment of repetitive behaviors in autism spectrum disorders (ASD). METHODS: Two reviewers searched PubMed and Clinicaltrials.gov for randomized, double-blind, placebo-controlled trials evaluating the efficacy of SRIs for repetitive behaviors in ASD. Our primary outcome was mean improvement in ratings scales of repetitive behavior. Publication bias was assessed by using a funnel plot, the Egger’s test, and a meta-regression of sample size and effect size. RESULTS: Our search identified 5 published and 5 unpublished but completed trials eligible for meta-analysis. Meta-analysis of 5 published and 1 unpublished trial (which provided data) demonstrated a small but significant effect of SRI for the treatment of repetitive behaviors in ASD (standardized mean difference: 0.22 [95% confidence interval: 0.07–0.37], z score = 2.87, P < .005). There was significant evidence of publication bias in all analyses. When Duval and Tweedie's trim and fill method was used to adjust for the effect of publication bias, there was no longer a significant benefit of SRI for the treatment of repetitive behaviors in ASD (standardized mean difference: 0.12 [95% confidence interval: –0.02 to 0.27]). Secondary analyses demonstrated no significant effect of type of medication, patient age, method of analysis, trial design, or trial duration on reported SRI efficacy. CONCLUSIONS: Meta-analysis of the published literature suggests a small but significant effect of SRI in the treatment of repetitive behaviors in ASD. This effect may be attributable to selective publication of trial results. Without timely, transparent, and complete disclosure of trial results, it remains difficult to determine the efficacy of available medications. PMID:22529279
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaffer, Richard, E-mail: rickyshaffer@yahoo.co.u; Department of Clinical Oncology, Imperial College London National Health Service Trust, London; Pickles, Tom
Purpose: Prior studies have derived low values of the alpha-beta ratio (α/β) for prostate cancer of approximately 1-2 Gy. These studies used poorly matched groups, differing definitions of biochemical failure, and insufficient follow-up. Methods and Materials: National Comprehensive Cancer Network low- or low-intermediate risk prostate cancer patients, treated with external beam radiotherapy or permanent prostate brachytherapy, were matched for prostate-specific antigen, Gleason score, T-stage, percentage of positive cores, androgen deprivation therapy, and era, yielding 118 patient pairs. The Phoenix definition of biochemical failure was used. The best-fitting value for α/β was found for up to 90-month follow-up using maximum likelihood analysis, and the 95% confidence interval using the profile likelihood method. Linear quadratic formalism was applied with the radiobiological parameters of relative biological effectiveness = 1.0, potential doubling time = 45 days, and repair half-time = 1 hour. Bootstrap analysis was performed to estimate uncertainties in outcomes, and hence in α/β. Sensitivity analysis was performed by varying the values of the radiobiological parameters to extreme values. Results: The value of α/β best fitting the outcomes data was >30 Gy, with lower 95% confidence limit of 5.2 Gy. This was confirmed on bootstrap analysis. Varying parameters to extreme values still yielded best-fit α/β of >30 Gy, although the lower 95% confidence interval limit was reduced to 0.6 Gy. Conclusions: Using carefully matched groups, long follow-up, the Phoenix definition of biochemical failure, and well-established statistical methods, the best estimate of α/β for low and low-tier intermediate-risk prostate cancer is likely to be higher than that of normal tissues, although a low value cannot be excluded.
Issues in the analysis of oligonucleotide tiling microarrays for transcript mapping
NASA Technical Reports Server (NTRS)
Royce, Thomas E.; Rozowsky, Joel S.; Bertone, Paul; Samanta, Manoj; Stolc, Viktor; Weissman, Sherman; Snyder, Michael; Gerstein, Mark
2005-01-01
Traditional microarrays use probes complementary to known genes to quantitate the differential gene expression between two or more conditions. Genomic tiling microarray experiments differ in that probes that span a genomic region at regular intervals are used to detect the presence or absence of transcription. This difference means the same sets of biases and the methods for addressing them are unlikely to be relevant to both types of experiment. We introduce the informatics challenges arising in the analysis of tiling microarray experiments as open problems to the scientific community and present initial approaches for the analysis of this nascent technology.
Binary Interval Search: a scalable algorithm for counting interval intersections.
Layer, Ryan M; Skadron, Kevin; Robins, Gabriel; Hall, Ira M; Quinlan, Aaron R
2013-01-01
The comparison of diverse genomic datasets is fundamental to understand genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. https://github.com/arq5x/bits.
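The counting identity at the heart of this kind of approach fits in a few lines: the number of database intervals intersecting a query equals the database size minus those starting after the query ends minus those ending before the query starts, and both counts come from binary searches on sorted starts and sorted ends. The sketch below is a schematic of that idea in Python, not the authors' implementation.

    import numpy as np

    def count_intersections(db_starts, db_ends, q_start, q_end):
        # Count database intervals overlapping the closed interval [q_start, q_end].
        # (For many queries, the two sorts would be done once, up front.)
        starts = np.sort(db_starts)
        ends = np.sort(db_ends)
        n = starts.size
        start_after = n - np.searchsorted(starts, q_end, side="right")   # start > q_end
        end_before = np.searchsorted(ends, q_start, side="left")         # end < q_start
        return n - start_after - end_before

    rng = np.random.default_rng(8)
    s = rng.integers(0, 1_000_000, 100_000)
    e = s + rng.integers(1, 5_000, 100_000)
    print(count_intersections(s, e, 400_000, 410_000))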
Low-level lead exposure and the IQ of children. A meta-analysis of modern studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Needleman, H.L.; Gatsonis, C.A.
1990-02-02
We identified 24 modern studies of childhood exposures to lead in relation to IQ. From this population, 12 that employed multiple regression analysis with IQ as the dependent variable and lead as the main effect and that controlled for nonlead covariates were selected for a quantitative, integrated review or meta-analysis. The studies were grouped according to type of tissue analyzed for lead. There were 7 blood and 5 tooth lead studies. Within each group, we obtained joint P values by two different methods and average effect sizes as measured by the partial correlation coefficients. We also investigated the sensitivity of the results to any single study. The sample sizes ranged from 75 to 724. The sign of the regression coefficient for lead was negative in 11 of 12 studies. The negative partial r's for lead ranged from -.27 to -.003. The power to find an effect was limited, below 0.6 in 7 of 12 studies. The joint P values for the blood lead studies were less than .0001 for both methods of analysis (95% confidence interval for group partial r, -.15 ± .05), while for the tooth lead studies they were .0005 and .004, respectively (95% confidence interval for group partial r, -.08 ± .05). The hypothesis that lead impairs children's IQ at low dose is strongly supported by this quantitative review. The effect is robust to the impact of any single study.
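For reference, two standard ways of forming a joint P value across studies are Fisher's chi-squared method and Stouffer's weighted Z; the abstract does not say which two methods were used, and the one-sided P values and sample sizes below are invented.

    import numpy as np
    from scipy import stats

    p = np.array([0.02, 0.10, 0.001, 0.04, 0.20])   # hypothetical one-sided P values
    n = np.array([120, 250, 500, 90, 300])          # hypothetical sample sizes

    # Fisher: -2 * sum(log p) follows a chi-squared distribution with 2k df.
    fisher_stat = -2 * np.sum(np.log(p))
    fisher_p = stats.chi2.sf(fisher_stat, 2 * p.size)

    # Stouffer, weighted by sqrt(n): Z = sum(w_i * z_i) / sqrt(sum(w_i^2)).
    z = stats.norm.isf(p)
    w = np.sqrt(n)
    stouffer_p = stats.norm.sf(np.sum(w * z) / np.sqrt(np.sum(w ** 2)))

    print(f"Fisher joint P = {fisher_p:.2e}, Stouffer joint P = {stouffer_p:.2e}")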
The examinations of microorganisms by correlation optics method
NASA Astrophysics Data System (ADS)
Bilyi, Olexander I.
2004-06-01
This report describes correlation-optics methods based on the analysis of intensity fluctuations of quasielastic light scattering by microorganisms; the form of the correlation function yields information about the size of the dispersed particles. The principle of a new optical verification method is described, in which the intensity of indirect illumination is measured by static spectroscopy and the observed data are processed by correlation spectroscopy. This measurement mode allows the size distribution of microorganisms to be determined over the interval 0.1-10.0 microns. Results are presented for examinations of cultures of Pseudomonas aeruginosa, Escherichia coli, Micrococcus lutteus, Lamprocystis and Triocapsa bacteriachlorofil.
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Kβ*, where K is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Kβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Kβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Kβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
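The key numerical step, fitting with a full (non-diagonal) weight matrix, can be sketched generically: when the weight matrix is the inverse of the combined model-error and measurement-error covariance, the objective r'Wr is minimized by whitening the residuals with a Cholesky factor and reusing an ordinary least-squares routine. Everything below is a generic toy, not the authors' groundwater model.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(9)
    t = np.linspace(0, 10, 40)

    def model(theta, t):
        return theta[0] * np.exp(-theta[1] * t)    # toy exponential-decay model

    # Hypothetical correlated error covariance (model error plus measurement noise).
    cov = 0.05 * np.exp(-np.abs(t[:, None] - t[None, :]) / 2.0) + 0.01 * np.eye(t.size)
    y = model([2.0, 0.3], t) + rng.multivariate_normal(np.zeros(t.size), cov)

    W = np.linalg.inv(cov)        # weight matrix = inverse covariance
    L = np.linalg.cholesky(W)     # W = L L^T, so r^T W r = ||L^T r||^2

    def whitened_residuals(theta):
        return L.T @ (y - model(theta, t))

    fit = least_squares(whitened_residuals, x0=[1.0, 0.1])
    print("estimated parameters:", fit.x)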
Suurmond, Robert; van Rhee, Henk; Hak, Tony
2017-12-01
We present a new tool for meta-analysis, Meta-Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta-analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta-Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta-analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp-Hartung adjustment of the DerSimonian-Laird estimator. However, more advanced meta-analysis methods such as meta-analytical structural equation modelling and meta-regression with multiple covariates are not available. In summary, Meta-Essentials may prove a valuable resource for meta-analysts, including researchers, teachers, and students. © 2017 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
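A bare-bones version of the pooling that the tool performs by default: a DerSimonian-Laird estimate of the between-study variance, random-effects weights, and a Knapp-Hartung-adjusted confidence interval. The effect sizes and variances below are made up.

    import numpy as np
    from scipy import stats

    yi = np.array([0.30, 0.10, 0.45, 0.25, 0.05])   # hypothetical study effect sizes
    vi = np.array([0.02, 0.03, 0.05, 0.01, 0.04])   # their within-study variances
    k = yi.size

    # DerSimonian-Laird estimate of tau^2
    w = 1.0 / vi
    y_fixed = np.sum(w * yi) / np.sum(w)
    Q = np.sum(w * (yi - y_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)

    # Random-effects pooled estimate
    w_re = 1.0 / (vi + tau2)
    mu = np.sum(w_re * yi) / np.sum(w_re)

    # Knapp-Hartung variance and t-based confidence interval
    q_kh = np.sum(w_re * (yi - mu) ** 2) / (k - 1)
    se_kh = np.sqrt(q_kh / np.sum(w_re))
    t_crit = stats.t.ppf(0.975, k - 1)
    print(f"pooled effect = {mu:.3f}, "
          f"95% CI [{mu - t_crit * se_kh:.3f}, {mu + t_crit * se_kh:.3f}]")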
Resource assessment in Western Australia using a geographic information system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, A.
1991-03-01
Three study areas in Western Australia covering from 77,000 to 425,000 mi² were examined for oil and gas potential using a geographic information system (GIS). A data base of source rock thickness, source richness, maturity, and expulsion efficiency was created for each interval. The GIS (Arc/Info) was used to create, manage, and analyze data for each interval in each study area. Source rock thickness and source richness data were added to the data base from digitized data. Maturity information was generated with Arc/Info by combining geochemical and depth-to-structure data. Expulsion efficiency data was created by a system-level Arc/Info program. After the data base for each interval was built, the GIS was used to analyze the geologic data. The analysis consisted of converting each data layer into a lattice (grid) and using the lattice operations in Arc/Info (addition, multiplication, division, and subtraction) to combine the data layers. Additional techniques for combining and selecting data were developed using Arc/Info system-level programs. The procedure for performing the analyses was written as macros in Arc/Info's macro programming language (AML). The results of the analysis were estimates of oil and gas volumes for each interval. The resultant volumes were produced in tabular form for reports and cartographic form for presentation. The geographic information system provided several clear advantages over traditional methods of resource assessment, including simplified management, updating, and editing of geologic data.
Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.
Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi
2012-11-08
A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
Fetal electrocardiogram (ECG) for fetal monitoring during labour.
Neilson, James P
2015-12-21
Hypoxaemia during labour can alter the shape of the fetal electrocardiogram (ECG) waveform, notably the relation of the PR to RR intervals, and elevation or depression of the ST segment. Technical systems have therefore been developed to monitor the fetal ECG during labour as an adjunct to continuous electronic fetal heart rate monitoring with the aim of improving fetal outcome and minimising unnecessary obstetric interference. To compare the effects of analysis of fetal ECG waveforms during labour with alternative methods of fetal monitoring. The Cochrane Pregnancy and Childbirth Group's Trials Register (latest search 23 September 2015) and reference lists of retrieved studies. Randomised trials comparing fetal ECG waveform analysis with alternative methods of fetal monitoring during labour. One review author independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. One review author assessed the quality of the evidence using the GRADE approach. Seven trials (27,403 women) were included: six trials of ST waveform analysis (26,446 women) and one trial of PR interval analysis (957 women). The trials were generally at low risk of bias for most domains and the quality of evidence for ST waveform analysis trials was graded moderate to high. In comparison to continuous electronic fetal heart rate monitoring alone, the use of adjunctive ST waveform analysis made no obvious difference to primary outcomes: births by caesarean section (risk ratio (RR) 1.02, 95% confidence interval (CI) 0.96 to 1.08; six trials, 26,446 women; high quality evidence); the number of babies with severe metabolic acidosis at birth (cord arterial pH less than 7.05 and base deficit greater than 12 mmol/L) (average RR 0.72, 95% CI 0.43 to 1.20; six trials, 25,682 babies; moderate quality evidence); or babies with neonatal encephalopathy (RR 0.61, 95% CI 0.30 to 1.22; six trials, 26,410 babies; high quality evidence). There were, however, on average fewer fetal scalp samples taken during labour (average RR 0.61, 95% CI 0.41 to 0.91; four trials, 9671 babies; high quality evidence) although the findings were heterogeneous and there were no data from the largest trial (from the USA). There were marginally fewer operative vaginal births (RR 0.92, 95% CI 0.86 to 0.99; six trials, 26,446 women); but no obvious difference in the number of babies with low Apgar scores at five minutes or babies requiring neonatal intubation, or babies requiring admission to the special care unit (RR 0.96, 95% CI 0.89 to 1.04, six trials, 26,410 babies; high quality evidence). There was little evidence that monitoring by PR interval analysis conveyed any benefit of any sort. The modest benefits of fewer fetal scalp samplings during labour (in settings in which this procedure is performed) and fewer instrumental vaginal births have to be considered against the disadvantages of needing to use an internal scalp electrode, after membrane rupture, for ECG waveform recordings. We found little strong evidence that ST waveform analysis had an effect on the primary outcome measures in this systematic review. There was a lack of evidence showing that PR interval analysis improved any outcomes; and a larger future trial may possibly demonstrate beneficial effects. There is little information about the value of fetal ECG waveform monitoring in preterm fetuses in labour. Information about long-term development of the babies included in the trials would be valuable.
Relating interesting quantitative time series patterns with text events and text features
NASA Astrophysics Data System (ADS)
Wanner, Franz; Schreck, Tobias; Jentner, Wolfgang; Sharalieva, Lyubka; Keim, Daniel A.
2013-12-01
In many application areas, the key to successful data analysis is the integrated analysis of heterogeneous data. One example is the financial domain, where time-dependent and highly frequent quantitative data (e.g., trading volume and price information) and textual data (e.g., economic and political news reports) need to be considered jointly. Data analysis tools need to support an integrated analysis, which allows studying the relationships between textual news documents and quantitative properties of the stock market price series. In this paper, we describe a workflow and tool that allows a flexible formation of hypotheses about text features and their combinations, which reflect quantitative phenomena observed in stock data. To support such an analysis, we combine the analysis steps of frequent quantitative and text-oriented data using an existing a-priori method. First, based on heuristics we extract interesting intervals and patterns in large time series data. The visual analysis supports the analyst in exploring parameter combinations and their results. The identified time series patterns are then input for the second analysis step, in which all identified intervals of interest are analyzed for frequent patterns co-occurring with financial news. An a-priori method supports the discovery of such sequential temporal patterns. Then, various text features like the degree of sentence nesting, noun phrase complexity, the vocabulary richness, etc. are extracted from the news to obtain meta patterns. Meta patterns are defined by a specific combination of text features which significantly differ from the text features of the remaining news data. Our approach combines a portfolio of visualization and analysis techniques, including time-, cluster- and sequence visualization and analysis functionality. We provide two case studies, showing the effectiveness of our combined quantitative and textual analysis work flow. The workflow can also be generalized to other application domains such as data analysis of smart grids, cyber physical systems or the security of critical infrastructure, where the data consists of a combination of quantitative and textual time series data.
Riond, B; Steffen, F; Schmied, O; Hofmann-Lehmann, R; Lutz, H
2014-03-01
In veterinary clinical laboratories, qualitative tests for total protein measurement in canine cerebrospinal fluid (CSF) have been replaced by quantitative methods, which can be divided into dye-binding assays and turbidimetric methods. There is a lack of validation data and reference intervals (RIs) for these assays. The aim of the present study was to assess agreement between the turbidimetric benzethonium chloride method and 2 dye-binding methods (Pyrogallol Red-Molybdate method [PRM], Coomassie Brilliant Blue [CBB] technique) for measurement of total protein concentration in canine CSF. Furthermore, RIs were determined for all 3 methods using an indirect a posteriori method. For assay comparison, a total of 118 canine CSF specimens were analyzed. For RI calculation, clinical records of 401 canine patients with normal CSF analysis were studied and classified according to their final diagnosis into pathologic and nonpathologic values. The turbidimetric assay showed excellent agreement with the PRM assay (mean bias 0.003 g/L [-0.26-0.27]). The CBB method generally showed higher total protein values than the turbidimetric assay and the PRM assay (mean bias -0.14 g/L for turbidimetric and PRM assay). From 90 of 401 canine patients, nonparametric reference intervals (2.5%, 97.5% quantile) were calculated (turbidimetric assay and PRM method: 0.08-0.35 g/L (90% CI: 0.07-0.08/0.33-0.39); CBB method: 0.17-0.55 g/L (90% CI: 0.16-0.18/0.52-0.61)). Total protein concentration in canine CSF specimens remained stable for up to 6 months of storage at -80°C. Due to variations among methods, RIs for total protein concentration in canine CSF have to be calculated for each method. The a posteriori method of RI calculation described here should encourage other veterinary laboratories to establish RIs that are laboratory-specific. ©2014 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.
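For illustration, a minimal Python sketch of the nonparametric reference-interval calculation described above (2.5th and 97.5th percentiles with bootstrap confidence limits); the synthetic values are a stand-in for the study's canine CSF data:

import numpy as np

rng = np.random.default_rng(0)

def reference_interval(values, n_boot=2000, limit_ci=0.90):
    """Nonparametric reference interval (2.5th-97.5th percentiles) with
    bootstrap confidence intervals around each limit."""
    values = np.asarray(values, dtype=float)
    lower, upper = np.percentile(values, [2.5, 97.5])
    boots = np.array([np.percentile(rng.choice(values, size=values.size, replace=True), [2.5, 97.5])
                      for _ in range(n_boot)])
    alpha = (1.0 - limit_ci) / 2.0
    lo_ci = np.quantile(boots[:, 0], [alpha, 1 - alpha])
    up_ci = np.quantile(boots[:, 1], [alpha, 1 - alpha])
    return (lower, upper), lo_ci, up_ci

# Illustrative synthetic CSF total-protein values (g/L), not the study data.
csf_tp = rng.gamma(shape=6.0, scale=0.03, size=90)
(lower, upper), lo_ci, up_ci = reference_interval(csf_tp)
print(f"RI: {lower:.2f}-{upper:.2f} g/L; 90% CI of limits: {lo_ci.round(2)}, {up_ci.round(2)}")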
On Some Confidence Intervals for Estimating the Mean of a Skewed Population
ERIC Educational Resources Information Center
Shi, W.; Kibria, B. M. Golam
2007-01-01
A number of methods are available in the literature for constructing confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…
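Since the note does not spell out the Median t and Mad t adjustments, the Python sketch below only contrasts the ordinary Student's t interval with a robust variant that substitutes the median and the normal-consistent MAD; the latter is an assumption about the spirit of such adjustments, not the authors' formulas:

import numpy as np
from scipy import stats

def t_interval(x, conf=0.95):
    """Classical Student's t confidence interval for the mean."""
    x = np.asarray(x, dtype=float)
    n = x.size
    tcrit = stats.t.ppf(0.5 + conf / 2, df=n - 1)
    half = tcrit * x.std(ddof=1) / np.sqrt(n)
    return x.mean() - half, x.mean() + half

def mad_t_interval(x, conf=0.95):
    """Robust variant (illustrative assumption): centre on the median and
    scale by the normal-consistent MAD instead of the sample SD."""
    x = np.asarray(x, dtype=float)
    n = x.size
    tcrit = stats.t.ppf(0.5 + conf / 2, df=n - 1)
    mad = 1.4826 * np.median(np.abs(x - np.median(x)))
    half = tcrit * mad / np.sqrt(n)
    return np.median(x) - half, np.median(x) + half

rng = np.random.default_rng(1)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=30)   # a skewed sample
print(t_interval(skewed), mad_t_interval(skewed))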
Baek, Hyun Jae; Shin, JaeWook
2017-08-15
Most of the wrist-worn devices on the market provide a continuous heart rate measurement function using photoplethysmography, but have not yet provided a function to measure the continuous heart rate variability (HRV) using beat-to-beat pulse interval. The reason for such is the difficulty of measuring a continuous pulse interval during movement using a wearable device because of the nature of photoplethysmography, which is susceptible to motion noise. This study investigated the effect of missing heart beat interval data on the HRV analysis in cases where pulse interval cannot be measured because of movement noise. First, we performed simulations by randomly removing data from the RR interval of the electrocardiogram measured from 39 subjects and observed the changes of the relative and normalized errors for the HRV parameters according to the total length of the missing heart beat interval data. Second, we measured the pulse interval from 20 subjects using a wrist-worn device for 24 h and observed the error value for the missing pulse interval data caused by the movement during actual daily life. The experimental results showed that mean NN and RMSSD were the most robust for the missing heart beat interval data among all the parameters in the time and frequency domains. Most of the pulse interval data could not be obtained during daily life. In other words, the sample number was too small for spectral analysis because of the long missing duration. Therefore, the frequency domain parameters often could not be calculated, except for the sleep state with little motion. The errors of the HRV parameters were proportional to the missing data duration in the presence of missing heart beat interval data. Based on the results of this study, the maximum missing duration for acceptable errors for each parameter is recommended for use when the HRV analysis is performed on a wrist-worn device.
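A minimal Python sketch of the kind of simulation described, randomly deleting beats from a synthetic RR series and tracking the relative error of mean NN and RMSSD; the data and missing fractions are illustrative, not the study's:

import numpy as np

def mean_nn(rr_ms):
    return float(np.mean(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive differences of normal-to-normal intervals."""
    d = np.diff(rr_ms)
    return float(np.sqrt(np.mean(d ** 2)))

rng = np.random.default_rng(0)
rr = rng.normal(800.0, 50.0, size=5000)          # synthetic RR series (ms), stand-in for ECG data

for missing_frac in (0.0, 0.1, 0.3, 0.5):
    keep = rng.random(rr.size) >= missing_frac   # randomly delete beats, as in the simulation study
    sub = rr[keep]                               # note: deletion makes formerly non-adjacent beats adjacent,
                                                 # which is part of the error being quantified
    rel_err = lambda est, ref: 100.0 * abs(est - ref) / ref
    print(missing_frac,
          round(rel_err(mean_nn(sub), mean_nn(rr)), 3),
          round(rel_err(rmssd(sub), rmssd(rr)), 3))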
NASA Astrophysics Data System (ADS)
Schuckers, Michael E.; Hawley, Anne; Livingstone, Katie; Mramba, Nona
2004-08-01
Confidence intervals are an important way to assess and estimate a parameter. In the case of biometric identification devices, several approaches to confidence intervals for an error rate have been proposed. Here we evaluate six of these methods. To complete this evaluation, we simulate data from a wide variety of parameter values. These data are simulated via a correlated binary distribution. We then determine how well these methods do at what they say they do: capturing the parameter inside the confidence interval. In addition, the average widths of the various confidence intervals are recorded for each set of parameters. The complete results of this simulation are presented graphically for easy comparison. We conclude by making a recommendation regarding which method performs best.
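The six evaluated methods are not listed in the abstract; the Python sketch below only illustrates the general simulation design with two common proportion intervals (Wald and Wilson) applied to correlated binary data, here generated from a beta-binomial model as an assumed way of inducing the correlation:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def wald_ci(k, n, conf=0.95):
    z = stats.norm.ppf(0.5 + conf / 2)
    p = k / n
    h = z * np.sqrt(p * (1 - p) / n)
    return p - h, p + h

def wilson_ci(k, n, conf=0.95):
    z = stats.norm.ppf(0.5 + conf / 2)
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    h = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - h, centre + h

def simulate(p=0.05, rho=0.1, subjects=100, attempts=10, reps=2000):
    """Correlated binary matches: each subject draws an individual error rate
    from a Beta distribution (intra-subject correlation rho), then Bernoulli trials."""
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    cover = {"wald": 0, "wilson": 0}
    width = {"wald": 0.0, "wilson": 0.0}
    n = subjects * attempts
    for _ in range(reps):
        pi = rng.beta(a, b, size=subjects)
        k = rng.binomial(attempts, pi).sum()
        for name, ci in (("wald", wald_ci(k, n)), ("wilson", wilson_ci(k, n))):
            cover[name] += ci[0] <= p <= ci[1]
            width[name] += ci[1] - ci[0]
    return {m: (cover[m] / reps, width[m] / reps) for m in cover}

print(simulate())   # (empirical coverage, mean width) per method; both under-cover when correlation is ignored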
Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.
Kim, Yuneung; Lim, Johan; Park, DoHwan
2015-11-01
In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do this, we propose a score-based modification of the Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
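A hedged Python sketch of a Kendall's tau built from expected concordant and discordant pair counts; treating each censored value as uniform on its observation interval and taking expectations by Monte Carlo is an assumption made here for illustration, not the authors' score-based construction:

import numpy as np

rng = np.random.default_rng(0)

def interval_kendall_tau(L1, R1, L2, R2, n_mc=200):
    """Kendall's tau from expected concordant/discordant pair counts, with each
    interval-censored value treated as uniform on its interval (illustrative assumption)."""
    n = len(L1)
    num = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            x_i = rng.uniform(L1[i], R1[i], n_mc); x_j = rng.uniform(L1[j], R1[j], n_mc)
            y_i = rng.uniform(L2[i], R2[i], n_mc); y_j = rng.uniform(L2[j], R2[j], n_mc)
            sign = np.sign((x_i - x_j) * (y_i - y_j))
            num += sign.mean()                    # E[concordant] - E[discordant] for this pair
    return 2.0 * num / (n * (n - 1))

# Synthetic dependent bivariate data, censored into intervals of width 1.
n = 60
x = rng.exponential(2.0, n); y = 0.7 * x + rng.exponential(1.0, n)
Lx, Rx = np.floor(x), np.floor(x) + 1
Ly, Ry = np.floor(y), np.floor(y) + 1
print(round(interval_kendall_tau(Lx, Rx, Ly, Ry), 3))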
Moorin, Rachael E; Holman, C D'Arcy J
2005-01-01
Background The aim of the study was to identify any distinct behavioural patterns in switching between public and privately insured payment classifications between successive episodes of inpatient care within Western Australia between 1980 and 2001 using a novel 'couplet' method of analysing longitudinal data. Methods The WA Data Linkage System was used to extract all hospital morbidity records from 1980 to 2001. For each individual, episodes of hospitalisation were paired into couplets, which were classified according to the sequential combination of public and privately insured episodes. Behavioural patterns were analysed using the mean intra-couplet interval and proportion of discordant couplets in each year. Results Discordant couplets were consistently associated with the longest intra-couplet intervals (ratio to the average annual mean interval being 1.35), while the shortest intra-couplet intervals were associated with public concordant couplets (0.5). Overall, privately insured patients were more likely to switch payment classification at their next admission compared with public patients (the average rate of loss across all age groups being 0.55% and 2.16% respectively). The rate of loss from the privately insured payment classification was inversely associated with time between episodes (2.49% for intervals of 0 to 13 years and 0.83% for intervals of 14 to 21 years). In all age groups, the average rate of loss from the privately insured payment classification was greater between 1981 and 1990 compared with that between 1991 and 2001 (3.45% and 3.10% per year respectively). Conclusion A small but statistically significant reduction in rate of switching away from PHI over the latter period of observation indicated that health care policies encouraging uptake of PHI implemented in the 1990s by the federal government had some of their intended impact on behaviour. PMID:15978139
NASA Astrophysics Data System (ADS)
Fu, Haiyan; Yin, Qiaobo; Xu, Lu; Wang, Weizheng; Chen, Feng; Yang, Tianming
2017-07-01
Geographical origin and authenticity (absence of fraud) are two essential aspects of food quality. In this work, a comprehensive quality evaluation method based on FT-NIR spectroscopy and chemometrics was proposed to address the geographical origins and authentication of Chinese Ganoderma lucidum (GL). Classification for 25 groups of GL samples (7 common species from 15 producing areas) was performed using near-infrared spectroscopy and interval-combination One-Versus-One least squares support vector machine (IC-OVO-LS-SVM). Untargeted analysis of 4 adulterants of cheaper mushrooms was performed by one-class partial least squares (OCPLS) modeling for each of the 7 GL species. After outlier diagnosis and comparing the influences of different preprocessing methods and spectral intervals on classification, IC-OVO-LS-SVM with standard normal variate (SNV) spectra obtained a total classification accuracy of 0.9317, and an average sensitivity and specificity of 0.9306 and 0.9971, respectively. With SNV or second-order derivative (D2) spectra, OCPLS could detect adulterant doping levels of 2% or more for 5 of the 7 GL species and of 5% or more for the other 2 GL species. This study demonstrates the feasibility of using new chemometrics and NIR spectroscopy for fine classification of GL geographical origins and species as well as for untargeted analysis of multiple adulterants.
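The IC-OVO-LS-SVM classifier itself is not reproduced here; the Python sketch below only illustrates the SNV pretreatment followed by a generic one-versus-one SVM on synthetic spectra, as a simplified stand-in for the paper's pipeline:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# Synthetic stand-in for NIR spectra of several origin classes (not the GL data).
rng = np.random.default_rng(0)
n_per_class, n_wavelengths, n_classes = 40, 200, 5
X, y = [], []
for c in range(n_classes):
    base = np.sin(np.linspace(0, 3 + c, n_wavelengths))       # class-specific spectral shape
    X.append(base + 0.1 * rng.standard_normal((n_per_class, n_wavelengths)))
    y.append(np.full(n_per_class, c))
X, y = snv(np.vstack(X)), np.concatenate(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(Xtr, ytr)   # one-versus-one multiclass SVM
print("total classification accuracy:", round(clf.score(Xte, yte), 3))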
Method of analysis of local neuronal circuits in the vertebrate central nervous system.
Reinis, S; Weiss, D S; McGaraughty, S; Tsoukatos, J
1992-06-01
Although a considerable amount of knowledge has been accumulated about the activity of individual nerve cells in the brain, little is known about their mutual interactions at the local level. The method presented in this paper allows the reconstruction of functional relations within a group of neurons as recorded by a single microelectrode. Data are sampled at 10 or 13 kHz. Prominent spikes produced by one or more single cells are selected and sorted by K-means cluster analysis. The activities of single cells are then related to the background firing of neurons in their vicinity. Auto-correlograms of the leading cells, auto-correlograms of the background cells (mass correlograms) and cross-correlograms between these two levels of firing are computed and evaluated. The statistical probability of mutual interactions is determined, and the statistically significant, most common interspike intervals are stored and attributed to real pairs of spikes in the original record. Selected pairs of spikes, characterized by statistically significant intervals between them, are then assembled into a working model of the system. This method has revealed substantial differences between the information processing in the visual cortex, the inferior colliculus, the rostral ventromedial medulla and the ventrobasal complex of the thalamus. Even short 1-s records of the multiple neuronal activity may provide meaningful and statistically significant results.
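A compact Python sketch of two steps of this pipeline, K-means sorting of spike waveforms and a cross-correlogram between a leading unit and the background spikes, run on synthetic data; the statistical significance testing of intervals described above is omitted:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Suppose spike times (s) and waveform snippets (n_spikes x 32 samples, ~10 kHz)
# have already been extracted from the microelectrode record by threshold crossing.
waveforms = np.vstack([np.outer(np.ones(150), shape) + 0.2 * rng.standard_normal((150, 32))
                       for shape in (np.hanning(32), -np.hanning(32))])
spike_times = np.sort(rng.uniform(0.0, 60.0, size=waveforms.shape[0]))

# 1) Sort the prominent spikes into putative single units by K-means on the waveforms.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(waveforms)

# 2) Cross-correlogram between a "leading" unit and the remaining (background) spikes.
def cross_correlogram(t_ref, t_other, max_lag=0.05, bin_width=0.001):
    lags = np.concatenate([t_other - t for t in t_ref])
    lags = lags[np.abs(lags) <= max_lag]
    edges = np.arange(-max_lag, max_lag + bin_width, bin_width)
    counts, _ = np.histogram(lags, edges)
    return edges[:-1] + bin_width / 2, counts

centres, counts = cross_correlogram(spike_times[labels == 0], spike_times[labels == 1])
print("spike pairs within ±50 ms:", int(counts.sum()))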
Faugeras, Olivier; Touboul, Jonathan; Cessac, Bruno
2008-01-01
We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales. PMID:19255631
Method and apparatus for frequency spectrum analysis
NASA Technical Reports Server (NTRS)
Cole, Steven W. (Inventor)
1992-01-01
A method for frequency spectrum analysis of an unknown signal in real-time is discussed. The method is based upon integration of 1-bit samples of signal voltage amplitude corresponding to sine or cosine phases of a controlled center frequency clock which is changed after each integration interval to sweep the frequency range of interest in steps. Integration of samples during each interval is carried out over a number of cycles of the center frequency clock spanning a number of cycles of an input signal to be analyzed. The invention may be used to detect the frequency of at least two signals simultaneously. A reference signal of known frequency and voltage amplitude is added to the two signals and processed in parallel in the same way, but in a separate channel sampled at the known frequency and phases of the reference signal. The sine and cosine integrals of each channel are squared and summed to obtain relative power measurements in all three channels. From the known voltage amplitude of the reference signal, an absolute voltage measurement for the other two signals is then obtained by multiplying the known voltage of the reference signal by the ratio of the relative power of each of the other two signals to the relative power of the reference signal.
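A simplified Python sketch of the 1-bit quadrature integration sweep; the reference-channel amplitude calibration described at the end is omitted, and the frequencies, amplitudes and sweep grid are illustrative assumptions, not values from the patent:

import numpy as np

def one_bit_quadrature_power(signal, fs, f_centre, n_cycles=400):
    """Average 1-bit samples of the signal against the sine and cosine phases of the
    centre-frequency clock; square and sum the two averages to get relative power."""
    n = min(int(n_cycles * fs / f_centre), signal.size)
    t = np.arange(n) / fs
    bits = np.sign(signal[:n])                                  # 1-bit amplitude samples
    i_avg = np.mean(bits * np.sign(np.cos(2 * np.pi * f_centre * t)))
    q_avg = np.mean(bits * np.sign(np.sin(2 * np.pi * f_centre * t)))
    return i_avg ** 2 + q_avg ** 2

fs = 100_000.0
t = np.arange(int(2.0 * fs)) / fs
unknown = 0.7 * np.sin(2 * np.pi * 1200 * t) + 0.6 * np.sin(2 * np.pi * 3100 * t)

# Step the centre-frequency clock across the range of interest after each integration interval.
freqs = np.arange(500.0, 6000.0, 100.0)
spectrum = np.array([one_bit_quadrature_power(unknown, fs, f) for f in freqs])
top_two = np.sort(freqs[np.argsort(spectrum)[-2:]])
print("two strongest responses near", top_two, "Hz")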
Newgard, Craig D.; Schmicker, Robert H.; Hedges, Jerris R.; Trickett, John P.; Davis, Daniel P.; Bulger, Eileen M.; Aufderheide, Tom P.; Minei, Joseph P.; Hata, J. Steven; Gubler, K. Dean; Brown, Todd B.; Yelle, Jean-Denis; Bardarson, Berit; Nichol, Graham
2010-01-01
Study objective The first hour after the onset of out-of-hospital traumatic injury is referred to as the “golden hour,” yet the relationship between time and outcome remains unclear. We evaluate the association between emergency medical services (EMS) intervals and mortality among trauma patients with field-based physiologic abnormality. Methods This was a secondary analysis of an out-of-hospital, prospective cohort registry of adult (aged ≥15 years) trauma patients transported by 146 EMS agencies to 51 Level I and II trauma hospitals in 10 sites across North America from December 1, 2005, through March 31, 2007. Inclusion criteria were systolic blood pressure less than or equal to 90 mm Hg, respiratory rate less than 10 or greater than 29 breaths/min, Glasgow Coma Scale score less than or equal to 12, or advanced airway intervention. The outcome was inhospital mortality. We evaluated EMS intervals (activation, response, on-scene, transport, and total time) with logistic regression and 2-step instrumental variable models, adjusted for field-based confounders. Results There were 3,656 trauma patients available for analysis, of whom 806 (22.0%) died. In multivariable analyses, there was no significant association between time and mortality for any EMS interval: activation (odds ratio [OR] 1.00; 95% confidence interval [CI] 0.95 to 1.05), response (OR 1.00; 95% CI 0.97 to 1.04), on-scene (OR 1.00; 95% CI 0.99 to 1.01), transport (OR 1.00; 95% CI 0.98 to 1.01), or total EMS time (OR 1.00; 95% CI 0.99 to 1.01). Subgroup and instrumental variable analyses did not qualitatively change these findings. Conclusion In this North American sample, there was no association between EMS intervals and mortality among injured patients with physiologic abnormality in the field. PMID:19783323
NASA Astrophysics Data System (ADS)
Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng
2017-12-01
There is insufficient research relating to offshore wind farm site selection in China. The current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform; and ignoring the value of decision makers' (DMs') common opinion on the criteria information evaluation. Secondly, the difference in DMs' utility function has failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.
Evaluating the efficiency of environmental monitoring programs
Levine, Carrie R.; Yanai, Ruth D.; Lampman, Gregory G.; Burns, Douglas A.; Driscoll, Charles T.; Lawrence, Gregory B.; Lynch, Jason; Schoch, Nina
2014-01-01
Statistical uncertainty analyses can be used to improve the efficiency of environmental monitoring, allowing sampling designs to maximize information gained relative to resources required for data collection and analysis. In this paper, we illustrate four methods of data analysis appropriate to four types of environmental monitoring designs. To analyze a long-term record from a single site, we applied a general linear model to weekly stream chemistry data at Biscuit Brook, NY, to simulate the effects of reducing sampling effort and to evaluate statistical confidence in the detection of change over time. To illustrate a detectable difference analysis, we analyzed a one-time survey of mercury concentrations in loon tissues in lakes in the Adirondack Park, NY, demonstrating the effects of sampling intensity on statistical power and the selection of a resampling interval. To illustrate a bootstrapping method, we analyzed the plot-level sampling intensity of forest inventory at the Hubbard Brook Experimental Forest, NH, to quantify the sampling regime needed to achieve a desired confidence interval. Finally, to analyze time-series data from multiple sites, we assessed the number of lakes and the number of samples per year needed to monitor change over time in Adirondack lake chemistry using a repeated-measures mixed-effects model. Evaluations of time series and synoptic long-term monitoring data can help determine whether sampling should be re-allocated in space or time to optimize the use of financial and human resources.
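A short Python sketch of the bootstrapping step, estimating how the width of a confidence interval on a plot-level mean shrinks with sampling intensity; the synthetic biomass values are only a stand-in for the Hubbard Brook inventory data:

import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci_width(plot_values, n_plots, n_boot=2000, conf=0.95):
    """Bootstrap the mean of a forest-inventory variable using n_plots plots
    per resample and return the width of the resulting confidence interval."""
    means = [rng.choice(plot_values, size=n_plots, replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.quantile(means, [(1 - conf) / 2, (1 + conf) / 2])
    return hi - lo

# Synthetic plot-level biomass values (Mg/ha) standing in for inventory data.
plots = rng.gamma(shape=4.0, scale=50.0, size=400)

for n in (25, 50, 100, 200, 400):
    print(n, "plots -> 95% CI width ≈", round(bootstrap_ci_width(plots, n), 1), "Mg/ha")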
NASA Astrophysics Data System (ADS)
Bo, Zhang; Li, Jin-Ling; Wang, Guan-Gli
2002-01-01
We checked the dependence of the estimation of parameters on the choice of piecewise interval in the continuous piecewise linear modeling of the residual clock and atmosphere effects by single analysis of 27 VLBI experiments involving Shanghai station (Seshan 25m). The following are tentatively shown: (1) Different choices of the piecewise interval lead to differences in the estimation of station coordinates and in the weighted root mean squares (wrms) of the delay residuals, which can be of the order of centimeters or dozens of picoseconds respectively. So the choice of piecewise interval should not be arbitrary. (2) The piecewise interval should not be too long, otherwise the short-term variations in the residual clock and atmospheric effects cannot be properly modeled. On the other hand, in order to maintain enough degrees of freedom in parameter estimation, the interval cannot be too short, otherwise the normal equation may become nearly or even singular and the noise cannot be constrained well. Therefore the choice of the interval should be within some reasonable range. (3) Since the conditions of clock and atmosphere are different from experiment to experiment and from station to station, the reasonable range of the piecewise interval should be tested and chosen separately for each experiment as well as for each station by real data analysis. This is really arduous work in routine data analysis. (4) Generally speaking, with the default interval for clock as 60 min, the reasonable range of piecewise interval for residual atmospheric effect modeling is between 10 min and 40 min, while with the default interval for atmosphere as 20 min, that for residual clock behavior is between 20 min and 100 min.
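A minimal Python sketch of a continuous piecewise-linear least-squares fit with a chosen knot spacing, showing how the wrms of the residuals depends on the piecewise interval; the synthetic delay series and interval values are illustrative only, not VLBI data:

import numpy as np

def piecewise_linear_fit(t, y, interval):
    """Least-squares continuous piecewise-linear fit with knots every `interval`
    time units, built from a truncated-line (hinge) basis."""
    knots = np.arange(t.min(), t.max(), interval)[1:]
    X = np.column_stack([np.ones_like(t), t] + [np.maximum(t - k, 0.0) for k in knots])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, np.sqrt(np.mean(resid ** 2))

# Synthetic "residual atmosphere" delay series over a 24 h session (minutes, picoseconds).
rng = np.random.default_rng(0)
t = np.linspace(0, 1440, 600)
y = 30 * np.sin(2 * np.pi * t / 700) + 10 * np.sin(2 * np.pi * t / 150) + 5 * rng.standard_normal(t.size)

for interval in (10, 20, 40, 120, 480):     # candidate piecewise intervals in minutes
    _, wrms = piecewise_linear_fit(t, y, interval)
    print(f"interval {interval:4d} min -> wrms {wrms:5.1f} ps")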
Advanced Interval Management: A Benefit Analysis
NASA Technical Reports Server (NTRS)
Timer, Sebastian; Peters, Mark
2016-01-01
This document is the final report for the NASA Langley Research Center (LaRC)- sponsored task order 'Possible Benefits for Advanced Interval Management Operations.' Under this research project, Architecture Technology Corporation performed an analysis to determine the maximum potential benefit to be gained if specific Advanced Interval Management (AIM) operations were implemented in the National Airspace System (NAS). The motivation for this research is to guide NASA decision-making on which Interval Management (IM) applications offer the most potential benefit and warrant further research.
Rasch analysis of the Edmonton Symptom Assessment System and research implications
Cheifetz, O.; Packham, T.L.; MacDermid, J.C.
2014-01-01
Background Reliable and valid assessment of the disease burden across all forms of cancer is critical to the evaluation of treatment effectiveness and patient progress. The Edmonton Symptom Assessment System (esas) is used for routine evaluation of people attending for cancer care. In the present study, we used Rasch analysis to explore the measurement properties of the esas and to determine the effect of using Rasch-proposed interval-level esas scoring compared with traditional scoring when evaluating the effects of an exercise program for cancer survivors. Methods Polytomous Rasch analysis (Andrich’s rating-scale model) was applied to data from 26,645 esas questionnaires completed at the Juravinski Cancer Centre. The fit of the esas to the polytomous Rasch model was investigated, including evaluations of differential item functioning for sex, age, and disease group. The research implication was investigated by comparing the results of an observational research study previously analysed using a traditional approach with the results obtained by Rasch-proposed interval-level esas scoring. Results The Rasch reliability index was 0.73, falling short of the desired 0.80–0.90 level. However, the esas was found to fit the Rasch model, including the criteria for uni-dimensional data. The analysis suggests that the current esas scoring system of 0–10 could be collapsed to a 6-point scale. Use of the Rasch-proposed interval-level scoring yielded results that were different from those calculated using summarized ordinal-level esas scores. Differential item functioning was not found for sex, age, or diagnosis groups. Conclusions The esas is a moderately reliable uni-dimensional measure of cancer disease burden and can provide interval-level scaling with Rasch-based scoring. Further, our study indicates that, compared with the traditional scoring metric, Rasch-based scoring could result in substantive changes to conclusions. PMID:24764703
Two-condition within-participant statistical mediation analysis: A path-analytic framework.
Montoya, Amanda K; Hayes, Andrew F
2017-03-01
Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in both of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths-the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
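A hedged Python sketch of the path-analytic indirect effect for a two-condition within-participant design with a percentile bootstrap confidence interval; the exact model specification used here (difference scores with the centred average of the mediator as a covariate) is an assumption about the approach, not a reproduction of the authors' macros:

import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(m1, m2, y1, y2):
    """a-path = mean difference in M; b-path = slope of the Y-difference on the
    M-difference, controlling for the centred average of M (assumed specification)."""
    dm, dy = m2 - m1, y2 - y1
    msum_c = (m1 + m2) / 2 - np.mean((m1 + m2) / 2)
    X = np.column_stack([np.ones_like(dm), dm, msum_c])
    coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
    a, b = dm.mean(), coef[1]
    return a * b

def bootstrap_ci(m1, m2, y1, y2, n_boot=5000, conf=0.95):
    n = len(m1)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample participants with replacement
        boot.append(indirect_effect(m1[idx], m2[idx], y1[idx], y2[idx]))
    return np.quantile(boot, [(1 - conf) / 2, (1 + conf) / 2])

# Synthetic within-participant data: the condition affects M, which affects Y.
n = 80
m1 = rng.normal(0, 1, n); m2 = m1 + 0.8 + rng.normal(0, 1, n)
y1 = 0.5 * m1 + rng.normal(0, 1, n); y2 = 0.5 * m2 + rng.normal(0, 1, n)
print("indirect =", round(indirect_effect(m1, m2, y1, y2), 3),
      "95% bootstrap CI =", bootstrap_ci(m1, m2, y1, y2).round(3))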
Prognostic value of long noncoding RNA MALAT1 in digestive system malignancies
Zhai, Hui; Li, Xiao-Mei; Maimaiti, Ailifeire; Chen, Qing-Jie; Liao, Wu; Lai, Hong-Mei; Liu, Fen; Yang, Yi-Ning
2015-01-01
Background: MALAT1, a newly discovered long noncoding RNA (lncRNA), has been reported to be highly expressed in many types of cancers. This meta-analysis summarizes its potential prognostic value in digestive system malignancies. Methods: A quantitative meta-analysis was performed through a systematic search in PubMed, Cochrane Library, Web of Science and Chinese National Knowledge Infrastructure (CNKI) for eligible papers on the prognostic impact of MALAT1 in digestive system malignancies from inception to Apr. 25, 2015. Pooled hazard ratios (HRs) with 95% confidence interval (95% CI) were calculated to summarize the effect. Results: Five studies were included in the study, with a total of 527 patients. A significant association was observed between MALAT1 abundance and poor overall survival (OS) of patients with digestive system malignancies, with pooled hazard ratio (HR) of 7.68 (95% confidence interval [CI]: 4.32-13.66, P<0.001). Meta sensitivity analysis suggested the reliability of our findings. No publication bias was observed. Conclusions: MALAT1 abundance may serve as a novel predictive factor for poor prognosis in patients with digestive system malignancies. PMID:26770406
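A small Python sketch of inverse-variance pooling of study hazard ratios on the log scale; whether a fixed- or random-effects model was used is not restated here, and the study-level numbers below are illustrative, not the five included trials:

import numpy as np
from scipy import stats

def pooled_hr(hrs, ci_los, ci_his):
    """Fixed-effect (inverse-variance) pooling of hazard ratios reported with 95% CIs,
    working on the log-HR scale."""
    log_hr = np.log(hrs)
    se = (np.log(ci_his) - np.log(ci_los)) / (2 * stats.norm.ppf(0.975))
    w = 1.0 / se ** 2
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = np.exp(pooled + np.array([-1, 1]) * stats.norm.ppf(0.975) * pooled_se)
    return np.exp(pooled), ci

# Illustrative study-level values only.
hrs   = np.array([6.5, 9.2, 4.8, 8.0, 10.5])
ci_lo = np.array([2.1, 3.0, 1.5, 2.6, 3.2])
ci_hi = np.array([20.0, 28.5, 15.1, 24.7, 34.0])
hr, ci = pooled_hr(hrs, ci_lo, ci_hi)
print(f"pooled HR = {hr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")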
NASA Astrophysics Data System (ADS)
Bogachev, Mikhail I.; Kireenkov, Igor S.; Nifontov, Eugene M.; Bunde, Armin
2009-06-01
We study the statistics of return intervals between large heartbeat intervals (above a certain threshold Q) in 24 h records obtained from healthy subjects. We find that both the linear and the nonlinear long-term memory inherent in the heartbeat intervals lead to power-laws in the probability density function PQ(r) of the return intervals. As a consequence, the probability WQ(t; Δt) that at least one large heartbeat interval will occur within the next Δt heartbeat intervals, with an increasing elapsed number of intervals t after the last large heartbeat interval, follows a power-law. Based on these results, we suggest a method of obtaining a priori information about the occurrence of the next large heartbeat interval, and thus to predict it. We show explicitly that the proposed method, which exploits long-term memory, is superior to the conventional precursory pattern recognition technique, which focuses solely on short-term memory. We believe that our results can be straightforwardly extended to obtain more reliable predictions in other physiological signals like blood pressure, as well as in other complex records exhibiting multifractal behaviour, e.g. turbulent flow, precipitation, river flows and network traffic.
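A minimal Python sketch of the empirical quantities involved: return intervals above a threshold Q and the conditional exceedance probability W_Q(t; Δt); the strongly autocorrelated synthetic series is only a stand-in for 24 h heartbeat records:

import numpy as np

def return_intervals(x, q):
    """Numbers of beats between successive exceedances of the threshold q."""
    idx = np.flatnonzero(x > q)
    return np.diff(idx)

def exceedance_probability(x, q, elapsed, delta):
    """Empirical W_Q(t; Δt): probability that, having already waited `elapsed` intervals
    since the last large value, at least one large value occurs within the next `delta`."""
    r = return_intervals(x, q)
    at_risk = r[r > elapsed]
    return np.nan if at_risk.size == 0 else float(np.mean(at_risk <= elapsed + delta))

rng = np.random.default_rng(0)
n = 100_000
x = np.empty(n); x[0] = 0.0
eps = rng.standard_normal(n)
for i in range(1, n):
    x[i] = 0.98 * x[i - 1] + eps[i]          # AR(1) surrogate with strong correlations
rr = 800.0 + 30.0 * x / x.std()              # synthetic heartbeat intervals (ms)
q = np.quantile(rr, 0.95)                    # threshold Q for "large heartbeat intervals"

for t_elapsed in (0, 5, 20, 80):
    print(f"t = {t_elapsed:3d}: W_Q(t; 10) ≈ {exceedance_probability(rr, q, t_elapsed, 10):.3f}")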
NASA Astrophysics Data System (ADS)
Herrera-Oliva, C. S.
2013-05-01
In this work we design and implement a method for precipitation forecasting through the application of an elementary neural network (a perceptron) to the statistical analysis of the precipitation reported in catalogues. The method is limited mainly by the catalogue length (and, to a smaller degree, by its accuracy). The method's performance is measured using grading functions that evaluate a tradeoff between positive and negative aspects of performance. The method is applied to the Guadalupe Valley, Baja California, Mexico, using consecutive intervals of dt = 0.1 year and the data of several climatological stations situated in and around this important wine-producing zone. We evaluated the performance of different ANN models whose input variables are the precipitation depths. The results obtained were satisfactory, except for exceptional rainfall values. Key words: precipitation forecast, artificial neural networks, statistical analysis
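A minimal Python sketch of an elementary perceptron trained on precipitation depths to predict whether the next interval is wet; the feature construction, thresholds and synthetic data are assumptions for illustration, not the catalogue-based design of the paper:

import numpy as np

rng = np.random.default_rng(0)

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Elementary perceptron classifier (threshold unit with bias)."""
    Xb = np.column_stack([np.ones(len(X)), X])        # prepend bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (yi - pred) * xi                # classic perceptron update rule
    return w

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return (Xb @ w > 0).astype(int)

# Synthetic stand-in: precipitation depths over 3 preceding 0.1-year intervals;
# target = whether the following interval is wetter than a fixed threshold.
n, k = 400, 3
X = rng.gamma(2.0, 20.0, size=(n, k))
y = (X.mean(axis=1) + rng.normal(0, 5, n) > 40).astype(int)

w = train_perceptron(X[:300], y[:300])
print("hold-out accuracy:", round(np.mean(predict(w, X[300:]) == y[300:]), 3))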
Solutions of interval type-2 fuzzy polynomials using a new ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani
2015-10-01
A few years ago, a ranking method was introduced for fuzzy polynomial equations. The ranking method is used to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into systems of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering interval type-2 fuzzy polynomial equations, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then examined numerically for triangular and trapezoidal fuzzy numbers. Finally, the approximate solutions produced from the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.
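For context, a short Python sketch of the classical three-parameter ranking quantities (Value, Ambiguity, and one common Fuzziness definition) for an ordinary triangular fuzzy number; the paper's new four-parameter ranking for interval type-2 fuzzy numbers is not specified in the abstract and is not reproduced here:

import numpy as np

def value_ambiguity(a, b, c):
    """Value and Ambiguity of a triangular fuzzy number (a, b, c) with the usual
    weighting function r(alpha) = alpha:
        Value     = (a + 4b + c) / 6
        Ambiguity = (c - a) / 6
    Closed forms follow from integrating the alpha-cuts [a + alpha(b - a), c - alpha(c - b)]."""
    return (a + 4 * b + c) / 6.0, (c - a) / 6.0

def fuzziness(a, b, c):
    """One common fuzziness measure for a triangular number (an assumption;
    several definitions exist in the literature)."""
    return (c - a) / 4.0

for tfn in [(1, 2, 4), (0, 3, 3.5), (2, 2, 2)]:
    v, amb = value_ambiguity(*tfn)
    print(tfn, "-> Value", round(v, 3), "Ambiguity", round(amb, 3),
          "Fuzziness", round(fuzziness(*tfn), 3))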
Statewide analysis of the drainage-area ratio method for 34 streamflow percentile ranges in Texas
Asquith, William H.; Roussel, Meghan C.; Vrabel, Joseph
2006-01-01
The drainage-area ratio method commonly is used to estimate streamflow for sites where no streamflow data are available using data from one or more nearby streamflow-gaging stations. The method is intuitive and straightforward to implement and is in widespread use by analysts and managers of surface-water resources. The method equates the ratio of streamflow at two stream locations to the ratio of the respective drainage areas. In practice, unity often is assumed as the exponent on the drainage-area ratio, and unity also is assumed as a multiplicative bias correction. These two assumptions are evaluated in this investigation through statewide analysis of daily mean streamflow in Texas. The investigation was made by the U.S. Geological Survey in cooperation with the Texas Commission on Environmental Quality. More than 7.8 million values of daily mean streamflow for 712 U.S. Geological Survey streamflow-gaging stations in Texas were analyzed. To account for the influence of streamflow probability on the drainage-area ratio method, 34 percentile ranges were considered. The 34 ranges are the 4 quartiles (0-25, 25-50, 50-75, and 75-100 percent), the 5 intervals of the lower tail of the streamflow distribution (0-1, 1-2, 2-3, 3-4, and 4-5 percent), the 20 quintiles of the 4 quartiles (0-5, 5-10, 10-15, 15-20, 20-25, 25-30, 30-35, 35-40, 40-45, 45-50, 50-55, 55-60, 60-65, 65-70, 70-75, 75-80, 80-85, 85-90, 90-95, and 95-100 percent), and the 5 intervals of the upper tail of the streamflow distribution (95-96, 96-97, 97-98, 98-99 and 99-100 percent). For each of the 253,116 (712X711/2) unique pairings of stations and for each of the 34 percentile ranges, the concurrent daily mean streamflow values available for the two stations provided for station-pair application of the drainage-area ratio method. For each station pair, specific statistical summarization (median, mean, and standard deviation) of both the exponent and bias-correction components of the drainage-area ratio method were computed. Statewide statistics (median, mean, and standard deviation) of the station-pair specific statistics subsequently were computed and are tabulated herein. A separate analysis considered conditioning station pairs to those stations within 100 miles of each other and with the absolute value of the logarithm (base-10) of the ratio of the drainage areas greater than or equal to 0.25. Statewide statistics of the conditional station-pair specific statistics were computed and are tabulated. The conditional analysis is preferable because of the anticipation that small separation distances reflect similar hydrologic conditions and the observation of large variation in exponent estimates for similar-sized drainage areas. The conditional analysis determined that the exponent is about 0.89 for streamflow percentiles from 0 to about 50 percent, is about 0.92 for percentiles from about 50 to about 65 percent, and is about 0.93 for percentiles from about 65 to about 85 percent. The exponent decreases rapidly to about 0.70 for percentiles nearing 100 percent. The computation of the bias-correction factor is sensitive to the range analysis interval (range of streamflow percentile); however, evidence suggests that in practice the drainage-area method can be considered unbiased. Finally, for general application, suggested values of the exponent are tabulated for 54 percentiles of daily mean streamflow in Texas; when these values are used, the bias correction is unity.
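A one-function Python sketch of the drainage-area ratio transfer with a non-unity exponent, using the roughly 0.89 value reported above for lower streamflow percentiles; the site numbers in the example are illustrative only:

def drainage_area_ratio_estimate(q_gaged, area_gaged, area_ungaged, exponent=0.89, bias=1.0):
    """Estimate streamflow at an ungaged site from a nearby gaged site:
        Q_ungaged = bias * Q_gaged * (A_ungaged / A_gaged) ** exponent
    An exponent near 0.89 applies to percentiles below about 50 percent per the
    conditional statewide analysis; unity is the traditional choice, and the bias
    correction can in practice be taken as unity."""
    return bias * q_gaged * (area_ungaged / area_gaged) ** exponent

# Example: a gaged site draining 520 mi^2 with a daily mean flow of 35 ft^3/s,
# transferred to an ungaged site draining 180 mi^2 (illustrative values).
print(round(drainage_area_ratio_estimate(35.0, 520.0, 180.0), 1), "ft^3/s")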
Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario
2014-01-01
Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two clustering examples of ordinal variables, a more challenging variable type for analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, in addition to the modeling approach, must be selected based on the scale of the variables. PMID:24672565
NASA Technical Reports Server (NTRS)
Roth, D. J.; Swickard, S. M.; Stang, D. B.; Deguire, M. R.
1990-01-01
A review and statistical analysis of the ultrasonic velocity method for estimating the porosity fraction in polycrystalline materials is presented. Initially, a semi-empirical model is developed showing the origin of the linear relationship between ultrasonic velocity and porosity fraction. Then, from a compilation of data produced by many researchers, scatter plots of velocity versus percent porosity data are shown for Al2O3, MgO, porcelain-based ceramics, PZT, SiC, Si3N4, steel, tungsten, UO2,(U0.30Pu0.70)C, and YBa2Cu3O(7-x). Linear regression analysis produced predicted slope, intercept, correlation coefficient, level of significance, and confidence interval statistics for the data. Velocity values predicted from regression analysis for fully-dense materials are in good agreement with those calculated from elastic properties.
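A brief Python sketch of the regression step, fitting velocity against porosity fraction and reporting the slope, intercept, correlation coefficient and confidence intervals; the data points are synthetic stand-ins for the compiled literature values:

import numpy as np
from scipy import stats

# Illustrative velocity (km/s) versus porosity-fraction data; the relationship is roughly linear.
porosity = np.array([0.00, 0.03, 0.05, 0.08, 0.12, 0.15, 0.20, 0.25, 0.30])
velocity = np.array([10.8, 10.4, 10.2, 9.8, 9.3, 9.0, 8.3, 7.8, 7.1]) + \
           np.random.default_rng(0).normal(0, 0.1, 9)

res = stats.linregress(porosity, velocity)
n = len(porosity)
tcrit = stats.t.ppf(0.975, df=n - 2)
slope_ci = (res.slope - tcrit * res.stderr, res.slope + tcrit * res.stderr)
v0_ci = (res.intercept - tcrit * res.intercept_stderr, res.intercept + tcrit * res.intercept_stderr)

print(f"slope = {res.slope:.2f} km/s per unit porosity, 95% CI {np.round(slope_ci, 2)}")
print(f"fully-dense velocity (intercept) = {res.intercept:.2f} km/s, 95% CI {np.round(v0_ci, 2)}")
print(f"correlation coefficient r = {res.rvalue:.3f}, p = {res.pvalue:.2g}")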
Cost Analysis of a Novel Enzymatic Debriding Agent for Management of Burn Wounds.
Giudice, Giuseppe; Filoni, Angela; Maggio, Giulio; Bonamonte, Domenico; Vestita, Michelangelo
2017-01-01
Introduction. Given its efficacy and safety, NexoBrid™ (NXB) has become part of our therapeutic options in burns treatment with satisfactory results. However, no cost analysis comparing NXB to the standard of care (SOC) has been carried out as of today. Aim. To assess the cost of treatment with NXB and compare it to the SOC cost. Methods. 20 patients with 14-22% of TBSA with an intermediate-deep thermal burn related injury were retrospectively and consecutively included. 10 of these patients were treated with the SOC, while the other 10 with NXB. The cost analysis was performed in accordance with the weighted average Italian Health Ministry DRGs and with Conferenza Stato/Regioni 2003 and the study by Tan et al. For each cost, 95% confidence intervals have been evaluated. Results. Considering the 10 patients treated with NXB, the overall savings (total net saving) amounted to 53300 euros. The confidence interval analysis confirmed the savings. Discussion. As shown by our preliminary results, significant savings are obtained with the use of NXB. The limit of our study is that it is based on Italian health care costs and assesses a relatively small cohort of patients. Further studies on larger multinational cohorts are warranted.
Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L
2016-02-10
Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.
A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market
Hu, Zhineng; Lu, Wei; Han, Bing
2015-01-01
This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase of the external coefficient or the internal coefficient has a negative influence on the sampling level. The rate of change of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a complete picture of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when they are known only imprecisely and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
Shin, S M; Choi, Y-S; Yamaguchi, T; Maki, K; Cho, B-H; Park, S-B
2015-01-01
Objectives: To evaluate axial cervical vertebral (ACV) shape quantitatively and to build a prediction model for skeletal maturation level using statistical shape analysis for Japanese individuals. Methods: The sample included 24 female and 19 male patients with hand–wrist radiographs and CBCT images. Through generalized Procrustes analysis and principal components (PCs) analysis, the meaningful PCs were extracted from each ACV shape and analysed for the estimation regression model. Results: Each ACV shape had meaningful PCs, except for the second axial cervical vertebra. Based on these models, the smallest prediction intervals (PIs) were from the combination of the shape space PCs, age and gender. Overall, the PIs of the male group were smaller than those of the female group. There was no significant correlation between centroid size as a size factor and skeletal maturation level. Conclusions: Our findings suggest that the ACV maturation method, which was applied by statistical shape analysis, could confirm information about skeletal maturation in Japanese individuals as an available quantifier of skeletal maturation and could be as useful a quantitative method as the skeletal maturation index. PMID:25411713
Lundblad, Runar; Abdelnoor, Michel; Svennevig, Jan Ludvig
2004-09-01
Simple linear resection and endoventricular patch plasty are alternative techniques to repair postinfarction left ventricular aneurysm. The aim of the study was to compare these 2 methods with regard to early mortality and long-term survival. We retrospectively reviewed 159 patients undergoing operations between 1989 and 2003. The epidemiologic design was of an exposed (simple linear repair, n = 74) versus nonexposed (endoventricular patch plasty, n = 85) cohort with 2 endpoints: early mortality and long-term survival. The crude effect of aneurysm repair technique versus endpoint was estimated by odds ratio, rate ratio, or relative risk and their 95% confidence intervals. Stratification analysis by using the Mantel-Haenszel method was done to quantify confounders and pinpoint effect modifiers. Adjustment for multiconfounders was performed by using logistic regression and Cox regression analysis. Survival curves were analyzed with the Breslow test and the log-rank test. Early mortality was 8.2% for all patients, 13.5% after linear repair and 3.5% after endoventricular patch plasty. When adjusted for multiconfounders, the risk of early mortality was significantly higher after simple linear repair than after endoventricular patch plasty (odds ratio, 4.4; 95% confidence interval, 1.1-17.8). Mean follow-up was 5.8 +/- 3.8 years (range, 0-14.0 years). Overall 5-year cumulative survival was 78%, 70.1% after linear repair and 91.4% after endoventricular patch plasty. The risk of total mortality was significantly higher after linear repair than after endoventricular patch plasty when controlled for multiconfounders (relative risk, 4.5; 95% confidence interval, 2.0-9.7). Linear repair dominated early in the series and patch plasty dominated later, giving a possible learning-curve bias in favor of patch plasty that could not be adjusted for in the regression analysis. Postinfarction left ventricular aneurysm can be repaired with satisfactory early and late results. Surgical risk was lower and long-term survival was higher after endoventricular patch plasty than simple linear repair. Differences in outcome should be interpreted with care because of the retrospective study design and the chronology of the 2 repair methods.
Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.
Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu
2016-08-01
The R-R interval (RRI) fluctuation in electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy driving detection, and epileptic seizure prediction, have been proposed. In these HRV-based health monitoring services, precise R wave detection from ECG is required; however, R waves cannot always be detected due to ECG artifacts. Missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing RRI interpolation method by utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) for RRI interpolation, which is a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons. The proposed JIT-based interpolation method could improve the interpolation accuracy in comparison with a static interpolation method.
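LW-PLS itself is not reproduced here; the Python sketch below uses a simpler locally weighted linear regression as a stand-in for the just-in-time idea (a local model built around each query point), applied to a synthetic RRI series with an artificial gap:

import numpy as np

rng = np.random.default_rng(0)

def lw_interpolate(times, rri, t_query, bandwidth=10.0):
    """Locally weighted linear regression around the query time (a simplified
    stand-in for LW-PLS): each missing beat gets its own weighted least-squares fit."""
    w = np.exp(-0.5 * ((times - t_query) / bandwidth) ** 2)      # Gaussian similarity weights
    X = np.column_stack([np.ones_like(times), times])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ rri)
    return beta[0] + beta[1] * t_query

# Synthetic RRI series (s) with a slow trend and a gap caused by motion artefact.
n_beats = 300
rri = 0.8 + 0.1 * np.sin(np.arange(n_beats) / 30.0) + 0.02 * rng.standard_normal(n_beats)
beat_times = np.cumsum(rri)
observed = np.ones(n_beats, dtype=bool)
observed[120:140] = False                                        # 20 consecutive missing beats

estimates = np.array([lw_interpolate(beat_times[observed], rri[observed], t)
                      for t in beat_times[~observed]])
print("mean absolute interpolation error ≈",
      round(1000 * np.mean(np.abs(estimates - rri[~observed])), 1), "ms")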
An empirical comparison of SPM preprocessing parameters to the analysis of fMRI data.
Della-Maggiore, Valeria; Chau, Wilkin; Peres-Neto, Pedro R; McIntosh, Anthony R
2002-09-01
We present the results from two sets of Monte Carlo simulations aimed at evaluating the robustness of some preprocessing parameters of SPM99 for the analysis of functional magnetic resonance imaging (fMRI). Statistical robustness was estimated by implementing parametric and nonparametric simulation approaches based on the images obtained from an event-related fMRI experiment. Simulated datasets were tested for combinations of the following parameters: basis function, global scaling, low-pass filter, high-pass filter and autoregressive modeling of serial autocorrelation. Based on single-subject SPM analysis, we derived the following conclusions that may serve as a guide for initial analysis of fMRI data using SPM99: (1) The canonical hemodynamic response function is a more reliable basis function to model the fMRI time series than HRF with time derivative. (2) Global scaling should be avoided since it may significantly decrease the power depending on the experimental design. (3) The use of a high-pass filter may be beneficial for event-related designs with fixed interstimulus intervals. (4) When dealing with fMRI time series with short interstimulus intervals (<8 s), the use of first-order autoregressive model is recommended over a low-pass filter (HRF) because it reduces the risk of inferential bias while providing a relatively good power. For datasets with interstimulus intervals longer than 8 seconds, temporal smoothing is not recommended since it decreases power. While the generalizability of our results may be limited, the methods we employed can be easily implemented by other scientists to determine the best parameter combination to analyze their data.
NASA Astrophysics Data System (ADS)
Chen, Hsinchun; Roco, Mihail C.; Son, Jaebong; Jiang, Shan; Larson, Catherine A.; Gao, Qiang
2013-09-01
In a relatively short interval for an emerging technology, nanotechnology has made a significant economic impact in numerous sectors including semiconductor manufacturing, catalysts, medicine, agriculture, and energy production. A part of the United States (US) government investment in basic research has been realized in the last two decades through the National Science Foundation (NSF), beginning with the nanoparticle research initiative in 1991 and continuing with support from the National Nanotechnology Initiative after fiscal year 2001. This paper has two main goals: (a) present a longitudinal analysis of the global nanotechnology development as reflected in the United States Patent and Trade Office (USPTO) patents and Web of Science (WoS) publications in nanoscale science and engineering (NSE) for the interval 1991-2012; and (b) identify the effect of basic research funded by NSF on both indicators. The interval has been separated into three parts for comparison purposes: 1991-2000, 2001-2010, and 2011-2012. The global trends of patents and scientific publications are presented. Bibliometric analysis, topic analysis, and citation network analysis methods are used to rank countries, institutions, technology subfields, and inventors contributing to nanotechnology development. We then, examined how these entities were affected by NSF funding and how they evolved over the past two decades. Results show that dedicated NSF funding used to support nanotechnology R&D was followed by an increased number of relevant patents and scientific publications, a greater diversity of technology topics, and a significant increase of citations. The NSF played important roles in the inventor community and served as a major contributor to numerous nanotechnology subfields.
Buffered coscheduling for parallel programming and enhanced fault tolerance
Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM
2006-01-31
A computer-implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.
Browne, Erica N; Rathinam, Sivakumar R; Kanakath, Anuradha; Thundikandy, Radhika; Babu, Manohar; Lietman, Thomas M; Acharya, Nisha R
2017-01-01
Purpose To conduct a Bayesian analysis of a randomized clinical trial (RCT) for non-infectious uveitis using expert opinion as a subjective prior belief. Methods A RCT was conducted to determine which antimetabolite, methotrexate or mycophenolate mofetil, is more effective as an initial corticosteroid-sparing agent for the treatment of intermediate, posterior, and pan- uveitis. Before the release of trial results, expert opinion on the relative effectiveness of these two medications was collected via online survey. Members of the American Uveitis Society executive committee were invited to provide an estimate for the relative decrease in efficacy with a 95% credible interval (CrI). A prior probability distribution was created from experts’ estimates. A Bayesian analysis was performed using the constructed expert prior probability distribution and the trial’s primary outcome. Results 11 of 12 invited uveitis specialists provided estimates. Eight of 11 experts (73%) believed mycophenolate mofetil is more effective. The group prior belief was that the odds of treatment success for patients taking mycophenolate mofetil were 1.4-fold the odds of those taking methotrexate (95% CrI 0.03 – 45.0). The odds of treatment success with mycophenolate mofetil compared to methotrexate was 0.4 from the RCT (95% confidence interval 0.1–1.2) and 0.7 (95% CrI 0.2–1.7) from the Bayesian analysis. Conclusions A Bayesian analysis combining expert belief with the trial’s result did not indicate preference for one drug. However, the wide credible interval leaves open the possibility of a substantial treatment effect. This suggests clinical equipoise necessary to allow a larger, more definitive RCT. PMID:27982726
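A hedged Python sketch of the general idea, combining a normal prior and a normal likelihood on the log odds-ratio scale; this crude conjugate approximation will not reproduce the paper's reported posterior exactly, since the actual analysis modelled the trial outcome directly:

import numpy as np
from scipy import stats

def normal_from_interval(point, lo, hi):
    """Normal approximation on the log-odds-ratio scale from a point estimate
    and a 95% interval."""
    mu = np.log(point)
    sd = (np.log(hi) - np.log(lo)) / (2 * stats.norm.ppf(0.975))
    return mu, sd

def posterior(prior, likelihood):
    """Conjugate normal-normal update (precision-weighted average)."""
    (m0, s0), (m1, s1) = prior, likelihood
    prec = 1 / s0**2 + 1 / s1**2
    mu = (m0 / s0**2 + m1 / s1**2) / prec
    return mu, np.sqrt(1 / prec)

# Expert prior: OR 1.4 (95% CrI 0.03-45); trial result: OR 0.4 (95% CI 0.1-1.2).
prior = normal_from_interval(1.4, 0.03, 45.0)
trial = normal_from_interval(0.4, 0.1, 1.2)
mu, sd = posterior(prior, trial)
lo, hi = np.exp(mu + np.array([-1, 1]) * stats.norm.ppf(0.975) * sd)
print(f"posterior OR ≈ {np.exp(mu):.2f} (95% CrI {lo:.2f}-{hi:.2f})")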
Choi, Hyang-Ki; Jung, Jin Ah; Fujita, Tomoe; Amano, Hideki; Ghim, Jong-Lyul; Lee, Dong-Hwan; Tabata, Kenichi; Song, Il-Dae; Maeda, Mika; Kumagai, Yuji; Mendzelevski, Boaz; Shin, Jae-Gook
2016-12-01
The goal of this study was to evaluate the moxifloxacin-induced QT interval prolongation in healthy male and female Korean and Japanese volunteers to investigate interethnic differences. This multicenter, randomized, double-blind, placebo-controlled, 2-way crossover study was conducted in healthy male and female Korean and Japanese volunteers. In each period, a single dose of moxifloxacin or placebo 400 mg was administered orally under fasting conditions. Triplicate 12-lead ECGs were recorded at defined time points before, up to 24 hours after dosing, and at corresponding time points during baseline. Serial blood sampling was conducted for pharmacokinetic analysis of moxifloxacin. The pharmacokinetic-pharmacodynamic data between the 2 ethnic groups were compared by using a typical analysis based on the intersection-union test and a nonlinear mixed effects method. A total of 39 healthy subjects (Korean, male: 10, female: 10; Japanese, male: 10, female: 9) were included in the analysis. The concentration-effect analysis revealed that there was no change in slope (and confirmed that the difference was caused by a change in the pharmacokinetic model of moxifloxacin). A 2-compartment model with first-order absorption provided the best description of moxifloxacin's pharmacokinetic parameters. Weight and sex were selected as significant covariates for central volume of distribution and intercompartmental clearance, respectively. An Emax model (E(C) = Emax·C / (EC50 + C)) described the QT interval data of this study well. However, ethnicity was not found to be a significant factor in a pharmacokinetic-pharmacodynamic link model. The drug-induced QTc prolongations evaluated using moxifloxacin as the probe did not seem to be significantly different between these Korean and Japanese subjects. ClinicalTrials.gov identifier: NCT01876316. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.
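A short Python sketch of fitting the stated Emax model to concentration versus QTc-change data with scipy's curve_fit; the data are synthetic and the additive-error assumption is a simplification, not the study's nonlinear mixed-effects analysis:

import numpy as np
from scipy.optimize import curve_fit

def emax(conc, e_max, ec50):
    """E(C) = Emax * C / (EC50 + C)."""
    return e_max * conc / (ec50 + conc)

# Illustrative concentration (µg/mL) versus placebo-corrected QTc change (ms) data.
rng = np.random.default_rng(0)
conc = np.linspace(0.1, 4.0, 40)
dqtc = emax(conc, 14.0, 1.5) + rng.normal(0, 1.5, conc.size)

params, cov = curve_fit(emax, conc, dqtc, p0=[10.0, 1.0])
se = np.sqrt(np.diag(cov))
print(f"Emax = {params[0]:.1f} ± {se[0]:.1f} ms, EC50 = {params[1]:.2f} ± {se[1]:.2f} µg/mL")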
NASA Astrophysics Data System (ADS)
Paiva, F. M.; Batista, J. C.; Rêgo, F. S. C.; Lima, J. A.; Freire, P. T. C.; Melo, F. E. A.; Mendes Filho, J.; de Menezes, A. S.; Nogueira, C. E. S.
2017-01-01
Single crystals of DL-valine and DL-lysine hydrochloride were grown by the slow evaporation method and the crystallographic structures were confirmed by X-ray diffraction experiments and the Rietveld method. These two crystals were studied by Raman spectroscopy in the 25-3600 cm-1 spectral range and by infrared spectroscopy over the interval 375-4000 cm-1 at room temperature. Experimental and theoretical vibrational spectra were compared and a complete analysis of the modes was done in terms of the Potential Energy Distribution (PED).
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, Timothy K.; Chrostowski, Jon D.
1991-01-01
Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results was used to create statistical databases for large truss-type space structures and both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were within ± one-sigma intervals of predicted accuracy for the most part, demonstrating the validity of the methodology and computer code.
Arabski, Michał; Wasik, Sławomir; Piskulak, Patrycja; Góźdź, Natalia; Slezak, Andrzej; Kaca, Wiesław
2011-01-01
The aim of this study was to analyze the release of antibiotics (ampicillin, streptomycin, ciprofloxacin, or colistin) from agarose gel by spectrophotometric and laser interferometric methods. The interferometric system consisted of a Mach-Zehnder interferometer with a He-Ne laser, a TV-CCD camera, a computerised data acquisition system, and a gel system. The gel system under study consists of two cuvettes. The lower cuvette was filled with an aqueous 1% agarose solution containing the antibiotics at initial concentrations of 0.12-2 mg/ml for the spectrophotometric analysis or 0.05-0.5 mg/ml for the laser interferometric measurements, while the upper cuvette contained pure water. The diffusion was analyzed from 120 to 2400 s at a time interval of Δt = 120 s by both methods. We observed that 0.25-1 mg/ml and 0.05 mg/ml are the minimal initial concentrations detectable by the spectrophotometric and laser interferometric methods, respectively. Additionally, we observed differences in the kinetics of antibiotic diffusion from the gel as measured by the two methods. In conclusion, the laser interferometric method is a useful tool for studying antibiotic release from agarose gel, especially for substances that are not fully soluble in water, such as colistin.
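To make the kinetic comparison concrete, here is a small sketch that fits a simple first-order release model to a synthetic concentration-time series sampled every 120 s, as one might do with readings from either method; the model form, rate constant, and data values are assumptions for illustration only, not the study's analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_release(t, c_inf, k):
    """First-order release model: C(t) = C_inf * (1 - exp(-k * t))."""
    return c_inf * (1.0 - np.exp(-k * t))

# Illustrative time series: concentration in the receiving cuvette sampled
# every 120 s from 120 to 2400 s (synthetic values, not measured data).
t = np.arange(120, 2401, 120)
c_obs = 0.45 * (1.0 - np.exp(-0.0012 * t)) \
        + np.random.default_rng(1).normal(0, 0.01, t.size)

(c_inf_hat, k_hat), _ = curve_fit(first_order_release, t, c_obs, p0=[0.5, 0.001])
print(f"plateau ~ {c_inf_hat:.2f} mg/ml, rate constant ~ {k_hat:.2e} 1/s")
```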
Method and apparatus for assessing cardiovascular risk
NASA Technical Reports Server (NTRS)
Albrecht, Paul (Inventor); Bigger, J. Thomas (Inventor); Cohen, Richard J. (Inventor)
1998-01-01
The method for assessing the risk of an adverse clinical event includes detecting a physiologic signal in the subject and determining from that signal a sequence of intervals corresponding to the time intervals between heart beats. The long-time structure of fluctuations in the intervals over a time period of more than fifteen minutes is analyzed to assess the risk of an adverse clinical event. In a preferred embodiment, the physiologic signal is an electrocardiogram and the time period is at least fifteen minutes. A preferred method for analyzing the long-time structure of variability in the intervals includes computing the power spectrum and fitting it to a power-law dependence on frequency over a selected frequency range, such as 10^-4 to 10^-2 Hz. Characteristics of the long-time structure of fluctuations in the intervals are used to assess the risk of an adverse clinical event.
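A minimal sketch of the power-law fit described in this abstract, assuming the interbeat (RR) series is resampled onto an even time grid and a periodogram is fit by linear regression in log-log coordinates over 10^-4 to 10^-2 Hz; the resampling rate, synthetic data, and function names are illustrative assumptions, not the patented implementation.

```python
import numpy as np
from scipy.signal import periodogram

def power_law_exponent(rr_s, fs_hz=0.5, f_lo=1e-4, f_hi=1e-2):
    """Fit log10(PSD) vs log10(f) of an evenly resampled RR series over [f_lo, f_hi]."""
    t_beats = np.cumsum(rr_s)                    # beat times from RR intervals (s)
    t_grid = np.arange(t_beats[0], t_beats[-1], 1.0 / fs_hz)
    rr_even = np.interp(t_grid, t_beats, rr_s)   # evenly sampled RR series
    f, pxx = periodogram(rr_even - rr_even.mean(), fs=fs_hz)
    band = (f >= f_lo) & (f <= f_hi) & (pxx > 0)
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
    return slope                                 # power-law exponent of the spectrum

# Synthetic example: ~22 h of RR intervals around 0.8 s with random variation.
rng = np.random.default_rng(2)
rr = 0.8 + 0.05 * rng.standard_normal(100_000)
print(power_law_exponent(rr))
```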
Interval Neutrosophic Sets and Their Application in Multicriteria Decision Making Problems
Zhang, Hong-yu; Wang, Jian-qiang; Chen, Xiao-hong
2014-01-01
As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent the uncertain, imprecise, incomplete, and inconsistent information existing in the real world. Interval neutrosophic sets (INSs) were proposed to express membership by a set of numbers in the real unit interval rather than by a single number. However, few reliable operations have been defined for INSs, and INS aggregation operators and decision-making methods are likewise scarce. For this purpose, operations for INSs are defined and a comparison approach is put forward based on related research on interval-valued intuitionistic fuzzy sets (IVIFSs). On the basis of these operations and the comparison approach, two interval neutrosophic number aggregation operators are developed. A method for multicriteria decision-making problems is then explored by applying the aggregation operators. In addition, an example is provided to illustrate the application of the proposed method. PMID:24695916
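A small sketch of what interval neutrosophic numbers and a weighted averaging aggregation might look like in code; the operator shown follows a common form for neutrosophic weighted averaging applied bound-wise and may differ in detail from the operators defined in the paper, so treat it as an assumption.

```python
from dataclasses import dataclass

@dataclass
class INN:
    """Interval neutrosophic number: truth, indeterminacy, falsity as (lo, hi) pairs in [0, 1]."""
    t: tuple
    i: tuple
    f: tuple

def inwa(numbers, weights):
    """Interval neutrosophic weighted averaging (one common form, applied bound-wise):
    truth combines as 1 - prod(1 - t)^w; indeterminacy and falsity as prod(x)^w."""
    def combine_truth(bound):
        p = 1.0
        for n, w in zip(numbers, weights):
            p *= (1.0 - n.t[bound]) ** w
        return 1.0 - p
    def combine(attr, bound):
        p = 1.0
        for n, w in zip(numbers, weights):
            p *= getattr(n, attr)[bound] ** w
        return p
    return INN(t=(combine_truth(0), combine_truth(1)),
               i=(combine("i", 0), combine("i", 1)),
               f=(combine("f", 0), combine("f", 1)))

a = INN(t=(0.4, 0.5), i=(0.2, 0.3), f=(0.3, 0.4))
b = INN(t=(0.6, 0.7), i=(0.1, 0.2), f=(0.2, 0.3))
print(inwa([a, b], [0.4, 0.6]))   # aggregated interval neutrosophic number
```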
Interval data clustering using self-organizing maps based on adaptive Mahalanobis distances.
Hajjar, Chantal; Hamdan, Hani
2013-10-01
The self-organizing map is a kind of artificial neural network used to map high-dimensional data into a low-dimensional space. This paper presents a self-organizing map for interval-valued data based on adaptive Mahalanobis distances in order to cluster interval data with topology preservation. Two methods based on the batch training algorithm for self-organizing maps are proposed. The first method uses a common Mahalanobis distance for all clusters. In the second method, the algorithm starts with a common Mahalanobis distance for all clusters and then switches to a different distance per cluster. This process allows a clustering better adapted to the given data set. The performance of the proposed methods is compared and discussed using artificial and real interval data sets. Copyright © 2013 Elsevier Ltd. All rights reserved.
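A rough sketch of one batch update of a self-organizing map for interval-valued data, encoding each interval sample as a vector of lower and upper bounds and using a single shared Mahalanobis distance, roughly in the spirit of the first method described above; the encoding, neighborhood function, grid size, and data are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def mahalanobis_sq(x, w, inv_cov):
    """Squared Mahalanobis distance between a sample and a prototype (bound vectors)."""
    d = x - w
    return d @ inv_cov @ d

def batch_som_step(data, protos, inv_cov, grid, sigma):
    """One batch update: find each sample's best-matching unit, then move every
    prototype to the neighborhood-weighted mean of the data."""
    bmu = np.array([np.argmin([mahalanobis_sq(x, w, inv_cov) for w in protos])
                    for x in data])
    # Gaussian neighborhood on the map grid
    h = np.exp(-np.linalg.norm(grid[:, None] - grid[bmu][None], axis=-1) ** 2
               / (2 * sigma ** 2))
    return (h @ data) / h.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
lo = rng.uniform(0.0, 1.0, (50, 2))
hi = lo + rng.uniform(0.0, 0.3, (50, 2))
data = np.hstack([lo, hi])                 # 50 samples, each a 2-D interval as [lo1, lo2, hi1, hi2]
protos = data[rng.choice(50, 6, replace=False)].copy()   # 6 map units on a 2x3 grid
grid = np.array([[i, j] for i in range(2) for j in range(3)], dtype=float)
inv_cov = np.linalg.inv(np.cov(data.T) + 1e-6 * np.eye(4))  # shared Mahalanobis metric
protos = batch_som_step(data, protos, inv_cov, grid, sigma=1.0)
```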
Tang, Zhongwen
2015-01-01
An analytical way to compute the predictive probability of success (PPOS) together with a credible interval at interim analysis (IA) is developed for large clinical trials with time-to-event endpoints. The method takes into account the fixed data up to the IA, the amount of uncertainty in future data, and uncertainty about the parameters. Predictive power is a special type of PPOS. The result is confirmed by simulation. An optimal design is proposed by finding the optimal combination of analysis time and futility cutoff based on PPOS criteria.
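The abstract's analytical PPOS formula is not reproduced here; instead, the sketch below estimates a predictive probability of success by simulation for a time-to-event trial, assuming the usual normal approximation for the log hazard ratio with variance 4/d (d = number of events) and a vague prior. The numbers, pooling rule, and function name are illustrative assumptions, not the author's method.

```python
import numpy as np

def ppos_time_to_event(loghr_ia, d_ia, d_final, n_sim=100_000, seed=4):
    """Simulation-based predictive probability of success at interim analysis.
    Posterior for the true log HR at interim is taken as N(loghr_ia, 4/d_ia)."""
    rng = np.random.default_rng(seed)
    z_alpha = 1.959964                                            # one-sided 2.5% critical value
    theta = rng.normal(loghr_ia, np.sqrt(4.0 / d_ia), n_sim)      # draw the true log HR
    d_new = d_final - d_ia
    loghr_new = rng.normal(theta, np.sqrt(4.0 / d_new))           # estimate from future events
    # Pool interim and future estimates by event (inverse-variance) weighting.
    loghr_final = (d_ia * loghr_ia + d_new * loghr_new) / d_final
    z_final = loghr_final / np.sqrt(4.0 / d_final)
    return np.mean(z_final < -z_alpha)                            # success: HR significantly < 1

# Illustrative interim: estimated log HR of -0.25 after 150 of 300 planned events.
print(ppos_time_to_event(loghr_ia=-0.25, d_ia=150, d_final=300))
```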