Fiori, Simone
2007-01-01
Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are “holes” in the data) or when the data sets have been acquired independently. Statistical modeling is also useful when the amount of available data is sufficient to reveal the relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641
Bivariate statistical modeling of color and range in natural scenes
NASA Astrophysics Data System (ADS)
Su, Che-Chun; Cormack, Lawrence K.; Bovik, Alan C.
2014-02-01
The statistical properties embedded in visual stimuli from the surrounding environment guide and affect the evolutionary processes of the human visual system. There are strong statistical relationships between co-located luminance/chrominance and disparity bandpass coefficients in natural scenes. However, these statistical relationships have so far been developed only into point-wise statistical models, even though spatial dependencies exist between adjacent pixels in both 2D color images and range maps. Here we study the bivariate statistics of the joint and conditional distributions of spatially adjacent bandpass responses on both luminance/chrominance and range data of naturalistic scenes. We deploy bivariate generalized Gaussian distributions to model the underlying statistics. The analysis and modeling results show that there exist important and useful statistical properties of both joint and conditional distributions, which can be reliably described by the corresponding bivariate generalized Gaussian models. Furthermore, by utilizing these robust bivariate models, we are able to incorporate measurements of bivariate statistics between spatially adjacent luminance/chrominance and range information into various 3D image/video and computer vision applications, e.g., quality assessment, 2D-to-3D conversion, etc.
We compared the use of ternary and bivariate diagrams to distinguish the effects of atmospheric precipitation, rock weathering, and evaporation on inland surface and subsurface water chemistry. The three processes could not be statistically differentiated using bivariate models e...
NASA Astrophysics Data System (ADS)
Baran, Sándor; Möller, Annette
2016-06-01
Forecast ensembles are typically employed to account for prediction uncertainties in numerical weather prediction models. However, ensembles often exhibit biases and dispersion errors, so they require statistical post-processing to improve their predictive performance. Two popular univariate post-processing models are Bayesian model averaging (BMA) and ensemble model output statistics (EMOS). In the last few years, interest has grown in multivariate post-processing models that incorporate dependencies between weather quantities, such as a bivariate distribution for wind vectors or a more general setting that allows any types of weather variables to be combined. In line with a recently proposed approach that models temperature and wind speed jointly by a bivariate BMA model, this paper introduces an EMOS model for these weather quantities based on a bivariate truncated normal distribution. The bivariate EMOS model is applied to temperature and wind speed forecasts of the 8-member University of Washington mesoscale ensemble and the 11-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service, and its predictive performance is compared to that of the bivariate BMA model and of a multivariate Gaussian copula approach that post-processes the margins with univariate EMOS. While the predictive skills of the compared methods are similar, the bivariate EMOS model requires considerably lower computation times than the bivariate BMA method.
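As a rough illustration of the EMOS idea in the abstract above, the sketch below fits only the affine mean correction of a univariate EMOS model by least squares; a full bivariate truncated-normal EMOS would additionally model the variances and the temperature-wind correlation and would usually be fitted by CRPS minimization. All data and coefficients here are invented for illustration.

```python
# Minimal sketch: EMOS-style affine correction of the ensemble mean,
# fitted by ordinary least squares on past (ensemble mean, observation) pairs.
def fit_emos_mean(ens_means, obs):
    n = len(obs)
    mx, my = sum(ens_means) / n, sum(obs) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(ens_means, obs))
         / sum((x - mx) ** 2 for x in ens_means))
    a = my - b * mx
    return a, b  # predictive mean = a + b * ensemble_mean

# Illustrative training data: raw ensemble means biased ~1 degree warm.
a, b = fit_emos_mean([11.0, 13.0, 9.0, 15.0], [10.0, 12.1, 7.9, 14.0])
corrected = a + b * 12.0  # post-processed mean for a new ensemble mean of 12.0
```

The bivariate model in the paper couples two such corrected margins (temperature, wind speed) through a correlation parameter, with the wind-speed margin truncated at zero.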
Application of bivariate statistics to full wine bottle diamagnetic screening data.
Harley, S J; Lim, V; Augustine, M P
2012-01-30
A bivariate correlated Student distribution is applied to full wine bottle diamagnetic screening measurements. Previous work involving a limited number of rare wines indicated that like wines cluster in a plot of the first two principal component scores derived from a covariance matrix of the diamagnetic screening measurements. This study extends the approach to a much larger, statistically meaningful sixty-bottle wine library where bivariate statistics are used to interpret the measured data. The full bottle diamagnetic screening of thirty-six identically labeled, sealed bottles of wine obtained from four different sources, combined with principal component analysis data reduction followed by treatment with a bivariate distribution, permits the effect of wine transport and storage to be observed. The usefulness and likely future success of the method for the identification of counterfeit wines are discussed. PMID:22284521
Applied Statistics: From Bivariate through Multivariate Techniques [with CD-ROM
ERIC Educational Resources Information Center
Warner, Rebecca M.
2007-01-01
This book provides a clear introduction to widely used topics in bivariate and multivariate statistics, including multiple regression, discriminant analysis, MANOVA, factor analysis, and binary logistic regression. The approach is applied and does not require formal mathematics; equations are accompanied by verbal explanations. Students are asked…
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.
2014-10-01
Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for the BSA technique is proposed. Three popular BSA techniques, namely the frequency ratio, weights-of-evidence, and evidential belief function models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provided with a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia was selected and all three models were tested by using the proposed program. The area under curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
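The frequency ratio, the simplest of the three BSA techniques named above, can be sketched as follows; the class names and pixel counts are invented for illustration, not taken from the Malaysian test area.

```python
# Frequency ratio per class:
#   FR = (% of hazard-event pixels in class) / (% of all pixels in class).
# FR > 1 marks classes over-represented among observed events.
def frequency_ratio(event_counts, total_counts):
    n_event = sum(event_counts.values())
    n_total = sum(total_counts.values())
    fr = {}
    for cls, n in total_counts.items():
        pct_class = n / n_total
        pct_event = event_counts.get(cls, 0) / n_event
        fr[cls] = pct_event / pct_class
    return fr

# Illustrative slope classes: landslide pixels vs. all pixels per class.
fr = frequency_ratio({"gentle": 10, "moderate": 30, "steep": 60},
                     {"gentle": 500, "moderate": 300, "steep": 200})
```

Summing the FR values of the classes a pixel falls into, across all conditioning parameters, yields the susceptibility score that the BSM tool rasterizes.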
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.
2015-03-01
Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, the bivariate statistical modeler (BSM), for the BSA technique is proposed. Three popular BSA techniques, namely the frequency ratio, weights-of-evidence (WoE), and evidential belief function (EBF) models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provided with a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia was selected and all three models were tested by using the proposed program. The area under curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
Packard, Gary C
2013-09-01
The ongoing debate about methods for fitting the two-parameter allometric equation y = ax^b to bivariate data seemed to be resolved recently when three groups of investigators independently reported that statistical models fitted by the traditional allometric method (i.e., by back-transforming a linear model fitted to log-log transformations) typically are superior to models fitted by standard nonlinear regression. However, the narrow focus of the statistical analyses in these investigations compromised the most important of the ensuing conclusions. All the investigations focused on two-parameter power functions and excluded from consideration other simple functions that might better describe pattern in the data; and all relied on Akaike's Information Criterion instead of graphical validation to identify the better statistical model. My re-analysis of data from one of the studies (BMR vs. body mass in mustelid carnivores) revealed (1) that the best descriptor for pattern in the dataset is a straight line and not a two-parameter power function; (2) that a model with additive, normal, heteroscedastic error is superior to one with multiplicative, lognormal, heteroscedastic error; and (3) that Akaike's Information Criterion is not a generally reliable metric for discriminating between models fitted to different distributions. These findings have apparent implications for interpreting the outcomes of all three of the aforementioned studies. Future investigations of allometric variation should adopt a more holistic approach to analysis and not be wedded to the traditional allometric method. PMID:23688506
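The two fitting approaches contrasted above can be sketched on synthetic data: the traditional allometric method fits a line to log-log transformed values and back-transforms to y = a * x^b, while the alternative fits directly in the original scale (here a plain straight line stands in for the competing model). The numbers below are invented and carry no biological meaning.

```python
# Sketch: traditional allometric fit (OLS on log-log data) vs. a straight
# line fitted in the original scale, on synthetic data roughly following y = 2x.
import math

x = [1.0, 2.0, 4.0, 8.0, 16.0]
y = [2.1, 3.9, 8.2, 15.8, 32.5]

def ols(xs, ys):
    """Return (intercept, slope) of an ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((u - mx) * (v - my) for u, v in zip(xs, ys))
             / sum((u - mx) ** 2 for u in xs))
    return my - slope * mx, slope

# Traditional method: fit log y = log a + b log x, then back-transform.
la, b = ols([math.log(v) for v in x], [math.log(v) for v in y])
a = math.exp(la)          # b near 1 here, so the power law degenerates to a line

# Alternative: straight line y = c + d*x fitted in the original scale.
c, d = ols(x, y)
```

Comparing the two fits graphically, as the author advocates, is what reveals whether the power function or the straight line better describes the pattern.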
Slack, James Richard; Matalas, Nicholas C.; Wallis, James R.
1976-01-01
The distribution functions for hydrologic statistics that may be used to assess the significance of differences between sample means, standard deviations, coefficients of skewness, and coefficients of variations are obtained by Monte Carlo experiments. The distributions are expressed as functions of sample size, cross correlation, and skewness. In general, the distributions are more sensitive to cross correlation than to skewness. As sample size increases, however, the distributions tend to become more sensitive to skewness. (Woodard-USGS)
Univariate description and bivariate statistical inference: the first step delving into data
2016-01-01
In observational studies, the first step is usually to explore the data distribution and the baseline differences between groups. Data description includes the central tendency (e.g., mean, median, and mode) and dispersion (e.g., standard deviation, range, interquartile range). There is a variety of bivariate statistical inference methods, such as Student's t-test, the Mann-Whitney U test, and the Chi-square test, for normal, skewed, and categorical data, respectively. The article shows how to perform these analyses with R code. Furthermore, I believe that the automation of the whole workflow is of paramount importance in that (I) it allows others to repeat your results; (II) you can easily find out how you performed the analysis during revision; (III) it avoids manual data input and is less error-prone; and (IV) when you correct your original dataset, the final result can be automatically corrected by executing the code. Therefore, the process of making a publication-quality table incorporating all the abovementioned statistics and P values is provided, allowing readers to customize the code to their own needs. PMID:27047950
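The article demonstrates this workflow in R; as a minimal Python analogue, the sketch below computes a Welch two-sample t statistic for approximately normal data and a Pearson chi-square statistic for a 2x2 table from first principles, using only the standard library. The data are invented.

```python
# Two of the bivariate inference statistics named above, stdlib only.
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch two-sample t statistic (unequal variances)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

t = welch_t([5.1, 4.9, 5.6, 5.0, 5.3], [4.2, 4.5, 4.1, 4.4, 4.6])
x2 = chi2_2x2(30, 10, 15, 25)
```

In practice one would obtain p-values from the t and chi-square distributions (e.g., via scipy.stats); the statistics themselves are shown here to make the formulas concrete.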
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Daniel, Larry G.; Roberts, J. Kyle
The purpose of this paper is to illustrate how displaying disattenuated correlation coefficients along with their unadjusted counterparts will allow the reader to assess the impact of unreliability on each bivariate relationship. The paper also demonstrates how a proposed new "what if reliability" analysis can complement the conventional null…
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Roberts, J. Kyle; Daniel, Larry G.
2005-01-01
In this article, the authors (a) illustrate how displaying disattenuated correlation coefficients alongside their unadjusted counterparts will allow researchers to assess the impact of unreliability on bivariate relationships and (b) demonstrate how a proposed new "what if reliability" analysis can complement null hypothesis significance tests of…
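The correction these two articles build on is the classical Spearman disattenuation formula, r_corrected = r_xy / sqrt(r_xx * r_yy), where r_xx and r_yy are the score reliabilities; the "what if reliability" idea simply re-evaluates the correction over a grid of hypothetical reliabilities. The observed correlation and reliability grid below are illustrative.

```python
# Spearman correction for attenuation, plus a "what if reliability" grid.
import math

def disattenuate(r_xy, r_xx, r_yy):
    """Estimate the true-score correlation given score reliabilities."""
    return r_xy / math.sqrt(r_xx * r_yy)

observed_r = 0.40
# What-if analysis: implied true correlation if both measures had reliability rel.
what_if = {rel: round(disattenuate(observed_r, rel, rel), 3)
           for rel in (0.6, 0.7, 0.8, 0.9)}
```

Displaying the disattenuated values alongside the unadjusted 0.40 shows at a glance how much unreliability could be masking the bivariate relationship.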
Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng
2013-01-01
New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence, and main characteristics have been revealed to be more complicated and diverse than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance of, and the current shortage of, multivariate analysis of natural disasters and presents a method to estimate the joint probability of return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia, a normal and recurring climatic phenomenon, were used as a case study to test this new methodology. Based on the 79 investigated events and a bivariate definition of dust storms, the joint probability distribution of severe dust storms was established using the observed maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value for severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. PMID:22616629
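A joint return period of the kind computed above can be sketched with a Gumbel-Hougaard copula; the copula family, the dependence parameter theta, and the marginal probabilities below are illustrative choices, not the values fitted to the Inner Mongolia data.

```python
# Joint "AND" return period of two dependent hazards via a Gumbel-Hougaard
# copula C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1.
import math

def gumbel_copula(u, v, theta):
    return math.exp(-((-math.log(u)) ** theta
                      + (-math.log(v)) ** theta) ** (1.0 / theta))

def joint_return_period_and(u, v, theta, mu=1.0):
    """Return period of {U > u AND V > v}; mu = mean interarrival time (years)."""
    p_exceed_both = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_exceed_both

# E.g. wind speed and duration each at their marginal 10-year level (u = v = 0.9):
t_and = joint_return_period_and(0.9, 0.9, theta=2.0)
```

The "AND" return period exceeds the 10-year marginal value because both thresholds must be surpassed simultaneously, which is exactly why univariate return periods understate the rarity of compound events.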
NASA Astrophysics Data System (ADS)
Tipton, J.; Hooten, M.; Pederson, N.; Tingley, M.; Bishop, D. A.
2014-12-01
The ability to reconstruct historical climate is important to understanding how climate has changed in the past. The instrumental record of temperature and precipitation spans only the most recent centuries. Thus, reconstructions of climate features are typically based on proxy archives, which integrate climate information through biological, geological, physical, and chemical processes. Tree ring widths provide one of the most spatially and temporally rich sources of high quality climate proxy data. However, the statistical reconstruction of paleoclimate from tree ring widths is quite challenging because the climate signal is inherently multi-dimensional while tree ring widths are a one-dimensional data source. We propose a Bayesian hierarchical model using non-linear, scientifically motivated tree ring growth models to reconstruct multivariate climate (i.e., temperature and precipitation) in the Hudson Valley region of New York. Our proposed model extends and enhances former methods in a number of ways. We allow for species-specific responses to climate, which further constrains the many-to-one relationship between tree rings and climate. The resulting model allows for prediction of reasonable climate scenarios given tree ring widths. We explore a natural model selection framework that weighs the influence of multiple candidate growth models in terms of their predictive ability. To enable prediction backcasts, the climate variables are modeled with an underlying continuous time latent process. The continuous time process allows for added flexibility in the climate response through time at different temporal scales and enables investigation of differences in climate between the reconstruction period and the instrumental period. Validation of the model's predictive abilities is achieved through a pseudo-proxy simulation experiment where the quality of climate predictions is measured by out of sample performance based on a proper local scoring
NASA Technical Reports Server (NTRS)
Falls, L. W.; Crutcher, H. L.
1976-01-01
Transformation of statistics from a dimensional set to another dimensional set involves linear functions of the original set of statistics. Similarly, linear functions will transform statistics within a dimensional set such that the new statistics are relevant to a new set of coordinate axes. A restricted case of the latter is the rotation of axes in a coordinate system involving any two correlated random variables. A special case is the transformation for horizontal wind distributions. Wind statistics are usually provided in terms of wind speed and direction (measured clockwise from north) or in east-west and north-south components. A direct application of this technique allows the determination of appropriate wind statistics parallel and normal to any preselected flight path of a space vehicle. Among the constraints for launching space vehicles are critical values selected from the distribution of the expected winds parallel to and normal to the flight path. These procedures are applied to space vehicle launches at Cape Kennedy, Florida.
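The rotation described above can be sketched directly: the mean wind vector rotates like any vector, while the variances and covariance follow the usual quadratic transformation under a rotation by the flight-path azimuth. The wind statistics below are illustrative values, not Cape Kennedy data.

```python
# Rotate bivariate wind statistics (east-west u, north-south v) into
# components parallel and normal to a flight path at azimuth alpha (degrees).
import math

def rotate_wind_stats(mean_u, mean_v, var_u, var_v, cov_uv, alpha_deg):
    c = math.cos(math.radians(alpha_deg))
    s = math.sin(math.radians(alpha_deg))
    mean_par  =  c * mean_u + s * mean_v        # along-path mean
    mean_perp = -s * mean_u + c * mean_v        # cross-path mean
    var_par  = c * c * var_u + s * s * var_v + 2 * c * s * cov_uv
    var_perp = s * s * var_u + c * c * var_v - 2 * c * s * cov_uv
    return mean_par, mean_perp, var_par, var_perp

# Illustrative: 5 m/s westerly mean wind, rotated to a path at 90 degrees.
m_par, m_perp, v_par, v_perp = rotate_wind_stats(5.0, 0.0, 4.0, 1.0, 0.5, 90.0)
```

Note that the total variance var_par + var_perp is invariant under the rotation, a quick sanity check on any implementation.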
NASA Astrophysics Data System (ADS)
Akgun, Aykut; Erkan, Oguzhan
2015-04-01
In Turkey, landslides are among the most important natural hazards. Landslides adversely affect landforms and man-made structures and may cause many injuries and loss of life. In this context, landslide susceptibility assessment is an important task for identifying areas prone to landslide occurrence. In particular, several dam reservoir areas in Turkey are threatened by landslide phenomena. For this reason, a dam reservoir area in northern Turkey was selected in this study and investigated in terms of landslide susceptibility. A landslide susceptibility assessment for the Kurtun dam reservoir area (Gumushane, North Turkey) was carried out using geographical information system (GIS)-based statistical and deterministic models. For this purpose, frequency ratio (FR) and stability index mapping (SINMAP) methodologies were applied, considering eight conditioning parameters: altitude, lithology, slope gradient, slope aspect, distance to drainage, distance to lineament, stream power index (SPI), and topographical wetness index (TWI). After assessment of these parameters by the FR and SINMAP methods in a GIS environment, two landslide susceptibility maps were obtained and then analyzed for verification using the area under curve (AUC) approach. AUC values of 0.73 and 0.70 were found for the FR and SINMAP methods, respectively. Additionally, the SINMAP statistical results showed that 93.8% of the observed landslides in the area fall between the lower and upper thresholds of the stability index classes. These values indicate that the accuracies of the landslide susceptibility maps are acceptable and that the maps are suitable for further natural hazard management efforts in the area.
NASA Astrophysics Data System (ADS)
Meinhardt, Markus; Fink, Manfred; Tünschel, Hannes
2015-04-01
Vietnam is regarded as a country strongly impacted by climate change. Population and economic growth place additional pressures on the ecosystems in the region. In particular, changes in landuse and precipitation extremes lead to a higher landslide susceptibility in the study area (approx. 12,400 km2), located in central Vietnam and subject to a tropical monsoon climate. Hence, this natural hazard is a serious problem in the study area. A probability assessment of landslides is therefore undertaken through the use of bivariate statistics. However, the landslide inventory, based only on field campaigns, does not cover the whole area. To avoid a systematic bias due to the limited mapping area, the investigated regions are represented as the viewshed in the calculations. On this basis, the distribution of the landslides is evaluated in relation to the maps of 13 parameters, showing the strongest correlation with distance to roads and precipitation increase. An additional weighting of the input parameters leads to better results, since some parameters contribute more to landslides than others. The method developed in this work is based on the validation of different parameter sets used within the statistical index method. It is called "omit error" because omitting each parameter in turn yields the weightings, which describe how strongly every single parameter improves or degrades the objective function. Furthermore, this approach is used to find a better input parameter set by excluding some parameters. After this optimization, nine input parameters are left; weighted by the omit error method, they provide the best susceptibility map, with a success rate of 92.9% and a prediction rate of 92.3%. This is an improvement of 4.4% and 4.2%, respectively, compared to the basic statistical index method with the 13 input parameters.
NASA Astrophysics Data System (ADS)
Hong, Haoyuan; Pourghasemi, Hamid Reza; Pourtaghi, Zohre Sadat
2016-04-01
Landslides are an important natural hazard that causes a great amount of damage around the world every year, especially during the rainy season. The Lianhua area is located in the middle of China's southern mountainous area, west of Jiangxi Province, and is known to be an area prone to landslides. The aim of this study was to evaluate and compare landslide susceptibility maps produced using the random forest (RF) data mining technique with those produced by bivariate (evidential belief function and frequency ratio) and multivariate (logistic regression) statistical models for Lianhua County, China. First, a landslide inventory map was prepared using aerial photograph interpretation, satellite images, and extensive field surveys. In total, 163 landslide events were recognized in the study area, with 114 landslides (70%) used for training and 49 landslides (30%) used for validation. Next, the landslide conditioning factors, including the slope angle, altitude, slope aspect, topographic wetness index (TWI), slope-length (LS), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, annual precipitation, land use, normalized difference vegetation index (NDVI), and lithology, were derived from the spatial database. Finally, the landslide susceptibility maps of Lianhua County were generated in ArcGIS 10.1 based on the random forest (RF), evidential belief function (EBF), frequency ratio (FR), and logistic regression (LR) approaches and were validated using a receiver operating characteristic (ROC) curve. The ROC plot assessment results showed that for landslide susceptibility maps produced using the EBF, FR, LR, and RF models, the area under the curve (AUC) values were 0.8122, 0.8134, 0.7751, and 0.7172, respectively. Therefore, we can conclude that all four models have an AUC of more than 0.70 and can be used in landslide susceptibility mapping in the study area; meanwhile, the EBF and FR models had the best performance for Lianhua
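The ROC/AUC validation used throughout these susceptibility studies reduces to a rank comparison: the AUC is the probability that a randomly chosen landslide cell receives a higher susceptibility score than a randomly chosen non-landslide cell. The toy scores below are invented, not the Lianhua results.

```python
# Rank-based AUC: fraction of (positive, negative) pairs ranked correctly,
# counting ties as half. scores = susceptibility values, labels = 1 for
# observed landslide cells, 0 otherwise.
def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

a = auc([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0])
```

An AUC of 0.5 means the map ranks cells no better than chance; values above roughly 0.7, as reported for all four models above, are conventionally taken as acceptable.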
Bivariate extreme value distributions
NASA Technical Reports Server (NTRS)
Elshamy, M.
1992-01-01
In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
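One concrete member of the family described above is the logistic (Gumbel) model, in which the joint CDF is built from the marginal CDFs and a single dependence parameter r >= 1, with r = 1 giving independence. The sketch below uses standard Gumbel margins purely for illustration; the memorandum's Weibull and Frechet forms follow the same construction with different margins.

```python
# Logistic bivariate extreme value CDF:
#   F(x, y) = exp(-[(-ln F1(x))^r + (-ln F2(y))^r]^(1/r)),  r >= 1.
import math

def gumbel_cdf(x, mu=0.0, beta=1.0):
    """Standard Gumbel (type I extreme value) marginal CDF."""
    return math.exp(-math.exp(-(x - mu) / beta))

def bivariate_logistic_cdf(x, y, r):
    a = (-math.log(gumbel_cdf(x))) ** r
    b = (-math.log(gumbel_cdf(y))) ** r
    return math.exp(-(a + b) ** (1.0 / r))

F_indep = bivariate_logistic_cdf(1.0, 1.0, r=1.0)  # reduces to product of margins
F_dep   = bivariate_logistic_cdf(1.0, 1.0, r=2.0)  # positive dependence
```

Positive dependence raises the joint probability of both variables staying below their thresholds, which is why F_dep exceeds F_indep at the same point.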
Local osmosis and isotonic transport.
Mathias, R T; Wang, H
2005-11-01
Osmotically driven water flow, u (cm/s), between two solutions of identical osmolarity, c_o (300 mM in mammals), has a theoretical isotonic maximum given by u = j/c_o, where j (moles/cm^2/s) is the rate of salt transport. In many experimental studies, transport was found to be indistinguishable from isotonic. The purpose of this work is to investigate the conditions for u to approach isotonic. A necessary condition is that the membrane salt/water permeability ratio, epsilon, must be small: typical physiological values are epsilon = 10^-3 to 10^-5, so epsilon is generally small, but this is not sufficient to guarantee near-isotonic transport. If we consider the simplest model of two series membranes, which secrete a tear or drop of sweat (i.e., there are no externally imposed boundary conditions on the secretion), diffusion is negligible and the predicted osmolarities are: basal = c_o, intracellular ≈ (1 + epsilon)c_o, secretion ≈ (1 + 2*epsilon)c_o, and u ≈ (1 - 2*epsilon)j/c_o. Note that this model is also appropriate when the transported solution is experimentally collected. Thus, in the absence of external boundary conditions, transport is experimentally indistinguishable from isotonic. However, if external boundary conditions set salt concentrations to c_o on both sides of the epithelium, then fluid transport depends on distributed osmotic gradients in lateral spaces. If lateral spaces are too short and wide, diffusion dominates convection, reduces osmotic gradients, and fluid flow is significantly less than isotonic. Moreover, because apical and basolateral membrane water fluxes are linked by the intracellular osmolarity, water flow is maximum when the total water permeability of basolateral membranes equals that of apical membranes. In the context of the renal proximal tubule, data suggest it is transporting at near optimal conditions. Nevertheless, typical physiological values suggest the newly filtered fluid is
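The two-membrane result quoted above can be checked with back-of-envelope arithmetic: at the upper end of the physiological range for epsilon, the secretion is hypertonic by only a fraction of a percent. The transport rate j below is an invented illustrative value.

```python
# Numbers from the series-membrane model: secretion ~ (1 + 2*eps)*c_o,
# flow ~ (1 - 2*eps)*j/c_o, i.e. indistinguishable from isotonic for small eps.
c_o = 300.0   # mM, ambient osmolarity (mammals)
j = 1e-9      # moles/cm^2/s, illustrative salt transport rate (assumed)
eps = 1e-3    # upper end of the typical physiological range

secretion = (1 + 2 * eps) * c_o   # ~300.6 mM: only 0.2% hypertonic
u = (1 - 2 * eps) * j / c_o       # ~99.8% of the isotonic maximum j/c_o
```

At eps = 10^-5 the deviation shrinks to 0.002%, far below the resolution of osmolality measurements, which is the paper's point about experimental indistinguishability.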
Bivariate control chart with copula
NASA Astrophysics Data System (ADS)
Lestari, Tika; Syuhada, Khreshna; Mukhaiyar, Utriweni
2015-12-01
The control chart is a primary and powerful tool in statistical process control for detecting and classifying data as either in control or out of control. Its concept basically refers to the theory of prediction intervals. Accordingly, in this paper, we aim to construct so-called predictive bivariate control charts, both classical and copula-based. We argue that an appropriate joint distribution function may be well estimated by employing a copula. A numerical analysis is carried out to illustrate that a copula-based control chart outperforms the classical one.
Five-Parameter Bivariate Probability Distribution
NASA Technical Reports Server (NTRS)
Tubbs, J.; Brewer, D.; Smith, O. W.
1986-01-01
This NASA technical memorandum presents four papers about the five-parameter bivariate gamma class of probability distributions. With some overlap of subject matter, the papers address different aspects of the theory of these distributions and their use in forming statistical models of such phenomena as wind gusts. The class provides acceptable results for defining constraints in problems of designing aircraft and spacecraft to withstand large wind-gust loads.
Isotonic water transport in secretory epithelia.
Swanson, C H
1977-01-01
The model proposed by Diamond and Bossert [1] for isotonic water transport has received wide acceptance in recent years. It assumes that the local driving force for water transport is a standing osmotic gradient produced in the lateral intercellular spaces of the epithelial cell layer by active solute transport. While this model is based on work done in absorptive epithelia where the closed to open direction of the lateral space and the direction of net transport are the same, it has been proposed that the lateral spaces could also serve as the site of the local osmotic gradients for water transport in secretory epithelia, where the closed to open direction of the lateral space and net transport are opposed, by actively transporting solute out of the space rather than into it. Operation in the backward direction, however, requires a lower than ambient hydrostatic pressure within the lateral space which would seem more likely to cause the space to collapse with loss of function. On the other hand, most secretory epithelia are characterized by transport into a restricted ductal system which is similar to the lateral intercellular space in the absorptive epithelia in that its closed to open direction is the same as that of net transport. In vitro micropuncture studies on the exocrine pancreas of the rabbit indicate the presence of a small but statistically significant increase in juice osmolality, 6 mOsm/kg H2O, at the site of electrolyte and water secretion in the smallest extralobular ducts with secretin stimulation which suggests that the ductal system in the secretory epithelia rather than the lateral intercellular space is the site of the local osmotic gradients responsible for isotonic water transport. PMID:331693
Covariate analysis of bivariate survival data
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
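The Buckley-James idea underlying the parametric approach can be illustrated with the simplest possible case: each censored observation is replaced by its expected failure time given that it exceeds the censoring time. For an exponential lifetime with rate lam, the memoryless property gives E[T | T > c] = c + 1/lam. The exponential assumption and the numbers below are purely illustrative; the dissertation uses fitted semiparametric and parametric estimates instead.

```python
# Buckley-James-style imputation sketch under an (assumed) exponential
# lifetime: censored times are replaced by their conditional expectations.
def impute_censored(times, censored, lam):
    """Replace each censored time c by E[T | T > c] = c + 1/lam."""
    return [t + 1.0 / lam if is_cens else t
            for t, is_cens in zip(times, censored)]

# Two observed failures and one observation censored at t = 5.0, rate 0.5:
revised = impute_censored([2.0, 3.5, 5.0], [False, False, True], lam=0.5)
```

Model fitting then proceeds on the revised data set of uncensored values and imputed expectations, exactly as described in the abstract.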
Alternative geographical display strategies for bivariate relationships
Waterhouse, J.C.; Farrell, M.P.; Strand, R.H.
1981-01-01
The complexity of regional water management problems necessitates the summarization of water resource data in a form that is easily analyzed visually. Computer-generated maps are well suited for displaying complex relationships since they are easy to create and modify and are relatively inexpensive compared with traditional techniques. The development of new display techniques raises questions about the most effective display of data used for generic assessments. Recent attempts at displaying bivariate data geographically have not been entirely successful, due to the numerous data categories and complex color patterns used in cross-mapping techniques. This paper offers an alternative method of displaying bivariate data which uses the degree of difference between two variables (e.g., water supply and water demand) to create a scale in which shades of two primary colors are used for positive and negative differences while a third color is used for the null case (i.e., within the confidence limits of the statistical model).
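The difference-based scale can be sketched as a small classifier; the color names, shade counts and confidence limits below are hypothetical, chosen only to illustrate the positive/negative/null scheme:

```python
def classify_difference(diff, lower, upper, n_shades=3):
    """Map a supply-demand difference to a display class:
    shades of one primary color for negative differences, shades of
    another for positive ones, and a third 'null' color when the
    difference lies inside the confidence limits."""
    if lower <= diff <= upper:
        return ("null", 0)
    color = "blue" if diff > upper else "red"      # hypothetical color choice
    excess = abs(diff - (upper if diff > upper else lower))
    shade = min(n_shades, 1 + int(excess))         # darker shade further from the limits
    return (color, shade)

print(classify_difference(0.2, -1.0, 1.0))   # within limits
print(classify_difference(3.5, -1.0, 1.0))   # large positive difference
```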
NASA Astrophysics Data System (ADS)
Piedade, Aldina; Alves, Tiago; Luís Zêzere, José
2016-04-01
Mass Transport Deposits (MTDs) are among the most important processes shaping passive and active margins. They occur frequently, and their characteristics, features and processes have been well documented using diverse approaches and methodologies. In this work a methodology for evaluating MTD occurrence is tested in an area offshore the Espírito Santo Basin, SE Brazil. The MTD inventory was made on a three-dimensional (3D) seismic volume by interpreting high-amplitude reflections corresponding to the tops and bases of the MTDs. The inventory consists of four MTDs, which were integrated into a GIS database. MTD favourability scores are computed using algorithms based on statistical/probabilistic analysis (the Information Value Method) over unique-condition terrain units on a raster basis. Terrain attributes derived from the Digital Terrain Model (DTM) are interpreted as proxies for the driving factors of MTDs and are used as predictors in our models, which are based on a set of different MTD inventories. Three models are elaborated independently according to the area of the MTD body (Model 1, Model 2 and Model 3). The final result is prepared by sorting all pixels according to their favourability value in descending order. The robustness and accuracy of the MTD favourability models are evaluated by success-rate curves, which are used for the quantitative interpretation of the models, expressing the goodness of fit of the MTDs. In addition, a sensitivity analysis was performed and the predisposing factors with the highest prediction performance for MTD occurrence were identified. The results allow us to conclude that the method is valid for submarine slopes, as demonstrated by the high goodness of fit obtained (0.862). This work is pioneering: the methodology had not previously been applied to the submarine environment. It is a promising and valid methodology for the industry for predicting failure and instability of submarine slopes.
Collective structure of the N=40 isotones
Gaudefroy, L.; Peru, S.; Pillet, N.; Hilaire, S.; Delaroche, J.-P.; Girod, M.; Obertelli, A.
2009-12-15
The structure of even-even N=40 isotones is studied from drip line to drip line through the systematic investigation of their quadrupole modes of excitation. Calculations are performed within the Hartree-Fock-Bogoliubov approach using the Gogny D1S effective interaction. Where relevant, these calculations are extended beyond mean field within a generator-coordinate-based method. An overall good agreement with available experimental data is reported, showing that collectivity increases from the neutron to the proton drip line. Whereas 60Ca and 68Ni display a calculated spherical shape in their ground states, all other isotones show a prolate-deformed ground-state band and a quasi-γ band. Coexistence features are predicted in the neutron-deficient N=40 isotones above 74Se.
Some properties of a 5-parameter bivariate probability distribution
NASA Technical Reports Server (NTRS)
Tubbs, J. D.; Brewer, D. W.; Smith, O. E.
1983-01-01
A five-parameter bivariate gamma distribution having two shape parameters, two location parameters and a correlation parameter was developed. This more general bivariate gamma distribution reduces to the known four-parameter distribution. The five-parameter distribution gives a better fit to the gust data. The statistical properties of this general bivariate gamma distribution and a hypothesis test were investigated. Although these developments have come too late in the Shuttle program to be used directly as design criteria for ascent wind gust loads, the new wind gust model has helped to explain the wind profile conditions which cause large dynamic loads. Other potential applications of the newly developed five-parameter bivariate gamma distribution are in the areas of reliability theory, signal noise, and vibration mechanics.
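A correlated bivariate gamma can be illustrated with the classical trivariate-reduction construction (a generic sketch, not the paper's five-parameter family): X = G0 + G1 and Y = G0 + G2 share the common component G0, which induces positive correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bivariate_gamma(n, k0, k1, k2, scale=1.0):
    """Trivariate reduction: X and Y share the gamma component G0,
    so Cov(X, Y) = Var(G0) and the correlation is
    k0 / sqrt((k0 + k1) * (k0 + k2))."""
    g0 = rng.gamma(k0, scale, n)
    x = g0 + rng.gamma(k1, scale, n)
    y = g0 + rng.gamma(k2, scale, n)
    return x, y

x, y = bivariate_gamma(100_000, k0=2.0, k1=1.0, k2=3.0)
# theoretical correlation: 2 / sqrt(3 * 5) ≈ 0.516
print(np.corrcoef(x, y)[0, 1])
```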
Bivariate Kumaraswamy distribution with an application on earthquake data
Özel, Gamze
2015-03-10
The bivariate Kumaraswamy (BK) distribution, whose marginals are Kumaraswamy distributions, has recently been introduced. However, its statistical properties have not been studied in detail. In this study, statistical properties of the BK distribution are investigated. We suggest that the BK distribution could provide a suitable description for the earthquake characteristics of Turkey. We support this argument using earthquakes that occurred in Turkey between 1900 and 2009. We also find that the BK distribution simulates earthquakes well.
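The univariate Kumaraswamy marginal has the closed-form CDF F(x) = 1 − (1 − x^a)^b on (0, 1), so it can be simulated by inverse-transform sampling (a sketch of the marginal only; the paper's bivariate construction is not reproduced here):

```python
import numpy as np

def kumaraswamy_sample(n, a, b, rng=None):
    """Inverse-CDF sampling: solving u = 1 - (1 - x**a)**b for x
    gives x = (1 - (1 - u)**(1/b))**(1/a)."""
    if rng is None:
        rng = np.random.default_rng(0)
    u = rng.uniform(size=n)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

x = kumaraswamy_sample(100_000, a=2.0, b=3.0)
# mean of Kumaraswamy(a, b) is b * Beta(1 + 1/a, b); for a=2, b=3 it is ~0.4571
print(x.mean())
```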
Mechanism of isotonic water transport in glands.
Ussing, H H; Eskesen, K
1989-07-01
Since water and electrolytes pass cell membranes via separate channels, there can be no interactions in the membranes, and osmotic interactions between water and solutes can be expressed as the product of solute flux, the frictional coefficient of the solute, and the length of the pathway. It becomes clear that isotonic transport via a cell is impossible. In glands, where cation-selective junctions impede anion flux between the cells, isotonic water transport is only possible if sodium, after having passed the junction, is reabsorbed in the acinus and returned to the serosal side. Thus it can be recycled via the cation-selective junction and exert its drag on water more than once. This hypothesis was tested on frog skin glands. Skins were mounted in flux chambers with identical Ringer solutions on both sides. Na channels of the principal cells were closed with amiloride in the outside solution, and secretion was stimulated with noradrenaline in the inside solution. Influx and efflux of Na, K and Br (used as a tracer for Cl) were measured on paired half-skins during the constant-secretion phase. Flux ratios for both Na and K were higher than expected for electrodiffusion, indicating outgoing solvent drag. Flux ratios for K were much higher than those for Na. This is in agreement with the concept that Na is reabsorbed in the acinus and K is not. Two independent expressions for the degree of sodium recycling are developed. Under all experimental conditions these expressions give values for the recycling which are in good agreement. PMID:2473601
Chernyshev, Andrey V; Tarasov, Peter A; Semianov, Konstantin A; Nekrasov, Vyacheslav M; Hoekstra, Alfons G; Maltsev, Valeri P
2008-03-01
A mathematical model of erythrocyte lysis in an isotonic solution of ammonium chloride is presented within the framework of a statistical approach. The model is used to evaluate several parameters of mature erythrocytes (volume, surface area, hemoglobin concentration, number of anionic exchangers on the membrane, elasticity and critical tension of the membrane) through their sphering and lysis measured by a scanning flow cytometer (SFC). The SFC allows measuring the light-scattering pattern (indicatrix) of an individual cell over the angular range from 10° to 60°. Comparison of the experimentally measured and theoretically calculated light-scattering patterns allows discrimination of spherical from non-spherical erythrocytes and evaluation of the volume and hemoglobin concentration of individual spherical cells. Three different processes were applied for erythrocyte sphering: (1) colloid osmotic lysis in an isotonic solution of ammonium chloride, (2) isovolumetric sphering in the presence of sodium dodecyl sulphate and albumin in neutrally buffered isotonic saline, and (3) an osmotic fragility test in hypotonic media. For the hemolysis in ammonium chloride, the evolution of the distributions of sphered erythrocytes over volume and hemoglobin content was monitored in real-time experiments. The analysis of the experimental data was performed in the context of a statistical approach, taking into account that the parameters of erythrocytes vary from cell to cell. PMID:18083194
A bivariate pseudo Gamma distribution with application to acid rain data
NASA Astrophysics Data System (ADS)
Pilz, J.; Mohsin, M.; Gebhardt, A.
2012-04-01
Univariate and bivariate Gamma distributions are extensively used for statistical modeling in climatology. In this paper, a bivariate pseudo Gamma distribution is used to model the proportion of acidity and major ions in rain. The model parameters of the bivariate pseudo Gamma distribution are estimated by the maximum likelihood method. The plots of the distribution of the proportions are compared to the histograms of the observed data of the proportions of acidity and major ions in rain. The fitted pdf appears to follow the general pattern in the histograms closely.
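The maximum likelihood fitting of a gamma marginal can be sketched with SciPy on synthetic data (illustrative only; the paper's joint pseudo-gamma likelihood is not shown):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic stand-in for an observed 'acidity proportion' sample
proportions = rng.gamma(shape=3.0, scale=0.1, size=2000)

# fit a gamma by maximum likelihood; floc=0 pins the location at zero
shape, loc, scale = stats.gamma.fit(proportions, floc=0)
print(shape, scale)  # should be near the generating values 3.0 and 0.1
```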
Laberko, E L; Bogomil'sky, M R; Soldatsky, Yu L; Pogosova, I E
2016-01-01
The objective of the present study was to evaluate the influence of an isotonic saline solution containing benzalkonium chloride and of a hypertonic seawater solution on the function of the ciliary epithelium of the nasal cavity in vitro. To this effect, we investigated cytological material obtained from 35 children presenting with adenoid tissue hypertrophy. The tissue samples were taken from the nasal cavity by the standard method. The cellular biopsy obtained from each patient was distributed between three tubes containing, respectively, isotonic saline solution supplemented with benzalkonium chloride (0.1 mg/ml), a hypertonic seawater solution, and a standard physiological saline solution. It was shown that the number of viable cells in both isotonic solutions was statistically comparable and significantly higher than in the hypertonic solution (p<0.05). The ciliary beat frequency of the cells embedded in the two isotonic solutions was not significantly different but considerably exceeded that in the hypertonic seawater solution (p<0.05). Thus, the present study has demonstrated the absence of a ciliotoxic influence of isotonic saline solution containing benzalkonium chloride at a concentration of 0.1 mg/ml, and a strong ciliotoxic effect of the hypertonic seawater solution. This finding gives reason to recommend isotonic solutions for regular application, whereas hypertonic solutions can be prescribed only during infectious and/or inflammatory ENT diseases. PMID:27213656
A new bivariate negative binomial regression model
NASA Astrophysics Data System (ADS)
Faroughi, Pouya; Ismail, Noriszura
2014-12-01
This paper introduces a new form of bivariate negative binomial (BNB-1) regression which can be fitted to bivariate and correlated count data with covariates. The BNB regression discussed in this study can be fitted to bivariate and overdispersed count data with positive, zero or negative correlations. The joint p.m.f. of the BNB-1 distribution is derived from the product of two negative binomial marginals with a multiplicative factor parameter. Several testing methods were used to check overdispersion and the goodness of fit of the model. The application of BNB-1 regression is illustrated on a Malaysian motor insurance data set. The results indicated that BNB-1 regression provides a better fit than the bivariate Poisson and BNB-2 models with regard to the Akaike information criterion.
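The product-of-marginals-with-multiplicative-factor idea can be sketched as follows. The exact parameterization used in the paper is not given here, so this follows a common multiplicative-factor form (an assumption) in which the factor sums out and leaves the negative binomial marginals intact:

```python
import numpy as np
from scipy.stats import nbinom

def bnb1_pmf(x, y, r1, p1, r2, p2, lam):
    """Joint pmf P(x, y) = f1(x) f2(y) [1 + lam (e^-x - c1)(e^-y - c2)],
    where ci = E[exp(-Xi)]; centering by ci makes the factor average
    to zero, so the marginals stay negative binomial."""
    grid = np.arange(0, 500)
    c1 = np.sum(np.exp(-grid) * nbinom.pmf(grid, r1, p1))
    c2 = np.sum(np.exp(-grid) * nbinom.pmf(grid, r2, p2))
    return (nbinom.pmf(x, r1, p1) * nbinom.pmf(y, r2, p2)
            * (1.0 + lam * (np.exp(-x) - c1) * (np.exp(-y) - c2)))

# the multiplicative factor integrates out: the joint pmf still sums to ~1
xs, ys = np.meshgrid(np.arange(200), np.arange(200))
print(bnb1_pmf(xs, ys, r1=2, p1=0.4, r2=3, p2=0.5, lam=0.5).sum())
```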
Nonparametric Analysis of Bivariate Gap Time with Competing Risks
Huang, Chiung-Yu; Wang, Chenguang; Wang, Mei-Cheng
2016-01-01
This article considers nonparametric methods for studying recurrent disease and death with competing risks. We first point out that comparisons based on the well-known cumulative incidence function can be confounded by different prevalence rates of the competing events, and that comparisons of the conditional distribution of the survival time given the failure event type are more relevant for investigating the prognosis of different patterns of recurrent disease. We then propose nonparametric estimators for the conditional cumulative incidence function as well as the conditional bivariate cumulative incidence function for the bivariate gap times, that is, the time to disease recurrence and the residual lifetime after recurrence. To quantify the association between the two gap times in the competing risks setting, a modified Kendall's tau statistic is proposed. The proposed estimators for the conditional bivariate cumulative incidence distribution and the association measure account for the induced dependent censoring of the second gap time. Uniform consistency and weak convergence of the proposed estimators are established. Hypothesis testing procedures for two-sample comparisons are discussed. Numerical simulation studies with practical sample sizes are conducted to evaluate the performance of the proposed nonparametric estimators and tests. An application to data from a pancreatic cancer study is presented to illustrate the methods developed in this article. PMID:26990686
ERIC Educational Resources Information Center
Moses, Tim; Holland, Paul W.
2010-01-01
In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…
Mineral Composition and Nutritive Value of Isotonic and Energy Drinks.
Leśniewicz, Anna; Grzesiak, Magdalena; Żyrnicki, Wiesław; Borkowska-Burnecka, Jolanta
2016-04-01
Several very popular brands of isotonic and energy drinks consumed for fluid and electrolyte supplementation and for stimulation of mental or physical alertness were chosen for investigation. Liquid beverages available in polyethylene bottles and aluminum cans as well as products in the form of tablets and powder in sachets were studied. The total concentrations of 21 elements (Ag, Al, B, Ba, Ca, Cd, Co, Cr, Cu, Fe, Mg, Mn, Mo, Na, Ni, P, Pb, Sr, Ti, V, and Zn), both essential and toxic, were simultaneously determined in preconcentrated drink samples by inductively coupled plasma-optical emission spectrometry (ICP-OES) equipped with pneumatic and ultrasonic nebulizers. Differences between the mineral compositions of isotonic and energy drinks were evaluated and discussed. The highest content of Na was found in both isotonic and energy drinks, whereas quite high concentrations of Mg were found in isotonic drinks, and the highest amount of calcium was quantified in energy drinks. The concentrations of B, Co, Cu, Ni, and P were higher in isotonic drinks, but energy drinks contained greater quantities of Ag, Cr, Zn, Mn, and Mo, and of toxic elements such as Cd and Pb. A comparison of element contents with micronutrient intake and tolerable levels was performed to evaluate the contribution of the investigated beverages to the daily diet. The consumption of 250 cm³ of an isotonic drink provides from 0.32% (for Mn) up to 14.8% (for Na) of the recommended daily intake. For the energy drinks, the maximum recommended daily intake fulfillment ranged from 0.02% (for V) to 19.4 or 19.8% (for Mg and Na). PMID:26286964
A microscopic explanation of the isotonic multiplet at N=90
Gupta, J. B.
2014-08-14
The shape phase transition from spherical to soft deformed at N=88-90 was observed long ago. After the prediction of the X(5) symmetry, for which an analytical solution of the nuclear Hamiltonian is given [1], good examples of X(5) nuclei were identified in recent works among the N=90 isotones of Nd, Sm, Gd and Dy. The N=90 isotones have almost identical deformed level structures, forming an isotonic multiplet in the Z=50-66, N=82-104 quadrant. This is explained microscopically in terms of the Nilsson level diagram. Using the Dynamic Pairing-Plus-Quadrupole model of Kumar-Baranger, the quadrupole deformation and the occupancies of the neutrons and protons in these nuclei have been calculated, which supports the formation of the N=88, 90 isotonic multiplets. The existence of F-spin multiplets in the Z=66-82, N=82-104 quadrant, identified in earlier works on the Interacting Boson Model, is also explained in our study.
An Annotated Bibliography of Isotonic Weight-Training Methods.
ERIC Educational Resources Information Center
Wysong, John V.
This literature study was conducted to compare and evaluate various types and techniques of weight lifting so that a weight lifting program could be selected or devised for a secondary school. Annotations of 32 research reports, journal articles, and monographs on isotonic strength training are presented. The literature in the first part of the…
Proton pygmy resonances: Predictions for N = 20 isotones
NASA Astrophysics Data System (ADS)
Kim, Y.; Papakonstantinou, P.
2016-06-01
We study theoretically the low-energy electric-dipole response of N = 20 isotones. We present results from a quasiparticle random-phase approximation (QRPA) and a continuum random-phase approximation (CRPA), and we compare them with results for the mirror Z = 20 nuclei. According to our analysis, enhanced E1 strength is expected energetically well below the giant dipole resonance in the proton-rich isotones. Large amounts of E1 strength in the asymmetric N = 20 isotones are predicted, markedly unlike their equally asymmetric Z = 20 mirror nuclei, pointing unambiguously to the role of structural effects such as loose binding. A proton-skin oscillation could develop especially in 46Fe . The isoscalar response is predicted strong in all isotones. The proper description of non-localized threshold transitions and the nucleon effective mass in mean-field treatments may affect theoretical predictions. We call for systematic theoretical investigations to quantify the role of bulk-matter properties in anticipation of measurements of E1 transitions in proton-rich nuclei.
Survival Analysis using Bivariate Archimedean Copulas
NASA Astrophysics Data System (ADS)
Chandra, Krishnendu
In this dissertation we solve the nonidentifiability problem of Archimedean copula models based on dependent censored data (see [Wang, 2012]). We give a set of identifiability conditions for a special class of bivariate frailty models. Our simulation results show that our proposed model is identifiable under our proposed conditions. We use the EM algorithm to estimate the unknown parameters, and the proposed estimation approach can be applied to fit dependent censored data when the dependence is of research interest. The marginal survival functions can be estimated using the copula-graphic estimator (see [Zheng and Klein, 1995] and [Rivest and Wells, 2001]) or the estimator proposed by [Wang, 2014]. We also propose two model selection procedures for Archimedean copula models, one for uncensored data and the other for right-censored bivariate data. Our simulation results are similar to those of [Wang and Wells, 2000] and suggest that both procedures work quite well. The idea of our proposed model selection procedure originates from the model selection procedure for Archimedean copula models proposed by [Wang and Wells, 2000] for right-censored bivariate data using the L2 norm corresponding to the Kendall distribution function. A suitable bootstrap procedure is yet to be suggested for our method. We further propose a new parameter estimator and a simple goodness-of-fit test for Archimedean copula models when the bivariate data are under fixed left truncation. Our simulation results suggest that our procedure needs to be improved so that it can be more powerful, reliable and efficient. In our strategy, to obtain estimates for the unknown parameters, we heavily exploit the concept of truncated tau (a measure of association established by [Manatunga and Oakes, 1996] for left-truncated data). The idea of our goodness-of-fit test originates from the goodness-of-fit test for Archimedean copula models proposed by [Wang, 2010] for right-censored bivariate data.
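For Archimedean copulas, Kendall's tau often has a closed form; for the Clayton copula τ = θ/(θ + 2), so θ can be estimated by inverting an empirical tau. A generic sketch on complete data (not the dissertation's censored-data or truncated-tau estimator):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)

def sample_clayton(n, theta):
    """Clayton copula via the Marshall-Olkin frailty algorithm:
    V ~ Gamma(1/theta), U_i = (1 + E_i / V)**(-1/theta)."""
    g = rng.gamma(1.0 / theta, 1.0, n)
    e1, e2 = rng.exponential(size=(2, n))
    u = (1.0 + e1 / g) ** (-1.0 / theta)
    v = (1.0 + e2 / g) ** (-1.0 / theta)
    return u, v

u, v = sample_clayton(20_000, theta=2.0)   # true tau = 2 / (2 + 2) = 0.5
tau, _ = kendalltau(u, v)
theta_hat = 2.0 * tau / (1.0 - tau)        # inversion of Kendall's tau
print(theta_hat)  # close to 2.0
```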
Bivariate flood frequency analyses using Copula function
NASA Astrophysics Data System (ADS)
Sraj, Mojca; Bezak, Nejc; Brilly, Mitja
2013-04-01
The objectives of the study were (1) to perform all steps of a flood frequency analysis using the copula approach, (2) to select the most appropriate copula function and (3) to evaluate the conditional bivariate return periods for the following pairs of variables: peak-volume, volume-duration and peak-duration, respectively. Flood frequency analyses are usually made with univariate distribution functions, and in most cases only peaks are considered. However, hydrological processes are multidimensional, so it is reasonable to consider more than one variable in the analysis. Different marginal distributions can be used for copula modelling. A copula function successfully models the dependence between two or more dependent variables, and the determination of marginal distributions and the copula selection are two separate processes. The hydrological station Litija on the Sava river is one of the oldest stations in Slovenia and lies in the eastern part of the country. 58 years of annual maxima were used for the analysis, and a three-point graphical method was used for base flow separation. The log-Pearson type 3 distribution was selected as the marginal distribution of peaks and durations; the Pearson type 3 distribution was chosen as the marginal distribution of volumes. Some frequently used copula functions from the Archimedean (Gumbel-Hougaard, Frank, Joe, Clayton, BB1 and Ali-Mikhail-Haq), Elliptical (Student-t and Normal) and Extreme value (Galambos, Hüsler-Reiss and Tawn) families were applied to the data. Copula parameters were estimated with the method of moments based on the inversion of Kendall's tau and with the maximum likelihood method. Graphical and statistical tests were applied for the comparison of the different copula functions. For the pair peak-duration the Kendall correlation coefficient was negative and only copulas able to model negative dependence were used. The Gumbel-Hougaard, Frank and Ali-Mikhail-Haq copulas were selected as optimal based on test results for the pairs: peak-volume, volume
Review of some results in bivariate density estimation
NASA Technical Reports Server (NTRS)
Scott, D. W.
1982-01-01
Results are reviewed for choosing smoothing parameters for some bivariate density estimators. Experience gained in comparing the effects of smoothing parameters on probability density estimators for univariate and bivariate data is summarized.
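The effect of the smoothing parameter on a bivariate kernel density estimate can be illustrated with SciPy's Gaussian KDE, where a scalar bw_method is used directly as the bandwidth scaling factor:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# correlated bivariate normal sample; gaussian_kde expects shape (dims, n)
data = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=20_000).T

narrow = gaussian_kde(data, bw_method=0.1)  # little smoothing
wide = gaussian_kde(data, bw_method=0.5)    # heavy smoothing

# more smoothing flattens the peak: the density estimate at the mode drops
print(narrow([[0.0], [0.0]])[0], wide([[0.0], [0.0]])[0])
```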
Predicting Number of Zombies in a DDoS Attacks Using Isotonic Regression
NASA Astrophysics Data System (ADS)
Gupta, B. B.; Jamali, Nadeem
Anomaly-based DDoS detection systems construct a profile of the traffic normally seen in the network, and identify anomalies whenever traffic deviates from the normal profile beyond a threshold. This deviation in traffic beyond the threshold has been used in the past for DDoS detection, but not for estimating the number of zombies. This chapter presents an approach that utilizes this deviation to predict the number of zombies using an isotonic regression model. A relationship is established between the number of zombies and the observed deviation in sample entropy. Internet-type topologies used for simulation are generated using the Transit-Stub model of the GT-ITM topology generator. The NS-2 network simulator on a Linux platform is used as the simulation test bed for launching DDoS attacks with varied numbers of zombies. Various statistical performance measures are used to measure the performance of the regression model. The simulation results are promising, as we are able to predict the number of zombies efficiently with a very low error rate using the isotonic regression model.
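Isotonic regression itself can be computed with the pool-adjacent-violators algorithm (PAVA). A minimal sketch (the chapter's entropy-deviation data are not given, so toy numbers are used):

```python
def isotonic_regression(y):
    """Pool-adjacent-violators: the least-squares fit to y that is
    non-decreasing in the index order."""
    blocks = []                                 # each block: [mean, size]
    for v in y:
        blocks.append([float(v), 1])
        # merge backwards while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    fit = []
    for m, w in blocks:
        fit.extend([m] * w)
    return fit

print(isotonic_regression([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```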
Mitsui, Toshio; Takai, Nobukatsu; Ohshima, Hiroyuki
2011-01-01
Mitsui and Ohshima (2008) criticized the power-stroke model for muscle contraction and proposed a new model. In the new model, about 41% of the myosin heads are bound to actin filaments, and each bound head forms a complex MA3 with three actin molecules A1, A2 and A3, forming the crossbridge. The complexes translate along the actin filament cooperating with each other. The new model explained the experimental data on steady filament sliding well. As an extension of the study, the isometric tension transient and the isotonic velocity transient are investigated. A statistical ensemble of crossbridges is introduced, and variation of the binding probability of the myosin head to A1 is considered. When the binding probability to A1 is zero, the Hill-type force-velocity relation results. When the binding probability to A1 becomes finite, a deviation from the Hill-type force-velocity relation takes place, as observed by Edman (1988). The characteristics of the isometric tension transient observed by Ford, Huxley and Simmons (1977) and of the isotonic velocity transient observed by Civan and Podolsky (1966) are theoretically reproduced. Ratios of the extensibility are estimated as 0.22 for the crossbridge, 0.26 for the myosin filament and 0.52 for the actin filament, consistent with the values determined by X-ray diffraction by Wakabayashi et al. (1994). PMID:21673917
Isotopic Ratio, Isotonic Ratio, Isobaric Ratio and Shannon Information Uncertainty
NASA Astrophysics Data System (ADS)
Ma, Chun-Wang; Wei, Hui-Ling
2014-11-01
The isoscaling and isobaric yield ratio difference (IBD) probes, both of which are constructed from fragment yield ratios, provide cancellation of parameters. Information entropy theory is introduced to explain the physical meaning of the isoscaling and IBD probes. A similarity between the isoscaling and IBD results is found, i.e., the information uncertainty determined by the IBD method equals β − α determined by isoscaling (α (β) is the parameter fitted from the isotopic (isotonic) yield ratio).
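The isoscaling relation has the standard form R21(N, Z) = Y2(N, Z)/Y1(N, Z) = C exp(αN + βZ), so at fixed Z the parameter α is the slope of a log-linear fit of isotopic yield ratios against N (synthetic numbers, for illustration only):

```python
import numpy as np

# synthetic isotopic yield ratios at fixed Z, generated with alpha = 0.4
N = np.arange(10, 16)
alpha_true, logC = 0.4, -1.0
R21 = np.exp(logC + alpha_true * N)

# ln R21 = ln C + alpha * N  ->  the slope of the log-linear fit is alpha
alpha_fit = np.polyfit(N, np.log(R21), 1)[0]
print(alpha_fit)  # recovers 0.4
```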
Bivariate copula in fitting rainfall data
NASA Astrophysics Data System (ADS)
Yee, Kong Ching; Suhaila, Jamaludin; Yusof, Fadhilah; Mean, Foo Hui
2014-07-01
Copulas are widely used in various areas to determine the joint distribution between two variables. The joint distribution of rainfall characteristics obtained using a copula model is preferable to standard bivariate modelling, as copulas are believed to overcome some of its limitations. Six copula models are applied to obtain the most suitable bivariate distribution between two rain gauge stations. The copula models are Ali-Mikhail-Haq (AMH), Clayton, Frank, Galambos, Gumbel-Hougaard (GH) and Plackett. The rainfall data used in the study are selected from rain gauge stations located in the southern part of Peninsular Malaysia, covering the period from 1980 to 2011. The goodness-of-fit test in this study is based on the Akaike information criterion (AIC).
Multiple imputation methods for bivariate outcomes in cluster randomised trials.
DiazOrdaz, K; Kenward, M G; Gomes, M; Grieve, R
2016-09-10
Missing observations are common in cluster randomised trials. The problem is exacerbated when modelling bivariate outcomes jointly, as the proportion of complete cases is often considerably smaller than the proportion having either of the outcomes fully observed. Approaches taken to handling such missing data include the following: complete case analysis, single-level multiple imputation that ignores the clustering, multiple imputation with a fixed effect for each cluster and multilevel multiple imputation. We contrasted the alternative approaches to handling missing data in a cost-effectiveness analysis that uses data from a cluster randomised trial to evaluate an exercise intervention for care home residents. We then conducted a simulation study to assess the performance of these approaches on bivariate continuous outcomes, in terms of confidence interval coverage and empirical bias in the estimated treatment effects. Missing-at-random clustered data scenarios were simulated following a full-factorial design. Across all the missing data mechanisms considered, the multiple imputation methods provided estimators with negligible bias, while complete case analysis resulted in biased treatment effect estimates in scenarios where the randomised treatment arm was associated with missingness. Confidence interval coverage was generally in excess of nominal levels (up to 99.8%) following fixed-effects multiple imputation and too low following single-level multiple imputation. Multilevel multiple imputation led to coverage levels of approximately 95% throughout. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:26990655
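Whatever imputation model is chosen, the m completed-data estimates are pooled with Rubin's rules: the point estimate is the mean of the per-imputation estimates, and the total variance combines the within- and between-imputation components (a generic sketch, not the trial's analysis):

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool m multiply-imputed results:
    qbar = mean(q_i);  T = W + (1 + 1/m) * B,
    where W is the mean within-imputation variance and B is the
    between-imputation variance of the estimates."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()
    W = u.mean()
    B = q.var(ddof=1)
    T = W + (1.0 + 1.0 / m) * B
    return qbar, T

# three imputations of a treatment effect with their estimated variances
print(rubins_rules([1.0, 1.2, 0.9], [0.04, 0.05, 0.04]))
```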
Nonparametric causal inference for bivariate time series
NASA Astrophysics Data System (ADS)
McCracken, James M.; Weigel, Robert S.
2016-02-01
We introduce new quantities for exploratory causal inference between bivariate time series. The quantities, called penchants and leanings, are computationally straightforward to apply, follow directly from assumptions of probabilistic causality, do not depend on any assumed models for the time series generating process, and do not rely on any embedding procedures; these features may provide a clearer interpretation of the results than those from existing time series causality tools. The penchant and leaning are computed based on a structured method for computing probabilities.
Quantitative texton sequences for legible bivariate maps.
Ware, Colin
2009-01-01
Representing bivariate scalar maps is a common but difficult visualization problem. One solution has been to use two-dimensional color schemes, but the results are often hard to interpret and inaccurately read. An alternative is to use a color sequence for one variable and a texture sequence for the other. This has been used, for example, in geology, but has been much less studied than the two-dimensional color scheme, although theory suggests that it should lead to easier perceptual separation of information relating to the two variables. To make a texture sequence more clearly readable, the concept of the quantitative texton sequence (QTonS) is introduced. A QTonS is defined as a sequence of small graphical elements, called textons, where each texton represents a different numerical value and sets of textons can be densely displayed to produce visually differentiable textures. An experiment was carried out to compare two bivariate color coding schemes with two schemes using a QTonS for one bivariate map component and a color sequence for the other. Two different key designs were investigated (a key being a sequence of colors or textures used in obtaining quantitative values from a map). The first design used two separate keys, one for each dimension, in order to measure how accurately subjects could independently estimate the underlying scalar variables. The second key design was two-dimensional and intended to measure the overall integral accuracy that could be obtained. The results show that the accuracy is substantially higher for the QTonS/color-sequence schemes. The hypothesis that texture/color-sequence combinations are better for independent judgments of mapped quantities was supported. A second experiment probed the limits of spatial resolution for QTonSs. PMID:19834229
Bivariate analysis of flood peaks and volumes using copulas. An application to the Danube River
NASA Astrophysics Data System (ADS)
Papaioannou, George; Bacigal, Tomas; Jeneiova, Katarina; Kohnová, Silvia; Szolgay, Jan; Loukas, Athanasios
2014-05-01
A multivariate analysis of flood variables such as flood peaks, volumes, and durations is essential for the design of hydrotechnical projects. Many authors have suggested the use of bivariate distributions for the frequency analysis of flood peaks and volumes, under the supposition that the marginal probability distribution type is the same for these variables. The application of copulas, which are gradually becoming widespread, can overcome this constraint. The selection of the appropriate copula types/families is not extensively treated in the literature and remains a challenge in copula analysis. In this study, a bivariate copula analysis using different copula families is carried out on the basis of flood peaks and the corresponding volumes along a river. This bivariate analysis of flood peaks and volumes is based on daily streamflow data from time series of more than 100 years at several gauged stations on the Danube River. The methodology was applied using annual maximum flood peaks (AMF) together with independent annual maximum volumes of fixed durations of 5, 10, 15, 20, 25, 30, and 60 days. The correlation of the discharge-volume pairs is examined using Kendall's tau correlation analysis. The copula families selected for the bivariate modeling of the extracted discharge-volume pairs are the Archimedean, extreme-value, and other copula families. The performance of the copulas was evaluated with the use of scatterplots of observed and bootstrapped simulated pairs and with formal goodness-of-fit tests. The suitability of the copulas was statistically compared. Archimedean copulas (e.g., Frank and Clayton) proved to be more suitable for the bivariate modeling of floods on the Danube River than the other copula families examined. Overall, the results showed that copulas are effective tools for the bivariate modeling of the two random variables studied.
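The fitting step described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: it estimates Kendall's tau for hypothetical peak-volume pairs (the synthetic data below are invented for the example) and inverts the known Clayton relation tau = theta/(theta + 2) to obtain the copula parameter.

```python
import numpy as np
from scipy.stats import kendalltau

def clayton_theta_from_tau(tau):
    """Invert Kendall's tau for the Clayton copula: tau = theta / (theta + 2)."""
    return 2.0 * tau / (1.0 - tau)

def clayton_cdf(u, v, theta):
    """Clayton copula C(u,v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

# Hypothetical annual-maximum peaks (m^3/s) and volumes (10^6 m^3),
# generated with a shared latent factor so they are positively dependent
rng = np.random.default_rng(42)
z = rng.normal(size=(200, 2))
peaks = np.exp(6.0 + 0.4 * z[:, 0] + 0.3 * z[:, 1])
volumes = np.exp(8.0 + 0.4 * z[:, 0])

tau, _ = kendalltau(peaks, volumes)
theta = clayton_theta_from_tau(tau)
```

In practice one would compare several candidate families (Frank, Clayton, Gumbel, ...) with formal goodness-of-fit tests, as the abstract describes, rather than fixing Clayton in advance.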
The role of paracellular pathways in isotonic fluid transport.
Schultz, S G
1977-01-01
Paracellular pathways across "leaky" epithelia are the major route for transepithelial ionic diffusion. The permselective properties of these pathways suggest that they offer a watery environment through which ions diffuse in their hydrated forms. There is also suggestive evidence that, at least in some tissues, paracellular pathways provide a significant route for transepithelial water flow in response to an osmotic pressure difference; however, this has not as yet been definitively established. The effect of junctional complexes that are permeable to ions and water on the predictions of the standing-osmotic gradient model for isotonic water absorption is considered. PMID:331697
The asymptotic distribution of maxima in bivariate samples
NASA Technical Reports Server (NTRS)
Campbell, J. W.; Tsokos, C. P.
1973-01-01
The joint distribution (as n tends to infinity) of the maxima of a sample of n independent observations of a bivariate random variable (X,Y) is studied. A method is developed for deriving the asymptotic distribution of the maxima, assuming that X and Y possess asymptotic extreme-value distributions and that the probability element dF(x,y) can be expanded in a canonical series. Applied both to the bivariate normal distribution and to the bivariate gamma and compound correlated bivariate Poisson distributions, the method shows that maxima from all these distributions are asymptotically uncorrelated.
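The asymptotic decorrelation of componentwise maxima can be illustrated numerically. The sketch below is only a Monte Carlo check of the qualitative claim for the bivariate normal case; the correlation value 0.8, the block sizes, and the seed are arbitrary choices for the example.

```python
import numpy as np

def max_correlation(rho, block_size, n_blocks, rng):
    """Sample correlation of componentwise block maxima of a bivariate normal."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=(n_blocks, block_size))
    m = z.max(axis=1)  # per-block componentwise maxima, shape (n_blocks, 2)
    return np.corrcoef(m[:, 0], m[:, 1])[0, 1]

rng = np.random.default_rng(1)
c_small = max_correlation(0.8, 10, 500, rng)    # small blocks: maxima still correlated
c_large = max_correlation(0.8, 2000, 500, rng)  # large blocks: correlation shrinks
```

As the block size n grows, the correlation of the maxima drifts toward zero, consistent with the asymptotic uncorrelatedness derived in the paper.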
The bivariate combined model for spatial data analysis.
Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Faes, Christel
2016-08-15
To describe the spatial distribution of diseases, a number of methods have been proposed to model relative risks within areas. Most models use Bayesian hierarchical methods, in which one models both the spatially structured and the unstructured extra-Poisson variance present in the data. For modelling a single disease, the conditional autoregressive (CAR) convolution model has been very popular. More recently, a combined model was proposed that 'combines' ideas from the CAR convolution model and the well-known Poisson-gamma model. The combined model was shown to be a good alternative to the CAR convolution model when there is a large amount of uncorrelated extra-variance in the data. Fewer solutions exist for modelling two diseases simultaneously or modelling a disease in two sub-populations simultaneously. Furthermore, existing models are typically based on the CAR convolution model. In this paper, a bivariate version of the combined model is proposed in which the unstructured heterogeneity term is split into terms that are shared and terms that are specific to the disease or sub-population, while spatial dependency is introduced via a univariate or multivariate Markov random field. The proposed method is illustrated by analyses of disease data in Georgia (USA) and Limburg (Belgium) and in a simulation study. We conclude that the bivariate combined model constitutes an interesting model when two diseases are possibly correlated. As the choice of the preferred model differs between data sets, we suggest using the new and existing modelling approaches together and choosing the best model via goodness-of-fit statistics. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26928309
FUNSTAT and statistical image representations
NASA Technical Reports Server (NTRS)
Parzen, E.
1983-01-01
General ideas of functional statistical inference are outlined for the analysis of one and two samples, both univariate and bivariate. The ONESAM program is applied to analyze the univariate probability distributions of multispectral image data.
Strength Development: Using Functional Isometrics in an Isotonic Strength Training Program.
ERIC Educational Resources Information Center
Jackson, Allen; And Others
1985-01-01
A study was made to determine if a combination of functional isometrics and standard isotonic training would be superior to a standard isotonic program in an instructional setting. The results provide support for functional isometrics as an enhancement where achievement of maximum strength is the goal. (Author/MT)
Statistical modeling of space shuttle environmental data
NASA Technical Reports Server (NTRS)
Tubbs, J. D.; Brewer, D. W.
1983-01-01
Statistical models which use a class of bivariate gamma distributions are examined. Topics discussed include: (1) the ratio of positively correlated gamma variates; (2) a method to determine whether unequal shape parameters are necessary in a bivariate gamma distribution; (3) differential equations for the modal location of a family of bivariate gamma distributions; and (4) analysis of some wind-gust data using the analytical results developed for modeling applications.
A precision isotonic measuring system for isolated tissues.
Mellor, P M
1984-12-01
An isotonic measuring system is described which utilizes an angular position transducer of the linear differential voltage transformer type. Resistance to corrosion, protection against the ingress of solutions, and ease of mounting and setting up were the mechanical objectives. Accuracy, linearity, and freedom from drift were essential requirements of the electrical specification. A special housing was designed to accommodate the transducer to overcome these problems. A control unit incorporating a power supply and electronic filtering components was made to serve up to four such transducers. The transducer output voltage is sufficiently high to drive directly even low sensitivity chart recorders. Constructional details and a circuit diagram are included. Fifty such transducers have been in use for up to four years in these laboratories. Examples of some of the published work done using this transducer system are referenced. PMID:6536830
Watanabe, K; Shimizu, K; Nakata, S; Watanabe, M
1991-03-01
The relationship between isotonic jaw-opening and jaw-closing muscle function was studied using a newly developed apparatus which enables load and velocity to be detected simultaneously. The following results were obtained from 17 male adults (age range 22-32 years) without any occlusal dysfunction. (i) The force-velocity relationship in jaw-opening and jaw-closing muscles was represented by a hyperbolic curve, which fitted well with Hill's equation. (ii) The theoretical maximum force obtained by extrapolation from regression was 32.55 +/- 4.98 kg for jaw opening and 35.74 +/- 4.52 kg for jaw closing. (iii) The theoretical maximum velocity obtained by extrapolation from regression was 456.70 +/- 183.27 mm s-1 for jaw opening and 372.77 +/- 141.67 mm s-1 for jaw closing. (iv) The maximum mechanical power (Pmax) calculated from the product of the force and velocity was 772.20 +/- 182.65 kg.mm s-1 for jaw opening and 708.68 +/- 128.14 kg.mm s-1 for jaw closing. (v) The Pmax exerted by individual subjects was approximately 12-34% of the maximum possible force (Fmax) calculated from the force and velocity, in both jaw opening and jaw closing. There were no statistically significant differences between jaw opening and jaw closing with regard to any isotonic muscle functions. In other words, the results of this study strongly indicated a substantial balance between isotonic jaw-opening and jaw-closing muscle function in the subjects who were investigated. PMID:2037940
Bivariate discrete beta Kernel graduation of mortality data.
Mazza, Angelo; Punzo, Antonio
2015-07-01
Various parametric and nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, for example kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial science. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive-bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make the simulations realistic, a bivariate dataset based on probabilities of dying recorded for US males is used. The simulations confirm the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors. PMID:25084764
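The univariate idea being generalized can be sketched as a weighted average over a discrete age grid, with weights drawn from a beta density. This is only an illustration: the exact parameterization below (alpha = m/h + 1, beta = (T - m)/h + 1, evaluated at age-cell midpoints) is an assumption for the sketch, and the toy rates are invented, not the paper's data.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def discrete_beta_weights(ages, m, h):
    """Normalized kernel weights on a discrete age grid {0..T}, centered near age m.

    Assumed parameterization (one of several in the literature):
    alpha = m/h + 1, beta = (T - m)/h + 1, evaluated at cell midpoints (x+0.5)/(T+1).
    """
    T = ages.max()
    a, b = m / h + 1.0, (T - m) / h + 1.0
    w = beta_dist.pdf((ages + 0.5) / (T + 1.0), a, b)
    return w / w.sum()

def graduate(ages, raw_rates, h):
    """Graduate raw mortality rates by kernel-weighted averaging at each age."""
    return np.array([discrete_beta_weights(ages, m, h) @ raw_rates for m in ages])

ages = np.arange(0, 101)
raw = 0.0005 * np.exp(0.09 * ages)           # Gompertz-like toy mortality rates
raw_noisy = raw * (1.0 + 0.1 * np.sin(ages))  # deterministic perturbation as "noise"
smooth = graduate(ages, raw_noisy, h=0.05)
```

Because the beta kernel's support adapts to the boundaries of the age range, it avoids the boundary bias of symmetric kernels such as the Epanechnikov kernel mentioned in the comparison.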
Copula bivariate probit models: with an application to medical expenditures.
Winkelmann, Rainer
2012-12-01
The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor (the 'treatment') on a binary health outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence using copulas. In an application of the copula bivariate probit model to the effect of insurance status on the absence of ambulatory health care expenditure, a model based on the Frank copula outperforms the standard bivariate probit model. PMID:22025413
Flexible marginalized models for bivariate longitudinal ordinal data
Lee, Keunbaik; Daniels, Michael J.; Joo, Yongsung
2013-01-01
Random effects models are commonly used to analyze longitudinal categorical data. Marginalized random effects models are a class of models that permit direct estimation of marginal mean parameters and characterize serial correlation for longitudinal categorical data via random effects (Heagerty, 1999, "Marginally specified logistic-normal models for longitudinal binary data," Biometrics 55, 688-698; Lee and Daniels, 2008, "Marginalized models for longitudinal ordinal data with application to quality of life studies," Statistics in Medicine 27, 4359-4380). In this paper, we propose a Kronecker product (KP) covariance structure to capture the correlation between processes at a given time and the correlation within a process over time (serial correlation) for bivariate longitudinal ordinal data. For the latter, we consider a more general class of models than standard (first-order) autoregressive correlation models, by re-parameterizing the correlation matrix using partial autocorrelations (Daniels and Pourahmadi, 2009, "Modeling covariance matrices via partial autocorrelations," Journal of Multivariate Analysis 100, 2352-2363). We assess the reasonableness of the KP structure with a score test. A maximum marginal likelihood estimation method is proposed, utilizing a quasi-Newton algorithm with quasi-Monte Carlo integration of the random effects. We examine the effects of demographic factors on metabolic syndrome and C-reactive protein using the proposed models. PMID:23365416
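The KP covariance idea is simple to write down: the full covariance for the stacked (process x time) response vector is the Kronecker product of a between-process block and a serial-correlation block. The numbers below (two processes, three time points, an AR(1) serial block) are illustrative choices, not values from the paper.

```python
import numpy as np

# Hypothetical correlation between the two ordinal processes at a fixed time
sigma_process = np.array([[1.0, 0.4],
                          [0.4, 1.0]])

# AR(1)-type serial correlation over 3 time points (rho = 0.6)
rho = 0.6
t = np.arange(3)
sigma_time = rho ** np.abs(t[:, None] - t[None, :])

# Kronecker-product covariance for the stacked (time x process) vector:
# block (i, j) equals sigma_time[i, j] * sigma_process
sigma_kp = np.kron(sigma_time, sigma_process)
```

The appeal of the structure is parsimony: 6 x 6 = 36 entries are generated from just two small matrices, and positive definiteness of the factors guarantees positive definiteness of the product.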
Decay Properties of N = 77 odd-Z Isotones
Batchelder, J. C.; Tantawy, M. N.; Danchev, M.; Hartley, D. J.; Mazzocchi, C.; Bingham, C. R.; Grzywacz, R.; Rykaczewski, K. P.; Gross, C. J.; Shapira, D.; Yu, C.-H.; Krolas, W.; Fong, D.; Hamilton, J. H.; Li, K.; Ramayya, A. V.; Ginter, T. N.; Stolz, A.; Hagino, K.; Karny, M.
2007-11-30
The systematics of the πh_{11/2} × νh_{11/2} and πh_{11/2} × νs_{1/2} isomeric configurations were studied for the odd-Z N = 77 isotones near the proton dripline. Spins and parities I^π = 8^+ and 5^- were deduced for the isomers in ^{140}Eu and ^{142}Tb. No evidence for the expected 1^+ ground state was found in the ^{144}Ho decay data. Proton emission from ^{146}Tm was restudied, and spin and parity values of I^π = 10^+ and 5^- were established for ^{146m}Tm and ^{146gs}Tm, respectively. The observed decay properties and the structure of the proton-emitting states were interpreted by accounting for deformation and for proton and neutron coupling to core excitations.
Behavior of one-quasiparticle levels in odd isotonic chains of heavy nuclei
Adamian, G. G.; Antonenko, N. V.; Kuklin, S. N.; Malov, L. A.; Lu, B. N.; Zhou, S. G.
2011-08-15
The low-lying one-quasiparticle states are studied in the isotonic chains with N = 147, 149, 151, 153, and 155 within microscopic-macroscopic and self-consistent approaches. The energies of the one-quasiparticle states change rather smoothly along the isotonic chains if no proton subshell crossing occurs. The α-decay schemes of several nuclei are suggested. The isomeric states in the odd isotopes of Fm and No are discussed.
Mixed-symmetry octupole and hexadecapole excitations in N=52 isotones
NASA Astrophysics Data System (ADS)
Hennig, Andreas; Spieker, Mark; Werner, Volker; Ahn, Tan; Anagnostatou, Vassia; Cooper, Nathan; Derya, Vera; Elvers, Michael; Endres, Janis; Goddard, Phil; Heinz, Andreas; Hughes, Richard O.; Ilie, Gabriela; Mineva, Milena N.; Pickstone, Simon G.; Petkov, Pavel; Pietralla, Norbert; Radeck, Desirée; Ross, Tim J.; Savran, Deniz; Zilges, Andreas
2015-05-01
In addition to the well-established quadrupole mixed-symmetry states, octupole and hexadecapole excitations with mixed-symmetry character have recently been proposed for the N = 52 isotones 92Zr and 94Mo. We performed two inelastic proton-scattering experiments to study these kinds of excitations in the heaviest stable N = 52 isotone, 96Ru. From the combined data of both experiments, absolute transition strengths were extracted.
Effect of an isotonic rehydration sports drink and exercise on urolithiasis in rats.
Abreu, N P; Bergamaschi, C T; di Marco, G S; Razvickas, C V; Schor, N
2005-04-01
The objective of the present study was to evaluate the role of physical exercise as well as the influence of hydration with an isotonic sports drink on renal function in male Wistar rats. Four groups were studied over a period of 42 days: 1) control (N = 9); 2) physical exercise (Exe, N = 7); 3) isotonic drink (Drink, N = 8); 4) physical exercise + isotonic drink (Exe + Drink, N = 8). Physical exercise consisted of running on a motor-driven treadmill for 1 h/day, at 20 m/min, 5 days a week. The isotonic sports drink was a commercial solution used by athletes for rehydration after physical activity, 2 ml administered by gavage twice a day. Urine cultures were performed in all animals. Twenty-four-hour urine samples were collected in metabolic cages at the beginning and at the end of the protocol period. Urinary and plasma parameters (sodium, potassium, urea, creatinine, calcium) did not differ among groups. However, an amorphous material was observed in the bladders of animals in the Exe + Drink and Drink groups. Characterization of the material by Western blot revealed the presence of Tamm-Horsfall protein and angiotensin converting enzyme. Physical exercise and the isotonic drink did not change the plasma or urinary parameters measured. However, the isotonic drink induced the formation of intravesical matrix, suggesting a potential lithogenic risk. PMID:15962183
A Vehicle for Bivariate Data Analysis
ERIC Educational Resources Information Center
Roscoe, Matt B.
2016-01-01
Instead of reserving the study of probability and statistics for special fourth-year high school courses, the Common Core State Standards for Mathematics (CCSSM) takes a "statistics for all" approach. The standards recommend that students in grades 6-8 learn to summarize and describe data distributions, understand probability, draw…
Bilgili, D; Ryu, D; Ergönül, Ö; Ebrahimi, N
2016-03-01
Infectious diseases that can be spread directly or indirectly from one person to another are caused by pathogenic microorganisms such as bacteria, viruses, parasites, or fungi. Infectious diseases remain one of the greatest threats to human health, and the analysis of infectious disease data is among the most important applications of statistics. In this article, we develop Bayesian methodology using a parametric bivariate accelerated lifetime model to study the dependency between the colonization and infection times for the Acinetobacter baumannii bacterium, a leading cause of infection among hospital infection agents. We also study their associations with covariates such as age, gender, APACHE score, antibiotic use in the 3 months before admission, and use of invasive mechanical ventilation. To account for singularity, we use the singular bivariate extreme-value distribution to model the residuals in the bivariate accelerated lifetime model under a fully Bayesian framework. We analyze censored colonization and infection data collected in five major hospitals in Turkey using our methodology. The data analysis in this article illustrates our proposed method, which can be applied in any situation where our model is appropriate. PMID:26394029
NASA Astrophysics Data System (ADS)
Candela, Angela; Tito Aronica, Giuseppe
2014-05-01
Floods are a global problem and are considered the most frequent natural disaster worldwide. Many studies show that the severity and frequency of floods have increased in recent years and underline the difficulty of separating the effects of natural climatic change from human influences such as land-management practices, urbanization, etc. Flood risk analysis and assessment are required to provide information on current or future flood hazard and risk in order to accomplish flood risk mitigation and to propose, evaluate, and select measures to reduce it. Both components of risk can be mapped individually, and both they and the joint estimate of flood risk are affected by multiple uncertainties. Major sources of uncertainty include the statistical analysis of extreme events, the definition of the hydrological input, the representation of channel and floodplain topography, and the choice of effective hydraulic roughness coefficients. The classical procedure for estimating the flood discharge for a chosen probability of exceedance is to use a rainfall-runoff model, associating with the risk the same return period as the original rainfall, in accordance with the iso-frequency criterion. Alternatively, a flood frequency analysis is applied to a given record of discharge data, but again a single probability is associated with the flood discharges and the respective risk. Moreover, since flood peaks and the corresponding flood volumes are variables of the same phenomenon, they are directly correlated, and consequently multivariate statistical analyses must be applied. This study presents an innovative approach to obtaining flood hazard maps in which the hydrological input (a synthetic flood design event) to a 2D hydraulic model has been defined by generating flood peak discharges and volumes from (a) a classical univariate approach and (b) a bivariate statistical analysis through the use of copulas. The univariate approach considers flood hydrograph generation by an indirect approach (rainfall-runoff transformation using input rainfall
Sodium recirculation and isotonic transport in toad small intestine.
Nedergaard, S; Larsen, E H; Ussing, H H
1999-04-01
+ fluxes, is compatible with convective flow of the two alkali metal ions through the same population of water-filled pores. With a new set of equations, the fraction of the sodium flux passing the basement membrane barrier of the lateral space that is recirculated through the cellular compartment is estimated. This fraction was, on average, 0.72 +/- 0.03 (N = 5). It is concluded that isotonicity of the transportate can be maintained by producing a hypertonic fluid emerging from the lateral space combined with reuptake of salt via the cells. PMID:10191358
Simultaneous estimation of parameters in the bivariate Emax model.
Magnusdottir, Bergrun T; Nyquist, Hans
2015-12-10
In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation. PMID:26190048
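The Emax relation at the heart of the model is E = E0 + Emax * d / (ED50 + d). As a minimal sketch (univariate only, since the paper's point is the bivariate/system extension; the dose grid, parameter values, and noise level below are invented), a single Emax curve can be fit by nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def emax(dose, e0, emax_, ed50):
    """Emax dose-response: E = E0 + Emax * d / (ED50 + d)."""
    return e0 + emax_ * dose / (ed50 + dose)

# Synthetic single-response data; the paper fits two such relations jointly,
# sharing information through the (co)variance parameters
rng = np.random.default_rng(0)
dose = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
y = emax(dose, 1.0, 10.0, 30.0) + rng.normal(0.0, 0.1, size=dose.size)

popt, pcov = curve_fit(emax, dose, y, p0=[0.5, 8.0, 20.0])  # [E0, Emax, ED50]
```

Equation-by-equation estimation amounts to repeating this fit per response; the system approach studied in the paper instead estimates both relations and their error covariance jointly, which pays off when the responses are dependent.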
NASA Astrophysics Data System (ADS)
Takeuchi, Tsutomu T.
2010-08-01
We provide an analytic method to construct a bivariate distribution function (DF) with given marginal distributions and correlation coefficient. We introduce a convenient mathematical tool, called a copula, to connect two DFs with any prescribed dependence structure. If the correlation of two variables is weak (Pearson's correlation coefficient |ρ| < 1/3), the Farlie-Gumbel-Morgenstern (FGM) copula provides an intuitive and natural way to construct such a bivariate DF. When the linear correlation is stronger, the FGM copula no longer works. In this case, we propose using a Gaussian copula, which connects two given marginals and is directly related to the linear correlation coefficient between the two variables. Using the copulas, we construct the bivariate luminosity function (BLF) and discuss its statistical properties. We focus especially on the far-ultraviolet-far-infrared (FUV-FIR) BLF, since these two wavelength regions are related to star-formation (SF) activity. Though both the FUV and FIR are related to SF activity, the univariate LFs have very different functional forms: the former is well described by the Schechter function whilst the latter has a much more extended, power-law-like luminous end. We construct the FUV-FIR BLFs using the FGM and Gaussian copulas with different strengths of correlation, and examine their statistical properties. We then discuss some further possible applications of the BLF: the problem of multiband flux-limited sample selection, the construction of the star-formation rate (SFR) function, and the construction of the stellar mass of galaxies (M*)-specific SFR (SFR/M*) relation. The copulas turn out to be a very useful tool for investigating all these issues, especially for including complicated selection effects.
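The FGM construction is short enough to write out. The sketch below builds a bivariate pdf from two marginals via Sklar's theorem, h(x,y) = f(x) g(y) c(F(x), G(y)); the normal and exponential marginals are placeholders for illustration (the paper's marginals are the Schechter-type and power-law-tailed luminosity functions). The weak-dependence limit quoted in the abstract follows from |θ| ≤ 1, which bounds the achievable correlation.

```python
import numpy as np
from scipy.stats import norm, expon

def fgm_density(u, v, theta):
    """FGM copula density c(u,v) = 1 + theta*(1-2u)*(1-2v), valid for |theta| <= 1."""
    return 1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v)

def joint_pdf(x, y, theta):
    """Bivariate pdf via Sklar's theorem: h(x,y) = f(x) g(y) c(F(x), G(y)).

    Illustrative marginals only: standard normal in x, unit exponential in y.
    """
    return norm.pdf(x) * expon.pdf(y) * fgm_density(norm.cdf(x), expon.cdf(y), theta)
```

For stronger dependence one would swap `fgm_density` for a Gaussian copula density, as the paper does, since the FGM family cannot represent |ρ| beyond roughly 1/3.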
Work capacity during 30 days of bed rest with isotonic and isokinetic exercise training
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Bernauer, E. M.; Ertl, A. C.; Trowbridge, T. S.; Wade, C. E.
1989-01-01
Results are presented from a study to determine whether short-term variable-intensity isotonic and intermittent high-intensity isokinetic short-duration leg exercise is effective for the maintenance of peak oxygen uptake (peak VO2) and of muscular strength and endurance, respectively, during 30 days of -6 deg head-down bed-rest deconditioning. The results show no significant changes in leg peak torque, leg mean total work, arm total peak torque, or arm mean total work for members of the isotonic, isokinetic, and control groups. Changes are observed, however, in peak VO2 levels. The results suggest that near-peak variable-intensity isotonic leg exercise maintains peak VO2 during 30 days of bed rest, while the peak intermittent isokinetic leg exercise protocol does not.
A quantum quasi-harmonic nonlinear oscillator with an isotonic term
Rañada, Manuel F.
2014-08-01
The properties of a nonlinear oscillator with an additional term k_g/x², characterizing the isotonic oscillator, are studied. The nonlinearity affects both the kinetic term and the potential and combines two nonlinearities, associated with the two parameters κ and k_g, in such a way that for κ = 0 all the characteristics of the standard isotonic system are recovered. The first part is devoted to the classical system and the second part to the quantum system. This is a problem of quantization of a system with a position-dependent mass of the form m(x) = 1/(1 - κx²), with a κ-dependent non-polynomial rational potential and an additional isotonic term. The Schrödinger equation is exactly solved, and the (κ, k_g)-dependent wave functions and bound-state energies are obtained explicitly for both κ < 0 and κ > 0.
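For orientation, the κ = 0 limit mentioned above is the standard, exactly solvable isotonic oscillator. In units ħ = m = 1 and assuming angular frequency ω (a textbook result quoted here for reference, which the paper's κ ≠ 0 spectrum deforms), its potential and spectrum are

```latex
V(x) = \tfrac{1}{2}\,\omega^{2} x^{2} + \frac{k_{g}}{x^{2}}, \qquad
E_{n} = \omega\!\left(2n + 1 + \tfrac{1}{2}\sqrt{1 + 8 k_{g}}\right), \quad n = 0, 1, 2, \ldots
```

Setting k_g = 0 recovers the odd-parity harmonic-oscillator levels, consistent with the half-line domain x > 0 imposed by the centrifugal-like barrier.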
Fluid and electrolyte shifts during bed rest with isometric and isotonic exercise
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Bernauer, E. M.; Young, H. L.; Morse, J. T.; Juhos, L. T.; Van Beaumont, W.; Staley, R. W.
1977-01-01
It is difficult to separate the effects of the reduction in hydrostatic pressure from those of reduced energy expenditure when investigating the confinement-deconditioning problem. Experiments were conducted on seven healthy young men aged 19-21 yr with the purpose of separating these two factors by using isotonic physical exercise during bed rest to provide a daily energy expenditure greater than normal ambulatory levels. Fluid and electrolyte shifts were measured during three two-week bed-rest periods, each separated by a three-week ambulatory recovery period. During two of the three bed-rest periods the subjects performed isometric and isotonic exercises to compare their effects on fluid and electrolyte shifts during bed rest. It is shown that during bed rest, preservation of the extracellular volume takes precedence over maintenance of the plasma volume, and that this mechanism is independent of the effects of isometric or isotonic exercise.
NASA Astrophysics Data System (ADS)
Iwase, Satoshi; Kawahara, Yuko; Nishimura, Naoki; Nishimura, Rumiko; Sugenoya, Junichi; Miwa, Chihiro; Takada, Masumi
2014-08-01
To clarify the effects of isometric and isotonic exercise during mist sauna bathing on cardiovascular function, thermoregulatory function, and metabolism, six healthy young men (22 ± 1 years old, height 173 ± 4 cm, weight 65.0 ± 5.0 kg) were exposed to a mist sauna for 10 min at a temperature of 40 °C and relative humidity of 100 % while performing or not performing ~30 W of isometric or isotonic exercise. The effect of the exercise was assessed by measuring tympanic temperature, heart rate, systolic and diastolic blood pressure, chest sweat rate, chest skin blood flow, and plasma catecholamine, cortisol, glucose, lactate, and free fatty acid levels. Repeated measures ANOVA showed no significant differences in blood pressure, skin blood flow, sweat rate, and total amount of sweating. Tympanic temperature increased more during isotonic exercise, and the heart rate increase was more marked during isotonic exercise. The changes in lactate indicated that fatigue was not very great during isometric exercise. The glucose level indicated greater energy expenditure during isometric exercise. The free fatty acid and catecholamine levels indicated that isometric exercise did not result in very great energy expenditure or stress, respectively. The results for isotonic exercise, a decrease in lactate level and an increase in plasma free fatty acid level, indicated that fatigue and energy expenditure were rather large while the perceived stress was comparatively low. We concluded that isotonic exercise may be a more desirable form of exercise during mist sauna bathing given the changes in glucose and free fatty acid levels. PMID:23884733
+Gz tolerance in man after 14-day bedrest periods with isometric and isotonic exercise conditioning
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Haines, R. F.; Sandler, H.; Bernauer, E. M.; Morse, J. T.; Armbruster, R.; Sagan, L.; Van Beaumont, W.
1975-01-01
The effects of isometric and isotonic exercise training on post-bedrest +Gz tolerance were determined. In general, 14-day bed rest resulted in a significant loss of +Gz tolerance, as previously found. At 2.1 Gz, neither the isometric nor the isotonic exercise regimen resulted in a significant increase in post-bedrest +Gz tolerance. However, following isometric exercise, restoration of about half the tolerance decrement occurred at 3.2 Gz and 3.8 Gz. Possible reasons for this partial restoration of tolerance are put forward.
Half-lives of N = 126 Isotones and the r-Process
Suzuki, Toshio; Yoshida, Takashi; Utsuno, Yutaka
2010-08-12
Beta decays of N = 126 isotones are studied by shell model calculations. Both the Gamow-Teller (GT) and first-forbidden (FF) transitions are taken into account to evaluate the half-lives of the isotones (Z = 64-72), using shell model interactions based on the G-matrix. The FF transitions prove important, shortening the half-lives by factors of two to several relative to those obtained from the GT contributions alone. Possible implications of the short half-lives of the waiting-point nuclei for r-process nucleosynthesis during supernova explosions are discussed.
Ion secretion and isotonic transport in frog skin glands.
Ussing, H H; Lind, F; Larsen, E H
1996-07-01
The aim of this study was to clarify the mechanism of isotonic fluid transport in frog skin glands. Stationary ion secretion by the glands was studied by measuring unidirectional fluxes of 24Na+, 42K+, and carrier-free 134Cs+ in paired frog skins bathed on both sides with Ringer's solution, and with 10(-5) M noradrenaline on the inside and 10(-4) M amiloride on the outside. At transepithelial thermodynamic equilibrium conditions, the 134Cs+ flux ratio, JoutCs/JinCs, varied in seven pairs of preparations from 6 to 36. Since carrier-free 134Cs+ entering the cells is irreversibly trapped in the cellular compartment (Ussing & Lind, 1996), the transepithelial net flux of 134Cs+ indicates that a paracellular flow of water is dragging 134Cs+ in the direction from the serosal to the outside solution. From the measured flux ratios it was calculated that the force driving the secretory flux of Cs+ varied from 30 to 61 mV among preparations. In the same experiments unidirectional Na+ fluxes were measured as well, and it was found that Na+, too, was subject to secretion. The ratio of unidirectional Na+ fluxes, however, was significantly smaller than would be predicted if the two ions were both flowing along the paracellular route dragged by the flow of water. This result indicates that Na+ and Cs+ do not take the same pathway through the glands. The flux ratio of unidirectional K+ fluxes indicated active secretion of K+. The time taken for steady-state K+ fluxes to become established was significantly longer than that of the simultaneously measured Cs+ fluxes. These results allow the conclusion that, in addition to being transported between cells, K+ is subject to active transport along a cellular pathway. Based on the recirculation theory, we propose a new model which accounts for stationary Na+, K+, Cl- and water secretion under thermodynamic equilibrium conditions. The new features of the model, as compared to the classical Silva model for the shark rectal gland, are: (i
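The driving forces quoted above are derived from flux ratios. As a hedged illustration, the classical Ussing flux-ratio equation converts a unidirectional flux ratio into an equivalent electromotive force; the temperature and the purely electrodiffusive form used here are assumptions, so the output need not reproduce the paper's convection-corrected 30-61 mV values.

```python
import math

def driving_force_mV(flux_ratio, z=1, temp_C=20.0):
    """Equivalent driving force E (mV) from the Ussing flux-ratio
    equation J_out/J_in = exp(z*F*E / (R*T)), solved for E."""
    R = 8.314     # gas constant, J/(mol*K)
    F = 96485.0   # Faraday constant, C/mol
    T = 273.15 + temp_C
    return 1000.0 * (R * T) / (z * F) * math.log(flux_ratio)

# The measured extreme flux ratios were 6 and 36:
for ratio in (6, 36):
    print(f"flux ratio {ratio}: {driving_force_mV(ratio):.1f} mV")
```

At 20 °C a ratio of 6 corresponds to roughly 45 mV under this idealized form.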
Multimodal Bivariate Thematic Maps: Auditory and Haptic Display.
ERIC Educational Resources Information Center
Jeong, Wooseob; Gluck, Myke
2002-01-01
Explores the possibility of multimodal bivariate thematic maps by utilizing auditory and haptic (sense of touch) displays. Measured completion time of tasks and the recall (retention) rate in two experiments, and findings confirmed the possibility of using auditory and haptic displays in geographic information systems (GIS). (Author/LRW)
Evaluating Univariate, Bivariate, and Multivariate Normality Using Graphical Procedures.
ERIC Educational Resources Information Center
Burdenski, Thomas K., Jr.
This paper reviews graphical and nongraphical procedures for evaluating multivariate normality by guiding the reader through univariate and bivariate procedures that are necessary, but insufficient, indications of a multivariate normal distribution. A data set using three dependent variables for two groups provided by D. George and P. Mallery…
A New Measure Of Bivariate Asymmetry And Its Evaluation
Ferreira, Flavio Henn; Kolev, Nikolai Valtchev
2008-11-06
In this paper we propose a new measure of bivariate asymmetry, based on conditional correlation coefficients. A decomposition of the Pearson correlation coefficient in terms of its conditional versions is studied and an example of application of the proposed measure is given.
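As a hedged sketch of the idea (the exact conditioning scheme of the proposed measure is not reproduced here), one can compare correlations conditioned on each variable being large; for an exchangeable distribution the two conditional correlations agree, so their difference signals asymmetry. All names and thresholds below are illustrative assumptions.

```python
import numpy as np

def conditional_corr(x, y, mask):
    """Pearson correlation restricted to the observations in `mask`."""
    return np.corrcoef(x[mask], y[mask])[0, 1]

def asymmetry(x, y):
    """Illustrative asymmetry score: correlation conditioned on large x
    minus correlation conditioned on large y. Near zero for
    exchangeable (symmetric) bivariate data."""
    cx = conditional_corr(x, y, x > np.median(x))
    cy = conditional_corr(x, y, y > np.median(y))
    return cx - cy

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.6 * x + 0.8 * rng.normal(size=5000)  # exchangeable bivariate normal
print(f"asymmetry of symmetric data: {asymmetry(x, y):.3f}")
```

For exchangeable data the score should vanish up to sampling noise; asymmetric dependence (e.g. upper-tail dependence in one variable only) drives it away from zero.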
Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.
ERIC Educational Resources Information Center
Holland, Paul W.; Thayer, Dorothy T.
2000-01-01
Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…
ERIC Educational Resources Information Center
Tanner, Kristine; Roy, Nelson; Merrill, Ray M.; Muntz, Faye; Houtz, Daniel R.; Sauder, Cara; Elstad, Mark; Wright-Costa, Julie
2010-01-01
Purpose: To examine the effects of nebulized isotonic saline (IS) versus sterile water (SW) on self-perceived phonatory effort (PPE) and phonation threshold pressure (PTP) following a surface laryngeal dehydration challenge in classically trained sopranos. Method: In a double-blind, within-subject crossover design, 34 sopranos breathed dry air…
The Novel Properties and Construction of Multi-scale Matrix-valued Bivariate Wavelet Wraps
NASA Astrophysics Data System (ADS)
Zhang, Hai-mo
In this paper, we introduce a matrix-valued multiresolution structure and matrix-valued bivariate wavelet wraps. A constructive method for semi-orthogonal matrix-valued bivariate wavelet wraps is presented. Their properties are characterized using time-frequency analysis, the unitary extension principle, and operator theory. The direct decomposition relation is obtained.
Children with Heavy Prenatal Alcohol Exposure Experience Reduced Control of Isotonic Force
Nguyen, Tanya T.; Levy, Susan S.; Riley, Edward P.; Thomas, Jennifer D.; Simmons, Roger W.
2013-01-01
Background Heavy prenatal alcohol exposure can result in diverse and extensive damage to the central nervous system, including the cerebellum, basal ganglia, and cerebral cortex. Given that these brain regions are involved in the generation and maintenance of motor force, we predicted that prenatal alcohol exposure would adversely affect this parameter of motor control. We previously reported that children with gestational alcohol exposure experience significant deficits in regulating isometric (i.e., constant) force. The purpose of the present study was to determine if these children exhibit similar deficits when producing isotonic (i.e., graded) force. Methods Children with heavy prenatal alcohol exposure and typically developing children completed a series of isotonic force contractions by exerting force on a load cell to match a criterion target force displayed on a computer monitor. Two levels of target force (5% or 20% of maximum voluntary force) were investigated in combination with varying levels of visual feedback. Results Children with heavy prenatal alcohol exposure generated isotonic force signals that were less accurate, more variable, and less complex in the time domain than those of control children. Specifically, interactions were found between group and visual feedback for response accuracy and signal complexity, suggesting that these children have greater difficulty altering their motor output when visual feedback is low. Conclusions These data suggest that prenatal alcohol exposure produces deficits in regulating isotonic force, which presumably result from alcohol-related damage to developing brain regions involved in motor control. These children will most likely experience difficulty performing basic motor skills and daily functional skills that require coordination of finely graded force. Therapeutic strategies designed to increase feedback and, consequently, facilitate visual-motor integration could improve isotonic force
Guilhem, Gaël; Cornu, Christophe; Guével, Arnaud
2012-01-01
Context: Resistance exercise training commonly is performed against a constant external load (isotonic) or at a constant velocity (isokinetic). Researchers comparing the effectiveness of isotonic and isokinetic resistance-training protocols need to equalize the mechanical stimulus (work and velocity) applied. Objective: To examine whether the standardization protocol could be adjusted and applied to an eccentric training program. Design: Controlled laboratory study. Setting: Controlled research laboratory. Patients or Other Participants: Twenty-one sport science male students (age = 20.6 ± 1.5 years, height = 178.0 ± 4.0 cm, mass = 74.5 ± 9.1 kg). Intervention(s): Participants performed 9 weeks of isotonic (n = 11) or isokinetic (n = 10) eccentric training of knee extensors that was designed so they would perform the same amount of angular work at the same mean angular velocity. Main Outcome Measure(s): Angular work and angular velocity. Results: The isotonic and isokinetic groups performed the same total amount of work (−185.2 ± 6.5 kJ and −184.4 ± 8.6 kJ, respectively) at the same angular velocity (21 ± 1°/s and 22°/s, respectively) with the same number of repetitions (8.0 and 8.0, respectively). Bland-Altman analysis showed that work (bias = 2.4%) and angular velocity (bias = 0.2%) were equalized over 9 weeks between the modes of training. Conclusions: The procedure developed allows angular work and velocity to be standardized over 9 weeks of isotonic and isokinetic eccentric training of the knee extensors. This method could be useful in future studies in which researchers compare neuromuscular adaptations induced by each type of training mode with respect to rehabilitating patients after musculoskeletal injury. PMID:22488276
A bivariate survival model with compound Poisson frailty
Wienke, A.; Ripatti, S.; Palmgren, J.; Yashin, A.
2015-01-01
A correlated frailty model is suggested for analysis of bivariate time-to-event data. The model is an extension of the correlated power variance function (PVF) frailty model (correlated three-parameter frailty model). It is based on a bivariate extension of the compound Poisson frailty model in univariate survival analysis. It allows for a non-susceptible fraction (of zero frailty) in the population, overcoming the common assumption in survival analysis that all individuals are susceptible to the event under study. The model contains the correlated gamma frailty model and the correlated inverse Gaussian frailty model as special cases. A maximum likelihood estimation procedure for the parameters is presented and its properties are studied in a small simulation study. This model is applied to breast cancer incidence data of Swedish twins. The proportion of women susceptible to breast cancer is estimated to be 15 per cent. PMID:19856276
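A zero-frailty fraction arises naturally from the compound Poisson construction, since P(Z = 0) = exp(-λ) when the number of summed gamma terms is Poisson(λ). A minimal simulation sketch, with illustrative shape/scale values and λ chosen to mimic the paper's 15% susceptible estimate:

```python
import numpy as np

rng = np.random.default_rng(42)

def compound_poisson_frailty(n, lam, shape=1.5, scale=1.0):
    """Draw n frailties Z = X_1 + ... + X_N with N ~ Poisson(lam) and
    X_i ~ Gamma(shape, scale); P(Z = 0) = exp(-lam) is the
    non-susceptible (zero-frailty) fraction."""
    counts = rng.poisson(lam, size=n)
    return np.array([rng.gamma(shape, scale, k).sum() for k in counts])

# lam chosen so ~85% are non-susceptible (15% susceptible, as estimated
# for breast cancer in the paper).
lam = -np.log(0.85)
z = compound_poisson_frailty(100_000, lam)
print(f"simulated non-susceptible fraction: {np.mean(z == 0):.3f}")
```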
Bivariate cumulative probit model for the comparison of neuronal encoding hypotheses.
Hillmann, Julia; Kneib, Thomas; Koepcke, Lena; Juárez Paz, León M; Kretzberg, Jutta
2014-01-01
Understanding the way stimulus properties are encoded in the nerve cell responses of sensory organs is one of the fundamental scientific questions in neurosciences. Different neuronal coding hypotheses can be compared by use of an inverse procedure called stimulus reconstruction. Here, based on different attributes of experimentally recorded neuronal responses, the values of certain stimulus properties are estimated by statistical classification methods. Comparison of stimulus reconstruction results then allows conclusions to be drawn about the relative importance of covariate features. Since many stimulus properties have a natural order and can therefore be considered as ordinal, we introduce a bivariate ordinal probit model to obtain classifications for the combination of light intensity and velocity of a visual dot pattern based on different covariates extracted from recorded spike trains. For parameter estimation, we develop a Bayesian Gibbs sampler and incorporate penalized splines to model nonlinear effects. We compare the classification performance of different individual cell covariates and simple features of groups of neurons and find that the combination of at least two covariates increases the classification performance significantly. Furthermore, we obtain a non-linear effect for the first spike latency. The model is compared to a naïve Bayesian stimulus estimation method, where it yields comparable misclassification rates for the given dataset. Hence, the bivariate ordinal probit model is shown to be a helpful tool for stimulus reconstruction, particularly thanks to its flexibility with respect to the number of covariates as well as their scale and effect type. PMID:24186131
Effect of active pre-shortening on isometric and isotonic performance of single frog muscle fibres.
Granzier, H L; Pollack, G H
1989-01-01
1. We studied the effects of shortening history on isometric force and isotonic velocity in single intact frog fibres. Fibres were isometrically tetanized. When force reached a plateau, shortening was imposed, after which the fibre was held isometric again. Isometric force after shortening could then be compared with controls in which no shortening had taken place. 2. Sarcomere length was measured simultaneously with two independent methods: a laser-diffraction method and a segment-length method that detects the distance between two markers attached to the surface of the fibre, about 800 microns apart. 3. The fibre was mounted between two servomotors. One was used to impose the load clamp while the other cancelled the translation that occurred during this load clamp. Thus, translation of the segment under investigation could be minimized. 4. Initial experiments were performed at the fibre level. We found that active preshortening reduced isometric force considerably, thereby confirming earlier work of others. Force reductions as large as 70% were observed. 5. Under conditions in which there were large effects of shortening at the fibre level, we measured sarcomere length changes in the central region of the fibre. These sarcomeres shortened much less than the fibre's average. In fact, when the load was high, these sarcomeres lengthened while the fibre as a whole shortened. Thus, while the fibre-length signal implied that sarcomeres might have shortened to some intermediate length, in reality some sarcomeres were much longer, others much shorter. 6. Experiments performed at the sarcomere level revealed that isometric force was unaffected by previous sarcomere shortening provided the shortening occurred against either a low load or over a short distance. However, if the work done during shortening was high, force after previous shortening was less than if sarcomeres had remained at the final length throughout contraction. The correlation between the force deficit and
STATISTICAL METHODOLOGY FOR EXPLORING ELEVATIONAL DIFFERENCES IN PRECIPITATION CHEMISTRY
A statistical methodology for exploring the relationships between elevation and precipitation chemistry is outlined and illustrated. The methodology utilizes maximum likelihood estimates and likelihood ratio tests with contour ellipses of assumed bivariate lognormal distributions ...
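For a bivariate lognormal, maximum likelihood fitting reduces to fitting a bivariate normal to the log-transformed data, and a probability-p contour ellipse is the set where the squared Mahalanobis distance equals the chi-squared quantile with 2 degrees of freedom. A minimal sketch on synthetic data (the variable names and parameter values are illustrative, not from the report):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical paired log-concentrations (illustrative stand-in data).
L = rng.multivariate_normal([0.5, 0.2], [[0.30, 0.15], [0.15, 0.25]], 2000)

# MLE for a bivariate lognormal = bivariate normal fit to the logs.
mu = L.mean(axis=0)
cov = np.cov(L, rowvar=False, bias=True)  # bias=True gives the MLE (1/n)

# The p-probability contour ellipse is the set of points whose squared
# Mahalanobis distance equals chi2.ppf(p, df=2).
r2 = stats.chi2.ppf(0.90, df=2)
d2 = np.einsum('ij,jk,ik->i', L - mu, np.linalg.inv(cov), L - mu)
print(f"fraction inside the 90% ellipse: {np.mean(d2 <= r2):.3f}")
```

The empirical fraction inside the 90% ellipse should sit near 0.90, which is a quick sanity check on the fitted model.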
Effects of isotonic fluid load on plasma water and extracellular fluid volumes in the rat.
Larsson, M; Ware, J
1983-01-01
An isotonic fluid load was given to rats by infusing 12 ml saline i.v. in 60 min. The plasma water and extracellular fluid volumes of the whole animal and of selected tissues were subsequently studied with 125I human serum albumin and 51Cr EDTA. The fluid infused was equivalent to 130% of the plasma water volume. The total extracellular fluid volume increased by 17%, while the total plasma water measured with RIHSA remained unchanged. The regional extracellular fluid volumes increased in the lung (14%), the gastric fundus (15%), large intestine (21%) and skin (28%). The results illustrate the selective distribution of an isotonic fluid overload, the affected tissues being those with high compliances. PMID:6617709
Analysis of the Bivariate Parameter Wind Differences Between Jimsphere and Windsonde
NASA Technical Reports Server (NTRS)
Susko, Michael
1987-01-01
An analysis is presented for the bivariate parameter differences between the FPS-16 Radar/Jimsphere and the Meteorological Sounding System (MSS) Windsonde. The Jimsphere is used as the standard to measure the ascent wind during the Space Shuttle launches at Kennedy Space Center, Florida, and the Windsonde is the backup system. In addition, a discussion of the terrestrial environment (below 20 km) and a description of the Jimsphere and Windsonde wind sensors are given. Computation of the wind statistics from 64 paired Jimsphere and Windsonde balloon releases in support of 14 Space Shuttle launches shows a good agreement between the two wind sensors. From the analysis of buildup and back-off data for various scales of distance and the comparison of the cumulative percent frequency (CPF) versus wind speed change, it is shown that the wind speed change for various scales of distances for the Jimsphere and Windsonde compare favorably.
Internal dose assessment for 211At α-emitter in isotonic solution as radiopharmaceutical
NASA Astrophysics Data System (ADS)
Yuminov, O. A.; Fotina, O. V.; Priselkova, A. B.; Tultaev, A. V.; Platonov, S. Yu.; Eremenko, D. O.; Drozdov, V. A.
2003-12-01
The functional fitness of the α-emitter 211At for radiotherapy of the thyroid gland cancer is evaluated. Radiation doses are calculated using the MIRD method and previously obtained pharmacokinetic data for 211At in isotonic solution and for 123I as sodium iodide. Analysis of the 211At radiation dose to the thyroid gland suggests that this radiopharmaceutical may be predominantly used for the treatment of the thyroid cancer.
van Lunteren, Erik; Moyer, Michelle
2012-01-01
K(+) channel blockers like 3,4-diaminopyridine (DAP) can double isometric muscle force. Functional movements require more complex concentric and eccentric contractions; however, the effects of K(+) channel blockade on these types of contractions in situ are unknown. Extensor digitorum longus (EDL) muscles were stimulated in situ with and without DAP in anesthetized rats, and fatigability was addressed using a series of either concentric or eccentric contractions. During isotonic protocols (5-100% load), DAP significantly shifted shortening- and maximum shortening velocity-load curves upward and to the right and increased power and work. Maximum shortening, maximum shortening velocity, and power doubled while work increased by ∼250% during isotonic contraction at 50% load. During isotonic fatigue, DAP significantly augmented maximum shortening, work, shortening velocity, and power. During constant-velocity eccentric protocols (2-12 mm/s), DAP increased muscle force during eccentric contractions at 6, 8, 10, and 12 mm/s. During eccentric contraction at a constant velocity of 6 mm/s while varying the stimulation frequency, DAP significantly increased muscle force at 20, 40, and 70 Hz. The effects of DAP on muscle contractile performance during eccentric fatigue varied with the level of fatigue. DAP-induced contractile increases during isotonic contractions were similar to those produced during previously studied isometric contractions, while the DAP effect during eccentric contractions was more modest. These findings are especially important in attempting to optimize functional electrical stimulation parameters for spinal cord injury patients while also preventing rapid fatigue of those muscles. PMID:23060809
Koutras, Georgios; Letsi, Magdalini; Papadopoulos, Pericles; Gigis, Ioannis
2012-01-01
Background: Although both isotonic and isokinetic exercises are commonly used in the rehabilitation of patients after arthroscopic meniscectomy, no studies have compared their effect on strength recovery and functional outcomes. Purpose: The purpose of this study was to investigate the effects of two rehabilitation programs (isotonic and isokinetic) on muscle strength and functional performance after partial knee meniscectomy. A secondary purpose was to assess the correlation between isokinetic strength deficits and hop test performance deficits. Methods: Twenty male patients who underwent arthroscopic partial meniscectomy volunteered for the study. Both isotonic and isokinetic training were performed with the same equipment, thereby blinding subjects to the mode of exercise. Main outcome measures were collected on the 14th and 33rd postoperative days and included isokinetic strength of the knee extensors and flexors, functional performance (single, triple, and vertical hopping) and the Lysholm questionnaire. Multivariate and univariate analyses of variance were used to assess the effects of the independent variables on the isokinetic variables, functional tests, and Lysholm score. Pearson's correlation was used to assess the relationship between isokinetic strength deficits and functional performance deficits. Results: Isokinetic measures, functional tests, and the Lysholm score all increased between initial and final assessment (p≤0.003). However, there were no group or group*time effects on any of the outcome variables (p≥0.33). Functional tests were better predictors of isokinetic deficits on the 14th than on the 33rd postoperative day. Conclusion: No differences were found in the outcomes of patients treated using an isokinetic versus an isotonic protocol for rehabilitation after arthroscopic meniscectomy. More than half of patients did not meet the 90% criterion in the hop tests for safe return to sports five weeks after meniscectomy. There were
Microscopic investigation of structural evolution in even-even N = 60 isotones
Oudih, M. R.; Fellah, M.; Allal, N. H.; Benhamouda, N.
2012-10-20
The ground state properties of even-even N = 60 isotones from the neutron-rich to the proton-rich side are investigated within the self-consistent Skyrme-Hartree-Fock-Bogoliubov theory in the triaxial landscape. Quantities such as binding energies and root-mean-square radii are investigated and compared with available experimental data. The evolution of the potential energy surfaces in the (β, γ) deformation plane is presented and discussed.
Isotonic contraction as a result of cooperation of sarcomeres--a model and simulation outcome.
Wünsch, Z
1996-01-01
The molecular level of the functional structure of the contractile apparatus of cross-striated muscle has been mapped out almost minutely. Most authors accept the basic principles of the theory of sliding filaments and the theory of operation of molecular force generators, which are, of course, progressively updated as new knowledge is integrated. The idea of the model delineated below does not contradict these theories, for it refers to another level of the system's hierarchy. The definition of the system, hereafter referred to as the Ideal Sarcomere (IS), takes into account the fact that, during isotonic contraction, a large number of not wholly independently working sarcomeres and molecular force generators are active in a synergistic way. The shortening velocity of the isotonically contracting IS is determined by the relation between quantities conveying different tasks of active force generators and the influence of the system parameters. Although the IS is derived from simple axiomatic predicates, it has properties which were not premeditated in defining the system and which nevertheless correspond to some properties of the biological original. The equations of the system allow us to calculate the shortening velocity of 'isotonic contraction' and other variables and parameters and show, inter alia, an alternative way to derive and interpret the relations stated in Hill's force-velocity equation. The simulation results indicate that the macroscopic manifestations of isotonic contraction may also be contingent on the properties of the cooperating system of the multitude of sarcomeres, which also constitutes one part of the functional structure of muscle. PMID:8924648
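Hill's force-velocity equation mentioned above, (P + a)(v + b) = (P0 + a)b, can be solved for the shortening velocity at a given load. A small sketch with assumed, typical-order constants (a ≈ 0.25 P0; the b value is likewise illustrative):

```python
def hill_velocity(P, P0, a, b):
    """Shortening velocity from Hill's relation (P + a)(v + b) = (P0 + a)*b,
    solved for v at load P (0 <= P <= P0)."""
    return b * (P0 - P) / (P + a)

# Assumed constants: a ~ 0.25*P0; b sets the velocity scale.
P0, a, b = 1.0, 0.25, 0.3
for load in (0.0, 0.25, 0.5, 1.0):
    print(f"P = {load:.2f}*P0 -> v = {hill_velocity(load, P0, a, b):.3f}")
```

Velocity is maximal (v = b*P0/a) at zero load and falls hyperbolically to zero at the isometric load P0.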
van Lunteren, Erik; Pollarine, Jennifer; Moyer, Michelle
2007-07-01
The hallmark of genetic CLC-1 chloride channel deficiency in myotonic humans, goats and mice is delayed muscle relaxation resulting from persistent electrical discharges. In addition to the ion channel defect, muscles from myotonic humans and mice also have major changes in fibre type and myosin isoform composition, but the extent to which this affects isometric contractions remains controversial. Many muscles, including the diaphragm, shorten considerably during normal activities, but shortening contractions have never been assessed in myotonic muscle. The present study tested the hypothesis that CLC-1 deficiency leads to an impairment of muscle isotonic contractile performance. This was tested in vitro on diaphragm muscle from SWR/J-Clcn1(adr-mto)/J myotonic mice. The CLC-1-deficient muscle demonstrated delayed relaxation, as expected. During the contractile phase, there were significant reductions in power and work across a number of stimulation frequencies and loads in CLC-1-deficient compared with normal muscle, the magnitude of which in many instances exceeded 50%. Reductions in shortening and velocity of shortening occurred, and were more pronounced when calculated as a function of absolute than relative load. However, the maximal unloaded shortening velocity calculated from Hill's equation was not altered significantly. The impaired isotonic contractile performance of CLC-1-deficient muscle persisted during fatigue-inducing stimulation. These data indicate that genetic CLC-1 chloride channel deficiency in mice not only produces myotonia but also substantially worsens the isotonic contractile performance of diaphragm muscle. PMID:17483199
Isotonicity of liver and of kidney tissue in solutions of electrolytes.
OPIE, E L
1959-07-01
Solutions of a wide variety of electrolytes, isotonic with liver or with kidney tissue, have approximately the same osmotic pressure as solutions of sodium chloride isotonic with tissues of the two organs respectively; that is, with solutions approximately twice as concentrated as the sodium chloride of mammalian blood plasma. The molar concentration of various electrolytes isotonic with liver or with kidney tissue immediately after its removal from the body is determined by the molecular weight, valency, and ion-dissociation of these electrolytes in accordance with the well known conditions of osmosis. The plasma membranes of liver and of kidney cells are imperfectly semipermeable to electrolytes, and those that enter the cell, though retarded in so doing, bring about injury which increases permeability to water. The osmotic activity of cells of mammalian liver and kidney immediately after their removal from the body resembles that of plant cells, egg cells of marine invertebrates, and mammalian red blood corpuscles and presumably represents a basic property of living cells by which osmotic pressure may be adjusted to functional need. PMID:13664872
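The stated dependence on molecular weight, valency, and dissociation follows from van't Hoff's relation: at equal osmolarity, the isotonic molar concentration scales inversely with the dissociation factor i. A hedged sketch with idealized, fully dissociated i values and an illustrative tissue osmolarity (~2x the 0.15 M NaCl plasma equivalent, per the abstract):

```python
def isotonic_molarity(target_osmolarity, i):
    """Molarity that yields a target osmolarity for a solute with van't
    Hoff dissociation factor i (idealized, complete dissociation)."""
    return target_osmolarity / i

# Tissue osmolarity taken as ~2x that of 0.15 M NaCl (i = 2), per the
# abstract's "twice as concentrated" observation: 2 * 0.15 * 2 = 0.6 osmol/L.
target = 2 * 0.15 * 2
for solute, i in [("NaCl", 2), ("CaCl2", 3), ("sucrose", 1)]:
    print(f"{solute}: {isotonic_molarity(target, i):.2f} M")
```

A non-dissociating solute like sucrose thus needs twice the molarity of NaCl to be isotonic with the same tissue.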
Computational approach to Thornley's problem by bivariate operational calculus
NASA Astrophysics Data System (ADS)
Bazhlekova, E.; Dimovski, I.
2012-10-01
Thornley's problem is an initial-boundary value problem with a nonlocal boundary condition for a linear one-dimensional reaction-diffusion equation, used as a mathematical model of spiral phyllotaxis in botany. Applying a bivariate operational calculus, we find an explicit representation of the solution, containing two convolution products of special solutions and the arbitrary initial and boundary functions. We use a non-classical convolution with respect to the space variable, extending in this way the classical Duhamel principle. The special solutions involved are represented in the form of fast convergent series. Numerical examples are considered to show the application of the present technique and to analyze the character of the solution.
A new approach to epithelial isotonic fluid transport: an osmosensor feedback model.
Hill, A E; Shachar-Hill, B
2006-03-01
A model for control of the transport rate and osmolarity of epithelial fluid (isotonic transport) is presented by using an analogy with the control of temperature and flow rate in a shower. The model brings recent findings and theory concerning the role of aquaporins in epithelia together with measurements of epithelial paracellular flow into a single scheme. It is not based upon osmotic equilibration across the epithelium but rather on the function of aquaporins as osmotic sensors that control the tonicity of the transported fluid by mixing cellular and paracellular flows, which may be regarded individually as hyper- and hypo-tonic fluids, to achieve near-isotonicity. The system is built on a simple feedback loop and the quasi-isotonic behavior is robust to the precise values of most parameters. Although the two flows are separate, the overall fluid transport rate is governed by the rate of salt pumping through the cell. The model explains many things: how cell pumping and paracellular flow can be coupled via control of the tight junctions; how osmolarity is controlled without depending upon the precise magnitude of membrane osmotic permeability; and why many epithelia have different aquaporins at the two membranes. The model reproduces all the salient features of epithelial fluid transport seen over many years but also indicates novel behavior that may provide a subject for future research and serve to distinguish it from other schemes such as simple osmotic equilibration. Isotonic transport is freed from constraints due to limited permeability of the membranes and the precise geometry of the system. It achieves near-isotonicity in epithelia in which partial water transport by co-transporters may be present, and shows apparent electro-osmotic effects. The model has been developed with a minimum of parameters, some of which require measurement, but the model is flexible enough for the basic idea to be extended both to complex systems of water and salt transport
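The feedback loop described can be caricatured as a controller that mixes a hypertonic cellular flow with a hypotonic paracellular flow until the blend hits the isotonic set point. The sketch below is a toy discrete-time version; all concentrations and the gain are assumptions, not parameters from the paper.

```python
def simulate_mixing(c_cell=400.0, c_para=200.0, target=300.0,
                    gain=0.1, steps=200):
    """Toy osmosensor feedback: the sensor nudges the paracellular flow
    fraction f until the mixed osmolarity f*c_para + (1-f)*c_cell hits
    the isotonic set point. All values (mosmol/L, gain) are illustrative."""
    f = 0.1
    for _ in range(steps):
        mixed = f * c_para + (1 - f) * c_cell
        f += gain * (mixed - target) / (c_cell - c_para)
        f = min(max(f, 0.0), 1.0)
    return f * c_para + (1 - f) * c_cell

print(f"{simulate_mixing():.1f}")  # converges to the 300.0 set point
```

As in the shower analogy, the loop reaches near-isotonicity without either flow being isotonic on its own, and the result is robust to the exact gain value.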
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2009-01-01
Examined are single- and bi-variate geomagnetic precursors for predicting the maximum amplitude (RM) of a sunspot cycle several years in advance. The best single-variate fit is one based on the average of the ap index 36 mo prior to cycle minimum occurrence (E(Rm)), having a coefficient of correlation (r) equal to 0.97 and a standard error of estimate (se) equal to 9.3. Presuming cycle 24 not to be a statistical outlier and its minimum in March 2008, the fit suggests cycle 24's RM to be about 69 +/- 20 (the 90% prediction interval). The weighted mean prediction of 11 statistically important single-variate fits is 116 +/- 34. The best bi-variate fit is one based on the maximum and minimum values of the 12-mma of the ap index; i.e., APM# and APm*, where # means the value post-E(RM) for the preceding cycle and * means the value in the vicinity of cycle minimum, having r = 0.98 and se = 8.2. It predicts cycle 24's RM to be about 92 +/- 27. The weighted mean prediction of 22 statistically important bi-variate fits is 112 +/- 32. Thus, cycle 24's RM is expected to lie somewhere within the range of about 82 to 144. Also examined are the late-cycle 23 behaviors of geomagnetic indices and solar wind velocity in comparison to the mean behaviors of cycles 20-23 and the geomagnetic indices of cycle 14 (RM = 64.2), the weakest sunspot cycle of the modern era.
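The figures of merit quoted for each precursor fit (r and the standard error of estimate se) come from ordinary least-squares regression. A sketch on synthetic stand-in data (the actual ap-index/amplitude pairs are in the paper and are not reproduced here):

```python
import numpy as np

def fit_with_se(x, y):
    """OLS line plus correlation r and standard error of estimate se,
    the two figures of merit quoted for each precursor fit."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = float(np.sqrt(resid @ resid / (len(x) - 2)))
    r = float(np.corrcoef(x, y)[0, 1])
    return slope, intercept, r, se

# Synthetic stand-ins for (36-mo ap average, cycle maximum amplitude).
rng = np.random.default_rng(3)
ap = rng.uniform(8, 25, size=20)
rm = 6.0 * ap + 10.0 + rng.normal(0.0, 9.0, size=20)
slope, intercept, r, se = fit_with_se(ap, rm)
print(f"r = {r:.2f}, se = {se:.1f}")
```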
Bivariate generalized Pareto distribution for extreme atmospheric particulate matter
NASA Astrophysics Data System (ADS)
Amin, Nor Azrita Mohd; Adam, Mohd Bakri; Ibrahim, Noor Akma; Aris, Ahmad Zaharin
2015-02-01
High particulate matter (PM10) levels are a prominent issue causing various impacts on human health and seriously affecting the economy. The asymptotic theory of extreme values is applied to analyze the relation of extreme PM10 data from two nearby air quality monitoring stations. The series of daily maximum PM10 for the Johor Bahru and Pasir Gudang stations are considered for the 2001 to 2010 databases. The 85% and 95% marginal quantiles are applied to determine the threshold values and hence construct the series of exceedances over the chosen threshold. The logistic, asymmetric logistic, negative logistic and asymmetric negative logistic models are considered as the dependence function for the joint distribution of a bivariate observation. Maximum likelihood estimation is employed for parameter estimation. The best fitted model is chosen based on the Akaike Information Criterion and the quantile plots. It is found that the asymmetric logistic model gives the best fit for bivariate extreme PM10 data and shows weak dependence between the two stations.
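The marginal step of the analysis above (choosing a high quantile as threshold and fitting a generalized Pareto distribution to the exceedances) can be sketched as follows. The series is synthetic, and the moment-based GPD estimators stand in for the maximum likelihood estimation used in the paper:

```python
import random, statistics

random.seed(0)
# Synthetic stand-in for a daily-maximum PM10 series (not the actual station data)
pm10 = [sum(random.expovariate(1 / 20.0) for _ in range(3)) for _ in range(3650)]

pm10_sorted = sorted(pm10)
threshold = pm10_sorted[int(0.95 * len(pm10))]      # 95% marginal quantile
exceed = [x - threshold for x in pm10 if x > threshold]

# Method-of-moments estimators for the generalized Pareto distribution:
#   xi    = 0.5 * (1 - mean^2 / var)          (shape)
#   sigma = 0.5 * mean * (1 + mean^2 / var)   (scale)
m = statistics.mean(exceed)
v = statistics.variance(exceed)
xi = 0.5 * (1 - m * m / v)
sigma = 0.5 * m * (1 + m * m / v)
print(f"threshold={threshold:.1f}  xi={xi:.3f}  sigma={sigma:.2f}")
```

The bivariate step would then couple the two fitted margins through one of the logistic-family dependence functions; that part is omitted here.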
Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models
Liu, Ziyue; Cappola, Anne R.; Crofford, Leslie J.; Guo, Wensheng
2013-01-01
The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexible framework of the state space form, the resulting models not only can handle complex individual profiles, but also can incorporate complex relationships between two hormones, including both concurrent and feedback relationships. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls. PMID:24729646
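The Kalman filtering machinery the paper relies on can be illustrated with a minimal univariate local-level model. The model, noise variances, and data below are illustrative toys, not the paper's bivariate hierarchical specification:

```python
import random

random.seed(1)
# Simulate a local-level model: x_t = x_{t-1} + w_t,  y_t = x_t + v_t
q, r = 0.1, 1.0                     # process and observation noise variances
x, xs, ys = 0.0, [], []
for _ in range(200):
    x += random.gauss(0, q ** 0.5)
    xs.append(x)
    ys.append(x + random.gauss(0, r ** 0.5))

# Kalman filter: predict, then update with each observation
m, p, filtered = 0.0, 1.0, []
for y in ys:
    p += q                          # predict (state mean m is unchanged)
    k = p / (p + r)                 # Kalman gain
    m += k * (y - m)                # update mean
    p *= (1 - k)                    # update variance
    filtered.append(m)

mse_raw = sum((y - x) ** 2 for y, x in zip(ys, xs)) / len(xs)
mse_filt = sum((f - x) ** 2 for f, x in zip(filtered, xs)) / len(xs)
print(mse_filt < mse_raw)           # filtering should beat the raw observations
```

The hierarchical bivariate model in the paper stacks such state equations for two hormones and adds population-average and subject-specific layers, but the predict/update recursion is the same.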
A Bayesian semiparametric model for bivariate sparse longitudinal data.
Das, Kiranmoy; Li, Runze; Sengupta, Subhajit; Wu, Rongling
2013-09-30
Mixed-effects models have recently become popular for analyzing sparse longitudinal data that arise naturally in biological, agricultural and biomedical studies. Traditional approaches assume independent residuals over time and explain the longitudinal dependence by random effects. However, when bivariate or multivariate traits are measured longitudinally, this fundamental assumption is likely to be violated because of intertrait dependence over time. We provide a more general framework where the dependence of the observations from the same subject over time is not assumed to be explained completely by the random effects of the model. We propose a novel, mixed model-based approach and estimate the error-covariance structure nonparametrically under a generalized linear model framework. We use penalized splines to model the general effect of time, and we consider a Dirichlet process mixture of normal prior for the random-effects distribution. We analyze blood pressure data from the Framingham Heart Study where body mass index, gender and time are treated as covariates. We compare our method with traditional methods including parametric modeling of the random effects and independent residual errors over time. We conduct extensive simulation studies to investigate the practical usefulness of the proposed method. The current approach is very helpful in analyzing bivariate irregular longitudinal traits. PMID:23553747
Household Poverty Dynamics in Malawi: A Bivariate Probit Analysis
NASA Astrophysics Data System (ADS)
Kenala Bokosi, Fanwell
The aim of this study is to identify the sources of expenditure and poverty dynamics among Malawian households between 1998 and 2002 and to model poverty transitions in Malawi using a bivariate probit model with endogenous selection to address the initial conditions problem. The exogeneity of the initial state is strongly rejected; ignoring this could result in considerable overstatement of the effects of the explanatory factors. The results of the bivariate probit model indicate that education of the household head, per capita acreage cultivated and changes in household size are significantly related to the probability of being poor in 2002 irrespective of the poverty status in 1998. For those households who were poor in 1998, the probability of being poor in 2002 was significantly influenced by household size, value of livestock owned and mean time to services, while residence in the Northern region was a significant variable in determining the probability of being poor in 2002 for households that were not poor in 1998.
Higuchi, H; Goldman, Y E
1995-01-01
We measured isotonic sliding distance of single skinned fibers from rabbit psoas muscle when known and limited amounts of ATP were made available to the contractile apparatus. The fibers were immersed in paraffin oil at 20 degrees C, and laser pulse photolysis of caged ATP within the fiber initiated the contraction. The amount of ATP released was measured by photolyzing 3H-ATP within fibers, separating the reaction products by high-pressure liquid chromatography, and then counting the effluent peaks by liquid scintillation. The fiber stiffness was monitored to estimate the proportion of thick and thin filament sites interacting during filament sliding. The interaction distance, Di, defined as the sliding distance while a myosin head interacts with actin in the thin filament per ATP molecule hydrolyzed, was estimated from the shortening distance, the number of ATP molecules hydrolyzed by the myosin heads, and the stiffness. Di increased from 11 to 60 nm as the isotonic tension was reduced from 80% to 6% of the isometric tension. Velocity and Di increased with the concentration of ATP available. As isotonic load was increased, the interaction distance decreased linearly with decrease of the shortening velocity and extrapolated to 8 nm at zero velocity. Extrapolation of the relationship between Di and velocity to saturating ATP concentration suggests that Di reaches 100-190 nm at high shortening velocity. The interaction distance corresponds to the sliding distance while cross-bridges are producing positive (working) force plus the distance while they are dragging (producing negative forces). The results indicate that the working and drag distances increase as the velocity increases. Because Di is larger than the size of either the myosin head or the actin monomer, the results suggest that for each ATPase cycle, a myosin head interacts mechanically with several actin monomers either while working or while producing drag. PMID:8534820
The lateral intercellular space as osmotic coupling compartment in isotonic transport.
Larsen, E H; Willumsen, N J; Møbjerg, N; Sørensen, J N
2009-01-01
Solute-coupled water transport and isotonic transport are basic functions of low- and high-resistance epithelia. These functions are studied with the epithelium bathed on the two sides with physiological saline of similar composition. Hence, at transepithelial equilibrium water enters the epithelial cells from both sides, and with the reflection coefficient of the tight junction being larger than that of the interspace basement membrane, all of the water leaves the epithelium through the interspace basement membrane. The common design of transporting epithelia leads to the theory that an osmotic coupling of water absorption to ion flow is energized by lateral Na(+)/K(+) pumps. We show that the theory accounts quantitatively for steady and time-dependent states of solute-coupled fluid uptake by toad skin epithelium. Our experimental results exclude definitively three alternative theories of epithelial solute-water coupling: stoichiometric coupling at the molecular level by transport proteins like SGLT1, electro-osmosis and a 'junctional fluid transfer mechanism'. Convection-diffusion out of the lateral space constitutes the fundamental problem of isotonic transport by making the emerging fluid hypertonic relative to the fluid in the lateral intercellular space. In the Na(+) recirculation theory the 'surplus of solutes' is returned to the lateral space via the cells energized by the lateral Na(+)/K(+) pumps. We show that this theory accounts quantitatively for isotonic and hypotonic transport at transepithelial osmotic equilibrium as observed in toad skin epithelium in vitro. Our conclusions are further developed for discussing their application to solute-solvent coupling in other vertebrate epithelia such as small intestine, proximal tubule of glomerular kidney and gallbladder. Evidence is discussed that the Na(+) recirculation theory is not irreconcilable with the wide range of metabolic cost of Na(+) transport observed in fluid-transporting epithelia. PMID:18983444
Spectrum-based estimators of the bivariate Hurst exponent.
Kristoufek, Ladislav
2014-12-01
We discuss two alternative spectrum-based estimators of the bivariate Hurst exponent in the power-law cross-correlations setting, the cross-periodogram and local X-Whittle estimators, as generalizations of their univariate counterparts. Because the spectrum-based estimators depend on the part of the spectrum taken into consideration during estimation, a simulation study is provided showing the performance of the estimators under a varying bandwidth parameter as well as varying correlation between the processes and their specification. These estimators are less biased than the existing averaged periodogram estimator, which, however, has slightly lower variance. The spectrum-based estimators can serve as a good complement to the popular time domain estimators. PMID:25615143
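A minimal sketch of the cross-periodogram idea, under the standard assumption that the cross-spectrum scales as |f_xy(λ)| ∝ λ^(1-2·H_xy) so that a log-log regression slope recovers H_xy. The two series are synthetic white-noise-driven processes, so the estimate should land near H_xy = 0.5:

```python
import cmath, math, random

random.seed(2)
n = 512
common = [random.gauss(0, 1) for _ in range(n)]
x = [c + 0.5 * random.gauss(0, 1) for c in common]   # two correlated series
y = [c + 0.5 * random.gauss(0, 1) for c in common]

# Cross-periodogram at the first m Fourier frequencies lambda_j = 2*pi*j/n
m = 50
lams, cp = [], []
for j in range(1, m + 1):
    lam = 2 * math.pi * j / n
    wx = sum(x[t] * cmath.exp(-1j * lam * t) for t in range(n))
    wy = sum(y[t] * cmath.exp(-1j * lam * t) for t in range(n))
    lams.append(math.log(lam))
    cp.append(math.log(abs(wx * wy.conjugate()) / (2 * math.pi * n)))

# Log-log regression: |f_xy(lam)| ~ lam^(1 - 2*H_xy)  =>  slope = 1 - 2*H_xy
lbar, cbar = sum(lams) / m, sum(cp) / m
slope = sum((l - c_) * 0 for l, c_ in []) or \
        sum((l - lbar) * (c_ - cbar) for l, c_ in zip(lams, cp)) / \
        sum((l - lbar) ** 2 for l in lams)
H_xy = (1 - slope) / 2
print(round(H_xy, 2))
```

The local X-Whittle variant replaces the regression with a local likelihood over the same low-frequency band; bandwidth choice (m) drives the bias-variance trade-off noted in the abstract.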
Uemura, Tetsuji; Matsumoto, Naozumi; Tanabe, Tsuyoshi; Saitoh, Tomoichi; Matsushita, Shigeto; Mitsukawa, Nobuyuki
2005-05-01
The following report describes the combination of surgical correction with intraoperative distention using isotonic saline injection and the rotation flap method for correction of cryptotia. This technique provided extensive skin coverage of the upper portion of the auricle and was an easy and quick method of dissecting the cartilage of the posterior auricle. The main advantages of this technique include achievement of skin expansion without the need for expander material, simple design of the skin incision, and easy dissection of the cartilage. Although one patient experienced partial congestion in the upper tip of the rotation flap, no other complications occurred. Further, cryptotia did not recur. PMID:15915119
Projected quasiparticle calculations for the N=82 odd-proton isotones
Losano, L.; Dias, H.
1991-12-01
The structure of low-lying states in odd-mass N=82 isotones (135 ≤ A ≤ 145) is investigated in terms of a number-projected one- and three-quasiparticle Tamm-Dancoff approximation. A surface-delta interaction is taken as the residual nucleon-nucleon interaction. Excitation energies, dipole and quadrupole moments, and B(M1) and B(E2) values are calculated and compared with the experimental data.
Bonn potential and shell-model calculations for N=126 isotones
Coraggio, L.; Covello, A.; Gargano, A.; Itaco, N.; Kuo, T. T. S.
1999-12-01
We have performed shell-model calculations for the N=126 isotones ²¹⁰Po, ²¹¹At, and ²¹²Rn using a realistic effective interaction derived from the Bonn-A nucleon-nucleon potential by means of a G-matrix folded-diagram method. The calculated binding energies, energy spectra, and electromagnetic properties show remarkably good agreement with the experimental data. The results of this paper complement those of our previous study on neutron hole Pb isotopes, confirming that realistic effective interactions are now able to reproduce with quantitative accuracy the spectroscopic properties of complex nuclei. (c) 1999 The American Physical Society.
α-decay studies of the exotic N=125, 126, and 127 isotones
Xu Chang; Ren Zhongzhou
2007-08-15
The α-decay half-lives of the exotic N=125, 126, and 127 isotones (Po, Rn, Ra, Th, and U) are systematically studied by the density-dependent cluster model (DDCM). The influence of the neutron shell closure N=126 on the α-cluster formation and penetration probabilities is analyzed and discussed in detail. By combining the DDCM and a two-level microscopic model together, the experimental half-lives of α transitions to both the ground state and the excited state in the daughter nuclei are reproduced very well.
Simons, Samantha; Abasolo, Daniel; Sauseng, Paul
2015-08-01
The spurious increase in coherence of electroencephalogram (EEG) signals between distant electrode points has long been understood to be due to volume conduction of the EEG signal. Reducing the volume conduction components of EEG recordings in pre-processing attenuates this. However, the effect of volume conduction on non-linear signal processing of EEG signals is yet to be fully described. This pilot study aimed to investigate the impact of volume conduction on results calculated with a distance-based, bivariate form of Lempel-Ziv Complexity (dLZC) by analyzing EEG signals from Alzheimer's disease (AD) patients and healthy age-matched controls with and without pre-processing with Current Source Density (CSD) transformation. Spurious statistically significant differences between AD patients' and controls' EEG signals seen without CSD pre-processing were not seen with CSD volume conduction mitigation. There was, however, overlap in the region of electrodes which were seen to hold this statistically significant information. These results show that, while previously published findings are still valid, volume conduction mitigation is required to ensure non-linear signal processing methods identify changes due to the purely local signal alone. PMID:26738005
Search for the Skyrme-Hartree-Fock solutions for chiral rotation in N=75 isotones
Olbratowski, P.; Dobaczewski, J.; Dudek, J.
2006-05-15
A search for self-consistent solutions for the chiral rotational bands in the N=75 isotones ¹³⁰Cs, ¹³²La, ¹³⁴Pr, and ¹³⁶Pm is performed within the Skyrme-Hartree-Fock cranking approach using SKM* and SLy4 parametrizations. The dependence of the solutions on the time-odd contributions in the energy functional is studied. From among the four isotones considered, self-consistent chiral solutions are obtained only in ¹³²La. The microscopic calculations are compared with the ¹³²La experimental data and with results of a classical model that contains all the mechanisms underlying the chirality of the collective rotational motion. Strong similarities between the Hartree-Fock and classical model results are found. The suggestion formulated earlier by the authors that the chiral rotation cannot exist below a certain critical frequency is further illustrated and discussed, together with the microscopic origin of a transition from planar to chiral rotation in nuclei. We also formulate the separability rule by which the tilted-axis-cranking solutions can be inferred from three independent principal-axis-cranking solutions corresponding to three different axes of rotation.
Cotter, T P; Gebruers, E M; Hall, W J; O'Sullivan, M F
1986-01-01
In a group of healthy humans, plasma vasopressin (AVP) levels fell on drinking either Tyrode or mannitol solutions isosmotic with plasma. Both the timing and magnitude of the fall were appropriate to account for the transient diuresis which followed the drinking. Although plasma expansion follows drinking Tyrode solution it occurred too late to account for the fall in plasma AVP. It was also too small to inhibit AVP secretion. Even though plasma volume tended to contract on drinking isosmotic mannitol solution a fall in plasma AVP and a diuresis occurred, similar to those found after drinking Tyrode solution. These findings appear to eliminate plasma volume expansion as the stimulus for the fall in plasma AVP and the associated diuresis on drinking isotonic fluids. In a further group of human subjects, bypassing the oropharynx by intragastric infusion resulted in a slower onset of diuresis after a water load. We suggest that receptors, as yet undefined, in the upper gastrointestinal tract contribute to the early stages of a water diuresis and account for the apparently inappropriate transient diuresis which follows the drinking of isotonic fluids. PMID:3098967
Insulin and glucose responses during bed rest with isotonic and isometric exercise
NASA Technical Reports Server (NTRS)
Dolkas, C. B.; Greenleaf, J. E.
1977-01-01
The effects of daily intensive isotonic (68% maximum oxygen uptake) and isometric (21% maximum extension force) leg exercise on plasma insulin and glucose responses to an oral glucose tolerance test (OGTT) during 14-day bed-rest (BR) periods were investigated in seven young healthy men. The OGTT was given during ambulatory control and on day 10 of the no-exercise, isotonic, and isometric exercise BR periods during the 15-wk study. The subjects were placed on a controlled diet starting 10 days before each BR period. During BR, basal plasma glucose concentration remained unchanged with no exercise, but increased (P < 0.05) to 87-89 mg/100 ml with both exercise regimens on day 2, and then fell slightly below control levels on day 13. The fall in glucose content during BR was independent of the exercise regimen and was an adjustment for the loss of plasma volume. The intensity of the responses of insulin and glucose to the OGTT was inversely proportional to the total daily energy expenditure during BR. It was estimated that at least 1020 kcal/day must be provided by supplemental exercise to restore the hyperinsulinemia to control levels.
Heat-induced changes in the mechanics of a collagenous tissue: isothermal, isotonic shrinkage.
Chen, S S; Wright, N T; Humphrey, J D
1998-06-01
We present data from isothermal, isotonic-shrinkage tests wherein bovine chordae tendineae were subjected to well-defined constant temperatures (from 65 to 90 degrees C), durations of heating (from 180 to 3600 s), and isotonic uniaxial stresses during heating (from 100 to 650 kPa). Tissue response during heating and "recovery" at 37 degrees C following heating was evaluated in terms of the axial shrinkage, a gross indicator of underlying heat-induced denaturation. There were three key findings. First, scaling the heating time via temperature and load-dependent characteristic times for the denaturation process collapsed all shrinkage data to a single curve, and thereby revealed a time-temperature-load equivalency. Second, the characteristic times exhibited an Arrhenius-type behavior with temperature wherein the slopes were nearly independent of applied load--this suggested that applied loads during heating affect the activation entropy, not energy. Third, all specimens exhibited a time-dependent, partial recovery when returned to 37 degrees C following heating, but the degree of recovery decreased with increases in the load imposed during heating. These new findings on heat-induced changes in tissue behavior will aid in the design of improved clinical heating protocols and provide guidance for the requisite constitutive formulations. PMID:10412406
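The Arrhenius-type behavior of the characteristic times can be sketched as a linear fit of ln(τ) against 1/T; the characteristic times and activation-energy value below are hypothetical, not the paper's measurements:

```python
import math

# Hypothetical characteristic shrinkage times at the paper's temperature range
# (65-90 C, here in kelvin); Ea/R is an assumed illustrative value.
T = [338.15, 343.15, 348.15, 353.15, 358.15, 363.15]
Ea_over_R = 15000.0                      # assumed activation energy / R, in K
tau = [1e-15 * math.exp(Ea_over_R / t) for t in T]

# Arrhenius fit: ln(tau) = ln(A) + (Ea/R) * (1/T)  ->  slope recovers Ea/R
xs = [1 / t for t in T]
ys = [math.log(v) for v in tau]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
print(round(slope))
```

The paper's finding that load shifts the intercept (entropy) but not the slope (energy) would appear here as parallel fitted lines for different isotonic stresses.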
Fragmentation and systematics of the pygmy dipole resonance in the stable N=82 isotones
Savran, D.; Loeher, B.; Elvers, M.; Endres, J.; Zilges, A.; Fritzsche, M.; Pietralla, N.; Ponomarev, V. Yu.; Romig, C.; Schnorrenberger, L.; Sonnabend, K.
2011-08-15
The low-lying electric dipole (E1) strength in the semimagic nucleus {sup 136}Xe has been measured, which finalizes the systematic survey to investigate the so-called pygmy dipole resonance (PDR) in all stable even N=82 isotones with the method of nuclear resonance fluorescence using real photons in the entrance channel. In all cases, a fragmented resonance-like structure of E1 strength is observed in the energy region 5-8 MeV. An analysis of the fragmentation of the strength reveals that the degree of fragmentation decreases toward the proton-deficient isotones, while the total integrated strength increases, indicating a dependence of the total strength on the neutron-to-proton ratio. The experimental results are compared to microscopic calculations within the quasiparticle phonon model. The calculation includes complex configurations of up to three phonons and is able to reproduce also the fragmentation of the E1 strength, which allows us to draw conclusions on the damping of the PDR. Calculations and experimental data are in good agreement on the degree of fragmentation and also on the integrated strength if the sensitivity limit of the experiments is taken into account.
Role of the paracellular pathway in isotonic fluid movement across the renal tubule.
Boulpaep, E L; Sackin, H
1977-01-01
Evidence for a highly permeable paracellular shunt in the proximal tubule is reviewed. The paracellular pathway is described as a crucial site for the regulation of net absorption and for solute-solvent interaction. Available models for the coupling of salt and water transport are assessed with respect to the problem of isotonic water movement. Two new models are proposed taking into account that the tight junctions are permeable to salt and water and that active transport sites for sodium are distributed uniformly along the lateral cell membrane. The first model (continuous model) is a modification of Diamond and Bossert's proposal using different assumptions and boundary conditions. No appreciable standing gradients are predicted by this model. The second model (compartmental model) is an expansion of Curran's double membrane model by including additional compartments and driving forces. Both models predict a reabsorbate which is not isosmotic. For the particular case of the proximal tubule it is shown that in the presence of a leaky epithelium these deviations from isotonicity might have escaped experimental observation. PMID:331692
Different landslide sampling strategies in a grid-based bi-variate statistical susceptibility model
NASA Astrophysics Data System (ADS)
Hussin, Haydar Y.; Zumpano, Veronica; Reichenbach, Paola; Sterlacchini, Simone; Micu, Mihai; van Westen, Cees; Bălteanu, Dan
2016-01-01
This study had three aims. The first was to assess the performance of the weights-of-evidence (WofE) landslide susceptibility model in areas that are very different in terms of size, geoenvironmental settings, and landslide types. The second was to test the appropriate strategies to sample the mapped landslide polygon. The final aim was to evaluate the performance of the method to changes in the landslide sample size used to train the model. The method was applied to two areas: the Fella River basin (eastern Italian Alps) containing debris flows, and Buzau County (Romanian Carpathians) with shallow landslides. The three landslide sampling strategies used were: (1) the landslide scarp centroid, (2) points populating the scarp on a 50-m grid, and (3) the entire scarp polygon. The highest success rates were obtained when sampling shallow landslides as 50-m grid-points and debris flow scarps as polygons. Prediction rates were highest when using the entire scarp polygon method for both landslide types. The sample size test using the landslide centroids showed that a sample of 104 debris flow scarps was sufficient to predict the remaining 941 debris flows in the Fella River basin, while 161 shallow landslides was the minimum required number to predict the remaining 1451 scarps in Buzau County. Below these landslide sample thresholds, model performance was too low. However, using more landslides than the threshold produced a plateau effect with little to no increase in the model performance rates.
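The weights-of-evidence computation underlying such grid-based susceptibility models can be sketched on a toy grid; the factor classes and cell counts below are hypothetical:

```python
import math

# Toy 1-D "map": each cell carries a factor class and a landslide flag (1/0).
# Counts are hypothetical, chosen only to illustrate the weight computation.
cells = [("steep", 1)] * 80 + [("steep", 0)] * 220 + \
        [("gentle", 1)] * 20 + [("gentle", 0)] * 680

n_land = sum(f for _, f in cells)        # total landslide cells
n_tot = len(cells)

def wofe(cls):
    """Positive and negative weights of evidence for one factor class."""
    b_l = sum(1 for c, f in cells if c == cls and f == 1)   # class & landslide
    b_nl = sum(1 for c, f in cells if c == cls and f == 0)  # class & no landslide
    w_plus = math.log((b_l / n_land) / (b_nl / (n_tot - n_land)))
    nb_l, nb_nl = n_land - b_l, (n_tot - n_land) - b_nl
    w_minus = math.log((nb_l / n_land) / (nb_nl / (n_tot - n_land)))
    return w_plus, w_minus

wp, wm = wofe("steep")
print(round(wp, 2), round(wm, 2))   # positive W+ : class favours landslides
```

In a full model, the weights of all conditionally independent factor maps are summed cell by cell (as in the study below) to produce the susceptibility map, which is then validated against a held-out landslide subset.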
NASA Astrophysics Data System (ADS)
Araújo, J. P. C.; DA Silva, L. M.; Dourado, F. A. D.; Fernandes, N.
2015-12-01
Landslides are the most damaging natural hazard in the mountainous region of Rio de Janeiro State in Brazil, responsible for thousands of deaths and important financial and environmental losses. However, this region currently has few landslide susceptibility maps implemented on an adequate scale. Identification of landslide susceptibility areas is fundamental in successful land use planning and management practices to reduce risk. This paper applied the Bayes' theorem based on weight of evidence (WoE) using 8 landslide-related factors in a geographic information system (GIS) for landslide susceptibility mapping. 378 landslide locations, triggered by the January 2011 rainfall event, were identified and mapped on a selected basin in the city of Nova Friburgo. The landslide scars were divided into two subsets: training and validation subsets. The 8 landslide-related factors weighted by WoE were tested with the chi-square test to indicate which variables are conditionally independent of each other and could be used in the final map. Finally, the maps of weighted factors were summed up to construct the landslide susceptibility map, which was validated with the validation landslide subset. According to the results, slope, aspect and contributing area showed the highest positive spatial correlation with landslides. In the landslide susceptibility map, 21% of the area presented very low and low susceptibilities with 3% of the validation scars, 41% presented medium susceptibility with 22% of the validation scars and 38% presented high and very high susceptibilities with 75% of the validation scars. The very high susceptibility class accounts for 16% of the basin area and contains 54% of all scars. The approach used in this study can be considered very useful since 75% of the area affected by landslides was included in the high and very high susceptibility classes.
SEMG analysis of astronaut upper arm during isotonic muscle actions with normal standing posture
NASA Astrophysics Data System (ADS)
Qianxiang, Zhou; Chao, Ma; Xiaohui, Zheng
sEMG analysis of astronaut upper arm during isotonic muscle actions with normal standing posture. Introduction: Research on isotonic muscle actions using surface electromyography (sEMG) is becoming a popular topic in the fields of astronaut life support training and rehabilitation. Researchers have paid particular attention to sEMG signal processing that reduces the influence of noise produced during monitoring, and to the estimation of fatigue during isotonic muscle actions at different force levels using parameters obtained from sEMG signals such as conduction velocity (CV), median frequency (MDF), and mean frequency (MNF). As research has deepened, more and more studies of muscle fatigue during isotonic muscle actions combine sEMG analysis with the subjective Borg scale. In this paper, the relationship between an sEMG-based fatigue variable and the Borg scale during isotonic muscle actions of the upper arm at different contraction levels is investigated. Methods: 13 young male subjects (23.4±2.45 years, 64.7±5.43 kg, 171.7±5.41 cm) with normal standing posture performed isotonic actions of the upper arm at different force levels (10% MVC, 30% MVC and 50% MVC), where MVC denotes the maximal voluntary contraction, obtained first in the experiment. The sEMG was recorded during the experiments, and the Borg scale was recorded for each contraction level. Using the one-third octave band method, the fatigue variable p based on sEMG was set up as p = Σ_i g(f_i)·F(f_i), where g(f_i) is the frequency weighting factor, g(f_i) = 0.42 + 0.5 cos(π f_i/f_0) + 0.08 cos(2π f_i/f_0) for 0 < f_i ≤ f_0, and g(f_i) = 0 for f_i > f_0. From these equations p can be computed, and the relationship between the variable p and the Borg scale investigated. Results: In the research, three kinds of fitted curves between
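The fatigue variable p can be sketched directly from the formula as reconstructed above; the cutoff frequency f_0, band centre frequencies and band powers below are hypothetical illustrations, not the study's data:

```python
import math

f0 = 60.0   # assumed cutoff frequency (Hz); illustrative only

def g(f):
    # Raised-cosine weighting reconstructed from the abstract: it emphasises
    # low-frequency power, which grows as the muscle fatigues.
    if 0 < f <= f0:
        return 0.42 + 0.5 * math.cos(math.pi * f / f0) + \
               0.08 * math.cos(2 * math.pi * f / f0)
    return 0.0

# Hypothetical one-third-octave band centre frequencies and sEMG band powers
freqs =    [20, 25, 31.5, 40, 50, 63, 80, 100]
fresh =    [1.0, 1.2, 1.5, 1.8, 2.0, 1.6, 1.2, 0.8]   # non-fatigued spectrum
fatigued = [2.5, 2.6, 2.4, 1.9, 1.4, 0.9, 0.5, 0.3]   # power shifted downward

def p(power):
    # p = sum_i g(f_i) * F(f_i) over the one-third-octave bands
    return sum(g(f) * F for f, F in zip(freqs, power))

print(round(p(fresh), 2), round(p(fatigued), 2))
```

Because fatigue shifts sEMG power toward low frequencies, the weighted sum p rises with fatigue, which is what allows it to be related to the Borg scale.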
Chromosome heteromorphism quantified by high-resolution bivariate flow karyotyping.
Trask, B; van den Engh, G; Mayall, B; Gray, J W
1989-11-01
Maternal and paternal homologues of many chromosome types can be differentiated on the basis of their peak position in Hoechst 33258 versus chromomycin A3 bivariate flow karyotypes. We demonstrate here the magnitude of DNA content differences among normal chromosomes of the same type. Significant peak-position differences between homologues were observed for an average of four chromosome types in each of the karyotypes of 98 different individuals. The frequency of individuals with differences in homologue peak positions varied among chromosome types: e.g., chromosome 15, 61%; chromosome 3, 4%. Flow karyotypes of 33 unrelated individuals were compared to determine the range of peak position among normal chromosomes. Chromosomes Y, 21, 22, 15, 16, 13, 14, and 19 were most heteromorphic, and chromosomes 2-8 and X were least heteromorphic. The largest chromosome 21 was 45% larger than the smallest 21 chromosome observed. The base composition of the variable regions differed among chromosome types. DNA contents of chromosome variants determined from flow karyotypes were closely correlated to measurements of DNA content made of gallocyanin chrome alum-stained metaphase chromosomes on slides. Fluorescence in situ hybridization with chromosome-specific repetitive sequences indicated that variability in their copy number is partly responsible for peak-position variability in some chromosomes. Heteromorphic chromosomes are identified for which parental flow karyotype information will be essential if de novo rearrangements resulting in small DNA content changes are to be detected with flow karyotyping. PMID:2479266
Joint association analysis of bivariate quantitative and qualitative traits.
Yuan, Mengdie; Diao, Guoqing
2011-01-01
Univariate genome-wide association analysis of quantitative and qualitative traits has been investigated extensively in the literature. In the presence of correlated phenotypes, it is more intuitive to analyze all phenotypes simultaneously. We describe an efficient likelihood-based approach for the joint association analysis of quantitative and qualitative traits in unrelated individuals. We assume a probit model for the qualitative trait, under which an unobserved latent variable and a prespecified threshold determine the value of the qualitative trait. To jointly model the quantitative and qualitative traits, we assume that the quantitative trait and the latent variable follow a bivariate normal distribution. The latent variable is allowed to be correlated with the quantitative phenotype. Simultaneous modeling of the quantitative and qualitative traits allows us to make more precise inference on the pleiotropic genetic effects. We derive likelihood ratio tests for the testing of genetic effects. An application to the Genetic Analysis Workshop 17 data is provided. The new method yields reasonable power and meaningful results for the joint association analysis of the quantitative trait Q1 and the qualitative trait disease status at SNPs whose minor allele frequency (MAF) is not too small. PMID:22373162
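The likelihood described above can be sketched as follows. The function name, the parameterization, and the unit-variance convention for the latent variable (standard for probit models) are assumptions for illustration; in the paper, genotype effects would enter through the mean and threshold parameters:

```python
import numpy as np
from scipy.stats import norm

def joint_loglik(y, d, mu_y, sigma_y, rho, tau):
    """Log-likelihood of a quantitative trait y and a binary trait d under
    a joint model: (Y, L) bivariate normal with corr(Y, L) = rho, and
    d = 1 iff the latent L exceeds the threshold tau (probit link).
    L has unit variance, so P(d=1 | y) uses the conditional normal of L."""
    ll_y = norm.logpdf(y, mu_y, sigma_y)            # marginal density of Y
    cond_mean = rho * (y - mu_y) / sigma_y          # E[L | Y = y]
    cond_sd = np.sqrt(1.0 - rho ** 2)               # SD[L | Y = y]
    p1 = norm.sf(tau, cond_mean, cond_sd)           # P(L > tau | Y = y)
    return float(np.sum(ll_y + np.where(d == 1, np.log(p1), np.log1p(-p1))))
```

A likelihood ratio test of a genetic effect would compare this log-likelihood maximized with and without the genotype term.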
+Gz tolerance in man after 14-day bedrest periods with isometric and isotonic exercise conditioning.
Greenleaf, J E; Bernauer, E M; Morse, J T; Sandler, H; Armbruster, R; Sagan, L; van Beaumont, W
1975-05-01
The purpose of this study was to determine the effects of isometric or isotonic exercise training on post-bedrest +Gz tolerance. Seven male volunteers, 19-22 years, underwent accelerations of +2.1 Gz (740 s), +3.2 Gz (327 s), and +3.8 Gz (312 s) in a selected, randomized order; the ramp to peak acceleration was 1.8 G/min. The centrifugation runs were terminated by loss of central vision (blackout) to a white light with a luminance of 3.15 × 10⁻⁵ candle/cm² (0.092 ft-lambert). The study began with a 14-d ambulatory control period, followed by three 14-d bedrest periods (each separated by a 21-d recovery period) and then a final week of recovery. During the ambulatory periods, the subjects exercised on a bicycle ergometer at 50% of their maximal oxygen uptake (max VO2) for 1 h/d. During two of the three bedrest periods, the subjects performed in the supine position one of two routines, either isometric exercise (21% of max leg extension force for 1 min followed by 1-min rest) or isotonic exercise (68% of max VO2) for 0.5 h in the morning and afternoon. During the third bedrest period, no exercise was performed. In general, +Gz tolerance was reduced by 24% to 35% (p less than or equal to 0.05) after bedrest. Compared with control values, there were significant reductions in average tolerance times after bedrest with no exercise and isotonic exercise at all G levels. With isometric exercise, there was a significant decrease in tolerance at 2.1 Gz but not at 3.2 Gz or 3.8 Gz, even though the latter tolerances were reduced 15.6% and 10.0%, respectively. Both exercise regimens maintained tolerance at levels equal to or above that obtained with no exercise. Compared with control values, average tolerances were lower (p less than or equal to 0.05) after the two recovery periods between the bedrest periods (minus 24% to minus 26% at 3.2 Gz and 3.8 Gz), indicating that 3 weeks of ambulation was not sufficient time for full recovery from the deconditioning induced in
Sonka, Milan; Abramoff, Michael D.
2013-01-01
In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the distribution assumed for the noise-free data plays a key role in the performance of the MMSE estimator, a prior pdf for the noise-free 3D complex wavelet coefficients is proposed which is able to model the main statistical properties of the wavelet coefficients. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters, which are able to capture the heavy-tailed property and inter- and intrascale dependencies of coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation that results in visual quality improvement. On this basis, several OCT despeckling algorithms are obtained, using a Gaussian/two-sided Rayleigh noise distribution and a homomorphic/nonhomomorphic model. In order to evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is the one for the nonhomomorphic model in the presence of Gaussian noise, which results in an improvement of 7.8 ± 1.7 in CNR. PMID:24222760
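Under a Gaussian-mixture prior with additive Gaussian noise, the MMSE estimate reduces to posterior-weighted Wiener shrinkage. This generic sketch shows that idea with global mixture parameters; the names are assumptions, and it is not the authors' local anisotropic-window implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

def mmse_mixture(y, alphas, covs, sigma_n):
    """MMSE estimate of a noise-free coefficient pair from a noisy
    observation y = w + n, n ~ N(0, sigma_n^2 I), under the prior
    w ~ sum_k alphas[k] * N(0, covs[k]) (two bivariate Gaussians in the
    paper). Each component contributes a Wiener filter, weighted by its
    posterior responsibility given y."""
    noise = sigma_n ** 2 * np.eye(2)
    # responsibilities: alpha_k * N(y; 0, Sigma_k + sigma_n^2 I), normalized
    evid = np.array([a * multivariate_normal.pdf(y, mean=np.zeros(2), cov=c + noise)
                     for a, c in zip(alphas, covs)])
    resp = evid / evid.sum()
    # mix the component-wise Wiener estimates Sigma_k (Sigma_k + noise)^-1 y
    return sum(r * c @ np.linalg.solve(c + noise, y) for r, c in zip(resp, covs))
```

With a single identity-covariance component and unit noise variance this collapses to the familiar Wiener factor of one half.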
Uncertain Characterization of Flood Hazard Using Bivariate Analysis Based on Copulas
NASA Astrophysics Data System (ADS)
Candela, Angela; Tito Aronica, Giuseppe
2015-04-01
This study presents a methodology to derive probabilistic flood hazard maps in flood-prone areas, taking into account uncertainties in the definition of design hydrographs. In particular, we present an innovative approach to obtain probabilistic inundation and flood hazard maps where the hydrological input (synthetic flood design event) to a 2D hydraulic model has been defined by generating flood peak discharges and volumes from a bivariate statistical analysis, through the use of copulas. This study also aims to quantify the contribution of boundary-condition uncertainty in order to explore the impact of this uncertainty on probabilistic flood hazard mapping. The uncertainty of extreme flood events is considered in terms of different possible combinations of peak discharge and flood volume given by the copula. Further, we analyzed the role of a multivariate probability hydrological analysis on inundation and flood hazard maps, highlighting the differences between deterministic and probabilistic approaches. The methodology has been applied to a study area located in Sicily that was subject to several flooding events in the past.
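Generating correlated peak-discharge/volume pairs from a copula can be sketched as below. The Gaussian copula and Gumbel (EV1) marginals are illustrative choices, since the abstract does not name the fitted copula family or marginal distributions:

```python
import numpy as np
from scipy.stats import norm, gumbel_r

def sample_flood_events(n, rho, peak_loc, peak_scale, vol_loc, vol_scale, seed=0):
    """Sample n correlated (peak discharge, flood volume) pairs through a
    Gaussian copula with Gumbel marginals: draw correlated standard
    normals, map them to uniforms via the normal CDF, then to the
    marginals via the Gumbel quantile function."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    u = norm.cdf(z)                                   # dependent uniform marginals
    peak = gumbel_r.ppf(u[:, 0], peak_loc, peak_scale)
    vol = gumbel_r.ppf(u[:, 1], vol_loc, vol_scale)
    return peak, vol
```

Each sampled pair would then drive one run of the 2D hydraulic model, and the ensemble of inundation extents yields the probabilistic hazard map.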
Wen, Yalu; Schaid, Daniel J; Lu, Qing
2013-04-01
Although comorbidity among complex diseases (e.g., drug dependence syndromes) is well documented, genetic variants contributing to the comorbidity are still largely unknown. The discovery of genetic variants and their interactions contributing to comorbidity will likely shed light on underlying pathophysiological and etiological processes, and promote effective treatments for comorbid conditions. For this reason, studies to discover genetic variants that foster the development of comorbidity represent high-priority research projects, as manifested in the behavioral genetics studies now underway. The yield from these studies can be enhanced by adopting novel statistical approaches, with the capacity of considering multiple genetic variants and possible interactions. For this purpose, we propose a bivariate Mann-Whitney (BMW) approach to unravel genetic variants and interactions contributing to comorbidity, as well as those unique to each comorbid condition. Through simulations, we found BMW outperformed two commonly adopted approaches in a variety of underlying disease and comorbidity models. We further applied BMW to datasets from the Study of Addiction: Genetics and Environment, investigating the contribution of 184 known nicotine dependence (ND) and alcohol dependence (AD) single nucleotide polymorphisms (SNPs) to the comorbidity of ND and AD. The analysis revealed a candidate SNP from CHRNA5, rs16969968, associated with both ND and AD, and replicated the findings in an independent dataset with a P-value of 1.06 × 10(-03) . PMID:23334941
A semiparametric bivariate probit model for joint modeling of outcomes in STEMI patients.
Ieva, Francesca; Marra, Giampiero; Paganoni, Anna Maria; Radice, Rosalba
2014-01-01
In this work we analyse the relationship between in-hospital mortality and a treatment-effectiveness outcome in patients affected by ST-elevation myocardial infarction. The main idea is to carry out a joint modeling of the two outcomes, applying a semiparametric bivariate probit model to data arising from a clinical registry called the STEMI Archive. A realistic quantification of the relationship between outcomes can be problematic for several reasons. First, latent factors associated with hospital organization can affect the treatment efficacy and/or interact with the patient's condition at admission time. Moreover, they can also directly influence the mortality outcome. Such factors are hard to measure, so the use of classical estimation methods will clearly result in inconsistent or biased parameter estimates. Secondly, covariate-outcome relationships can exhibit nonlinear patterns. Provided that proper statistical methods for model fitting in such a framework are available, it is possible to employ a simultaneous estimation approach to account for unobservable confounders. Such a framework can also provide flexible covariate structures and model the whole conditional distribution of the response. PMID:24799953
Segmentation of 3D holographic images using bivariate jointly distributed region snake
NASA Astrophysics Data System (ADS)
Daneshpanah, Mehdi; Javidi, Bahram
2006-06-01
In this paper, we describe the bivariate jointly distributed region snake method in segmentation of microorganisms in Single Exposure On- Line (SEOL) holographic microscopy images. 3D images of the microorganisms are digitally reconstructed and numerically focused from any arbitrary depth from a single recorded digital hologram without mechanical scanning. Living organisms are non-rigid and they vary in shape and size. Moreover, they often do not exhibit clear edges in digitally reconstructed SEOL holographic images. Thus, conventional segmentation techniques based on the edge map may fail to segment these images. However, SEOL holographic microscopy provides both magnitude and phase information of the sample specimen, which could be helpful in the segmentation process. In this paper, we present a statistical framework based on the joint probability distribution of magnitude and phase information of SEOL holographic microscopy images and maximum likelihood estimation of image probability density function parameters. An optimization criterion is computed by maximizing the likelihood function of the target support hypothesis. In addition, a simple stochastic algorithm has been adapted for carrying out the optimization, while several boosting techniques have been employed to enhance its performance. Finally, the proposed method is applied for segmentation of biological microorganisms in SEOL holographic images and the experimental results are presented.
A relativistic continuum Hartree-Bogoliubov theory description of N = 3 isotones
NASA Astrophysics Data System (ADS)
Han, Rui; Ji, Juan-Xia; Li, Jia-Xing
2011-09-01
The ground-state properties of N = 3 isotones and mirror nuclei have been investigated in the relativistic continuum Hartree-Bogoliubov theory with the NLSH effective interaction. Pairing correlations are taken into account by a density-dependent δ-force. The calculations show that the proton density distributions of 8B and 9C have a long tail, that the core of 9C has a tendency to increase, and that the paired-off valence protons make the halo distribution shrink. The cross sections for the 8B(9C)+12C reaction, which are consistent with the experimental data, are calculated using the Glauber model. On the whole, we conclude that 8B is a one-proton halo nucleus and 9C is a two-proton halo nucleus.
Study of oxygen isotopes and N=8 isotones with an extended cluster-orbital shell model
NASA Astrophysics Data System (ADS)
Masui, H.; Katō, K.; Ikeda, K.
2006-03-01
We attempt to obtain a unified description of the bound and unbound states of multivalence nucleons of a core system in the framework of the cluster-orbital shell model (COSM). In this framework, the interaction between the core and a valence nucleon (the core-N interaction) is treated microscopically, and the changes in both the core structure and the core-N interaction are discussed on the same basis. Furthermore, the center-of-mass motion of every nucleon is completely eliminated, and higher shell configurations, including unbound continuum components, are appropriately taken into account by applying a stochastic variational approach. To examine the reliability of this approach and to discuss how the dynamics of the core reflects to the total system, we study oxygen isotopes and N=8 isotones, which are described by O16 + Xn and O16 + Xp models, respectively.
Kuriyama, H; Oshima, K; Sakamoto, Y
1971-08-01
The membrane properties of the longitudinal smooth muscle of the guinea-pig portal vein were investigated under various experimental conditions.1. In isotonic Krebs solution, the membrane potential (-48.7 mV), the maximum rates of rise and fall of the spike (4.6 and 2.3 V/sec respectively), the space constant (0.61 mm), the conduction velocity of excitation (0.97 cm/sec) and the time constant of the foot of the propagated spike (18.4 msec) were measured.2. The various parameters of the muscle membrane in the isotonic solution were compared with those in the hypertonic solution prepared by the addition of solid sucrose (twice the normal tonicity).3. When the muscles were perfused with hypertonic solution, marked depolarization of the membrane and increased membrane resistance occurred. These were probably due to reduction of the K permeability, increased internal resistance of the muscle and shrinkage of the muscle fibre.4. The membrane potential in isotonic and hypertonic solutions was analysed into two components, i.e. the metabolic (electrogenic Na-pump) and the ionic (electrical diffusion potential) component in the various environmental conditions.(a) In isotonic and hypertonic solutions, the membrane was depolarized by lowering the temperature or by removal of K ion from the solutions. When the tissues were rewarmed or on readdition of K ion, the membrane was markedly hyperpolarized. These hyperpolarizations of the membrane were suppressed by treatment with ouabain (10(-5) g/ml.), by warming to only 20 degrees C and by K-free solution.(b) The relationships between the membrane potential and the [K](o) in isotonic Krebs, in the hypertonic (sucrose) Krebs, in the Na-free (Tris) Krebs and in the Cl-deficient (C(6)H(5)SO(3)) Krebs were observed. The maximum slopes of the membrane depolarization against tenfold changes of [K](o) were much lower than that expected if it behaved like a K electrode.(c) In Na-free (Tris) solution, the membrane was not depolarized in
Nucleon-pair states of even-even N =82 isotones
NASA Astrophysics Data System (ADS)
Cheng, Y. Y.; Zhao, Y. M.; Arima, A.
2016-08-01
In this paper we study low-lying states of five N =82 isotones, 134Te, 136Xe, 138Ba, 140Ce and 142Nd, within the framework of the nucleon-pair approximation (NPA). For the low-lying yrast states of 136Xe and 138Ba, we calculate the overlaps between the wave functions obtained in the full shell-model (SM) space and those obtained in the truncated NPA space, and find that most of these overlaps are very close to 1. Very interestingly and surprisingly, for most of these yrast states, the SM wave functions are found to be well approximated by one-dimensional, optimized pair basis states, which indicates a simple picture of "nucleon-pair states". The positive-parity yrast states with spin J >6 in these nuclei, as well as the 82+ state, are found to be well described by breaking one or two S pair(s) of the 61+ or 62+ state (low-lying, seniority-two, spin-maximum, and positive-parity); similarly, negative-parity yrast states with spin J >9 are well represented by breaking one or two S pair(s) of the 91- state (low-lying, seniority-two, spin-maximum, and negative-parity). It is shown that the low-lying negative-parity yrast states of 136Xe and 138Ba are reasonably described to be one-octupole-phonon excited states. The evolution of the 61+ and 62+ states for the five isotones are also systematically investigated.
Asymptotics of bivariate generating functions with algebraic singularities
NASA Astrophysics Data System (ADS)
Greenwood, Torin
Flajolet and Odlyzko (1990) derived asymptotic formulae for the coefficients of a class of univariate generating functions with algebraic singularities. Gao and Richmond (1992) and Hwang (1996, 1998) extended these results to classes of multivariate generating functions, in both cases by reducing to the univariate case. Pemantle and Wilson (2013) outlined new multivariate analytic techniques and used them to analyze the coefficients of rational generating functions. After overviewing these methods, we use them to find asymptotic formulae for the coefficients of a broad class of bivariate generating functions with algebraic singularities. Beginning with the Cauchy integral formula, we explicitly deform the contour of integration so that it hugs a set of critical points. The asymptotic contribution to the integral comes from analyzing the integrand near these points, leading to explicit asymptotic formulae. Next, we use this formula to analyze an example from current research. In the following chapter, we apply multivariate analytic techniques to quantum walks. Bressler and Pemantle (2007) found a (d + 1)-dimensional rational generating function whose coefficients described the amplitude of a particle at a position in the integer lattice after n steps. Here, the minimal critical points form a curve on the (d + 1)-dimensional unit torus. We find asymptotic formulae for the amplitude of a particle in a given position, normalized by the number of steps n, as n approaches infinity. Each critical point contributes to the asymptotics for a specific normalized position. Using Groebner bases in Maple again, we compute the explicit locations of peak amplitudes. In a scaling window of size √n near the peaks, each amplitude is asymptotic to an Airy function.
Causal networks clarify productivity-richness interrelations, bivariate plots do not
Grace, James B.; Adler, Peter B.; Harpole, W. Stanley; Borer, Elizabeth T.; Seabloom, Eric W.
2014-01-01
We urge ecologists to consider productivity–richness relationships through the lens of causal networks to advance our understanding beyond bivariate analysis. Further, we emphasize that models based on a causal network conceptualization can also provide more meaningful guidance for conservation management than can a bivariate perspective. Measuring only two variables does not permit the evaluation of complex ideas nor resolve debates about underlying mechanisms.
Description of the evolution of mixed-symmetry states in the N = 78 isotonic chain with IBM2
NASA Astrophysics Data System (ADS)
Zhang, Da-Li; Yuan, Shu-Qing; Ding, Bin-Gang
2015-07-01
The characteristics of the lowest mixed-symmetry states 2+ms and 1+ms for 132Xe, 134Ba and 136Ce in the even-even N = 78 isotones are investigated within the framework of the IBM2 model. The lowest mixed-symmetry state 2+ms levels for both a single isolated state in 132Xe and 136Ce and a fragmented state in 134Ba are reproduced by the predictions. The agreement between the IBM2 calculation and the experimental values is good for the B(E2) and B(M1) transition probabilities both quantitatively and qualitatively. The predicted summed B(M1) strength follows the experimental data, remaining nearly constant as a function of proton number along the chain of the N = 78 isotones. Supported by National Natural Science Foundation of China (11475062) and Natural Science Foundation of Zhejiang Province, China (KY6100135)
Experimental study of β and β -n decay of the neutron-rich N =54 isotone 87As
NASA Astrophysics Data System (ADS)
Korgul, A.; Rykaczewski, K. P.; Grzywacz, R.; Bingham, C. R.; Brewer, N. T.; Ciemny, A. A.; Gross, C. J.; Jost, C.; Karny, M.; Madurga, M.; Mazzocchi, C.; Mendez, A. J.; Miernik, K.; Miller, D.; Padgett, S.; Paulauskas, S. V.; Stracener, D. W.; Wolińska-Cichocka, M.
2015-11-01
The β-decay properties of neutron-rich 87As produced in the proton-induced fission of 238U were studied at the Holifield Radioactive Ion Beam Facility at Oak Ridge National Laboratory. The low-energy excited states in N =53 87Se and N =52 86Se were identified through β-γ and β-delayed neutron-γ decay of 87As, respectively. The experimental systematics of low-energy levels of the N =53 isotones 87Se (Z =34) and 85Ge (Z =32), along with an analysis of shell-model calculations, allow us to discuss the main features of the excited states expected for the next N =53 isotone, 83Zn.
Gironés-Vilaplana, Amadeo; Villaño, Débora; Moreno, Diego A; García-Viguera, Cristina
2013-11-01
The aim of the study was to design new isotonic drinks with lemon juice and berries: maqui [Aristotelia chilensis (Molina) Stuntz], açaí (Euterpe oleracea Mart.) and blackthorn (Prunus spinosa L.), following on from previous research. Quality parameters - including colour (CIELab parameters), minerals, phytochemical identification and quantification by high-performance liquid chromatography with diode array detector, total phenolic content by the Folin-Ciocalteu reagent, the antioxidant capacity (ABTS(+), DPPH• and [Formula: see text] assays) and biological activities (in vitro alpha-glucosidase and lipase inhibitory effects) - were tested in the samples and compared to commercially available isotonic drinks. The new isotonic blends with lemon and anthocyanin-rich berries showed an attractive colour, especially the maqui samples, which is essential for consumer acceptance. Significantly higher antioxidant and biological effects were determined in the new blends, in comparison with the commercial isotonic beverages. PMID:23815554
A rank test for bivariate time-to-event outcomes when one event is a surrogate.
Shaw, Pamela A; Fay, Michael P
2016-08-30
In many clinical settings, improving patient survival is of interest but a practical surrogate, such as time to disease progression, is instead used as a clinical trial's primary endpoint. A time-to-first endpoint (e.g., death or disease progression) is commonly analyzed but may not be adequate to summarize patient outcomes if a subsequent event contains important additional information. We consider a surrogate outcome very generally as one correlated with the true endpoint of interest. Settings of interest include those where the surrogate indicates a beneficial outcome so that the usual time-to-first endpoint of death or surrogate event is nonsensical. We present a new two-sample test for bivariate, interval-censored time-to-event data, where one endpoint is a surrogate for the second, less frequently observed endpoint of true interest. This test examines whether patient groups have equal clinical severity. If the true endpoint rarely occurs, the proposed test acts like a weighted logrank test on the surrogate; if it occurs for most individuals, then our test acts like a weighted logrank test on the true endpoint. If the surrogate is a useful statistical surrogate, our test can have better power than tests based on the surrogate that naively handles the true endpoint. In settings where the surrogate is not valid (treatment affects the surrogate but not the true endpoint), our test incorporates the information regarding the lack of treatment effect from the observed true endpoints and hence is expected to have a dampened treatment effect compared with tests based on the surrogate alone. Published 2016. This article is a U.S. Government work and is in the public domain in the USA. PMID:27059817
Role of atrial natriuretic peptide in systemic responses to acute isotonic volume expansion
NASA Technical Reports Server (NTRS)
Watenpaugh, Donald E.; Yancy, Clyde W.; Buckey, Jay C.; Lane, Lynda D.; Hargens, Alan R.; Blomqvist, C. G.
1992-01-01
A hypothesis is proposed that a temporal relationship exists between increases in cardiac filling pressure and plasma atrial natriuretic peptide (ANP) concentration, and also between ANP elevation and vasodilation, fluid movement from plasma to interstitium, and increased urine volume (UV). To test the hypothesis, 30 ml/kg isotonic saline were infused in supine male subjects over 24 min and responses were monitored for 3 h postinfusion. Results show that at end infusion, right atrial pressure (RAP), heart rate and plasma volume exhibited peak increases of 146, 23, and 27 percent, respectively. Mean plasma ANP and UV peaked (45 and 390 percent, respectively) at 30 min postinfusion. Most cardiovascular variables had returned toward control levels by 1 h postinfusion, and net reabsorption of extravascular fluid ensued. It is concluded that since ANP was not significantly increased until 30 min postinfusion, factors other than ANP initiate responses to intravascular fluid loading. These factors include increased vascular pressures, baroreceptor-mediated vasodilation, and hemodilution of plasma proteins. ANP is suggested to mediate, in part, the renal response to saline infusion.
A Model of Peritubular Capillary Control of Isotonic Fluid Reabsorption by the Renal Proximal Tubule
Deen, W. M.; Robertson, C. R.; Brenner, B. M.
1973-01-01
A mathematical model of peritubular transcapillary fluid exchange has been developed to investigate the role of the peritubular environment in the regulation of net isotonic fluid transport across the mammalian renal proximal tubule. The model, derived from conservation of mass and the Starling transcapillary driving forces, has been used to examine the quantitative effects on proximal reabsorption of changes in efferent arteriolar protein concentration and plasma flow rate. Under normal physiological conditions, relatively small perturbations in protein concentration are predicted to influence reabsorption more than even large variations in plasma flow, a prediction in close accord with recent experimental observations in the rat and dog. Changes either in protein concentration or plasma flow have their most pronounced effects when the opposing transcapillary hydrostatic and osmotic pressure differences are closest to equilibrium. Comparison of these theoretical results with variations in reabsorption observed in micropuncture studies makes it possible to place upper and lower bounds on the difference between interstitial oncotic and hydrostatic pressures in the renal cortex of the rat. PMID:4696761
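The Starling driving forces at the heart of the model combine as Jv = Kf[(Pc − Pi) − σ(πc − πi)]. A one-line sketch, with illustrative numbers rather than the model's fitted values:

```python
def starling_flux(Kf, Pc, Pi, pi_c, pi_i, sigma=1.0):
    """Net transcapillary fluid flux (Starling equation):
    Jv = Kf * [(Pc - Pi) - sigma * (pi_c - pi_i)], where Pc/Pi are
    capillary/interstitial hydrostatic pressures, pi_c/pi_i the oncotic
    pressures, sigma the reflection coefficient, and Kf the filtration
    coefficient. Jv < 0 means net uptake into the peritubular capillary,
    i.e., reabsorption of tubular fluid."""
    return Kf * ((Pc - Pi) - sigma * (pi_c - pi_i))
```

The model's key prediction follows directly from the bracketed balance: near equilibrium of the hydrostatic and osmotic differences, a small change in the efferent protein concentration (entering through pi_c) swings Jv proportionally more than a comparable change in plasma flow.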
Nomura, Toshiyuki; Tani, Shuji; Yamamoto, Makoto; Nakagawa, Takumi; Toyoda, Shunsuke; Fujisawa, Eri; Yasui, Akiko; Konishi, Yasuhiro
2016-04-01
The effects of the surface physicochemical properties of functionalized polystyrene latex (PSL) nanoparticles (NPs) and of the model filamentous fungi Aspergillus oryzae and Aspergillus nidulans, cultivated in different environments (aqueous and atmospheric), on colloidal behavior and cytotoxicity were investigated in different isotonic solutions (154 mM NaCl and 292 mM sucrose). When the liquid-cultivated fungal cells were exposed to positively charged PSL NPs in 154 mM NaCl solution, the NPs were taken into A. oryzae, but not A. nidulans. Atomic force microscopy revealed that uptake of NPs occurred more readily through the cell wall of A. oryzae because of its relatively softer cell wall compared with that of A. nidulans. In contrast, the positively charged PSL NPs entirely covered the liquid-cultivated fungal cell surfaces and induced cell death in 292 mM sucrose solution, because of the stronger electrostatic attractive force between the cells and NPs compared with that in 154 mM NaCl. When the agar-cultivated fungal cells were exposed to the positively charged PSL NPs, neither fungus took the NPs inside the cells. Contact angle measurement revealed that the hydrophobin on the agar-cultivated cell surfaces inhibited the uptake of NPs because of the relatively more hydrophobic cell surface compared with the liquid-cultivated cells. PMID:26855210
A Review of Classification Techniques of EMG Signals during Isotonic and Isometric Contractions.
Nazmi, Nurhazimah; Abdul Rahman, Mohd Azizi; Yamamoto, Shin-Ichiroh; Ahmad, Siti Anom; Zamzuri, Hairi; Mazlan, Saiful Amri
2016-01-01
In recent years, there has been major interest in the exposure to physical therapy during rehabilitation. Several publications have demonstrated its usefulness in clinical/medical and human machine interface (HMI) applications. An automated system will guide the user to perform the training during rehabilitation independently. Advances in engineering have extended electromyography (EMG) beyond the traditional diagnostic applications to also include applications in diverse areas such as movement analysis. This paper gives an overview of the numerous methods available to recognize motion patterns of EMG signals for both isotonic and isometric contractions. Various signal analysis methods are compared by illustrating their applicability in real-time settings. This paper will be of interest to researchers who would like to select the most appropriate methodology in classifying motion patterns, especially during different types of contractions. For feature extraction, the probability density function (PDF) of EMG signals will be the main interest of this study. Following that, a brief explanation of the different methods for pre-processing, feature extraction and classifying EMG signals will be compared in terms of their performance. The crux of this paper is to review the most recent developments and research studies related to the issues mentioned above. PMID:27548165
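Two of the spectral parameters that recur throughout this literature, mean frequency (MNF) and median frequency (MDF), can be computed from a Welch power spectrum. This is a common textbook construction, not tied to any single paper in the review:

```python
import numpy as np
from scipy.signal import welch

def emg_spectral_features(x, fs):
    """Mean frequency (MNF) and median frequency (MDF) of an EMG signal.
    MNF is the power-weighted average frequency; MDF splits the spectral
    power into two equal halves. Both shift downward with muscle fatigue."""
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    mnf = np.sum(f * pxx) / np.sum(pxx)
    cdf = np.cumsum(pxx) / np.sum(pxx)          # normalized cumulative power
    mdf = f[np.searchsorted(cdf, 0.5)]          # first bin reaching half power
    return mnf, mdf
```

Such features are typically computed over sliding windows and fed to the classifiers the review compares.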
Charge density distributions and charge form factors of the N=82 and N=126 isotonic nuclei
Wang Zaijun; Fan Ying; Ren Zhongzhou
2006-01-15
Charge form factors for the N=82 and N=126 isotonic nuclei are calculated with the relativistic eikonal approximation, in which the charge density distributions are taken from relativistic mean-field theory. The variation of the charge form factors with proton number is discussed in detail. It is found that the most sensitive parts of the charge form factors are those around the minima and maxima. For an increasing proton number, the charge form factors near the extrema shift upward. As protons are added and begin to occupy a new shell, the minima and maxima of the charge form factors can also shift significantly inward. The results can be useful for studying the behavior of valence-proton wave functions in nuclei that can be treated as a core plus proton(s), and thus the proton-halo phenomenon. In addition, the results can also be useful for future electron-unstable nucleus scattering experiments and provide tests of the reliability of relativistic mean-field theory for unstable nuclei.
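For readers unfamiliar with charge form factors, the sketch below evaluates the plane-wave Born form factor of a two-parameter Fermi charge density by numerical quadrature. This is only a hedged stand-in: the paper uses relativistic mean-field densities and the relativistic eikonal approximation (which also treats Coulomb distortion), and the parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

def fermi_form_factor(q_fm, c=5.5, z=0.55):
    """Plane-wave Born charge form factor F(q) for a two-parameter Fermi
    density rho(r) = rho0 / (1 + exp((r - c)/z)), normalized so F(0) = 1.
    q in fm^-1, c and z in fm (illustrative values, not from the paper)."""
    r = np.linspace(1e-6, 20.0, 4000)
    dr = r[1] - r[0]
    rho = 1.0 / (1.0 + np.exp((r - c) / z))
    q = np.atleast_1d(np.asarray(q_fm, dtype=float))
    # spherical Bessel j0(qr) = sin(qr)/(qr); np.sinc(x) = sin(pi x)/(pi x)
    j0 = np.sinc(np.outer(q, r) / np.pi)
    num = (j0 * rho * r**2).sum(axis=1) * dr
    return num / ((rho * r**2).sum() * dr)
```

The diffraction minima discussed in the abstract appear as the zeros of F(q); shifting the density shifts those zeros, which is why the extrema are the most sensitive regions.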
Hasstedt, Sandra J; Hanis, Craig L; Elbein, Steven C
2010-01-01
Dyslipidemia frequently co-occurs with type 2 diabetes (T2D) and with obesity. To investigate whether the co-occurrence is due to pleiotropic genes, we performed univariate linkage analysis of lipid levels and bivariate linkage analysis of pairs of lipid levels and of lipid levels paired with T2D, body mass index (BMI), and waist-hip ratio (WHR) in the African American subset of the Genetics of NIDDM (GENNID) sample. We obtained significant evidence for a pleiotropic low density lipoprotein cholesterol (LDL-C)–T2D locus on chromosome 1 at 16–19 megabases (MB) (bivariate lod = 4.41), as well as a non-pleiotropic triglyceride (TG) locus on chromosome 20 at 28–34 MB (univariate lod = 3.57). In addition, near-significant evidence supported TG–T2D loci on chromosome 2 at 81–101 MB (bivariate lod = 4.23) and 232–239 MB (bivariate lod = 4.27) and on chromosome 7 at 147–151 MB (univariate lod = 3.08 for TG with P = 0.041 supporting pleiotropy with T2D), as well as an LDL-C–BMI locus on chromosome 3 at 137–147 MB (bivariate lod score = 4.25). These findings provide evidence that at least some of the co-occurrence of dyslipidemia with T2D and obesity is due to common underlying genes. PMID:20597901
Del Prete, Zaccaria; Musarò, Antonio; Rizzuto, Emanuele
2008-07-01
Contractile properties of fast-twitch (EDL) and slow-twitch (soleus) skeletal muscles were measured in MLC/mIgf-1 transgenic and wild-type mice. MLC/mIgf-1 mice express the local factor mIgf-1 under the transcriptional control of MLC promoter, selectively activated in fast-twitch muscle fibers. Isolated muscles were studied in vitro in both isometric and isotonic conditions. We used a rapid "ad hoc" testing protocol that measured, in a single procedure, contraction time, tetanic force, Hill's (F-v) curve, power curve and isotonic muscle fatigue. Transgenic soleus muscles did not differ from wild-type with regard to any measured variable. In contrast, transgenic EDL muscles displayed a hypertrophic phenotype, with a mass increase of 29.2% compared to wild-type. Absolute tetanic force increased by 21.5% and absolute maximum power by 34.1%. However, when normalized to muscle cross-sectional area and mass, specific force and normalized power were the same in transgenic and wild-type EDL muscles, revealing that mIgf-1 expression induces a functional hypertrophy without altering fibrotic tissue accumulation. Isotonic fatigue behavior did not differ between transgenic and wild-type muscles, suggesting that the ability of mIgf-1 transgenic muscle to generate a considerable higher absolute power did not affect its resistance to fatigue. PMID:18415017
Hahn, R G; Isacson, M Nyberg; Fagerström, T; Rosvall, J; Nyman, C R
2016-02-01
Isotonic saline is a widely used infusion fluid, although the associated chloride load may cause metabolic acidosis and impair kidney function in young, healthy volunteers. We wished to examine whether these effects also occur in the elderly, and conducted a crossover study in 13 men with a mean age of 73 years (range 66-84), who each received intravenous infusions of 1.5 l of Ringer's acetate and of isotonic saline. Isotonic saline induced mild changes in plasma sodium (mean +1.5 mmol·l⁻¹), plasma chloride (+3 mmol·l⁻¹) and standard bicarbonate (-2 mmol·l⁻¹). Three hours after starting the infusions, 68% of the Ringer's acetate and 30% of the infused saline had been excreted (p < 0.01). The glomerular filtration rate increased in response to both fluids, but more after the Ringer's acetate (p < 0.03). Pre-infusion fluid retention, as evidenced by high urinary osmolality (> 700 mOsmol·kg⁻¹) and/or creatinine (> 7 mmol·l⁻¹), was a strong factor governing the responses to both fluid loads. PMID:26669730
Interpreting Bivariate Regression Coefficients: Going beyond the Average
ERIC Educational Resources Information Center
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
Investigating NARCCAP Precipitation Extremes via Bivariate Extreme Value Theory (Invited)
NASA Astrophysics Data System (ADS)
Weller, G. B.; Cooley, D. S.; Sain, S. R.; Bukovsky, M. S.; Mearns, L. O.
2013-12-01
We introduce methodology from statistical extreme value theory to examine the ability of reanalysis-driven regional climate models to simulate past daily precipitation extremes. Going beyond a comparison of summary statistics such as 20-year return values, we study whether the most extreme precipitation events produced by climate model simulations exhibit correspondence to the most extreme events seen in observational records. The extent of this correspondence is formulated via the statistical concept of tail dependence. We examine several case studies of extreme precipitation events simulated by the six models of the North American Regional Climate Change Assessment Program (NARCCAP) driven by NCEP reanalysis. It is found that the NARCCAP models generally reproduce daily winter precipitation extremes along the Pacific coast quite well; in contrast, simulation of past daily summer precipitation extremes in a central US region is poor. Some differences in the strength of extremal correspondence are seen in the central region between models which employ spectral nudging and those which do not. We demonstrate how these techniques may be used to draw a link between extreme precipitation events and large-scale atmospheric drivers, as well as to downscale extreme precipitation simulated by a future run of a regional climate model. Specifically, we examine potential future changes in the nature of extreme precipitation along the Pacific coast produced by the pineapple express (PE) phenomenon. A link between extreme precipitation events and a "PE Index" derived from North Pacific sea-surface pressure fields is found. This link is used to study PE-influenced extreme precipitation produced by a future-scenario climate model run.
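The tail-dependence idea can be illustrated with a minimal empirical estimator: chi(u) = P(F_Y(Y) > u | F_X(X) > u), computed on rank-transformed data. This sketch is our own simplification, not the bivariate extreme value model fitted in the study:

```python
import numpy as np

def empirical_chi(x, y, u=0.95):
    """Empirical upper-tail dependence coefficient
    chi(u) = P(F_Y(Y) > u | F_X(X) > u), computed on rank-transformed
    (pseudo-uniform) margins. chi(u) near 1 indicates that the two series
    produce their largest values together; chi(u) -> 0 as u -> 1 under
    tail independence."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    ux = np.argsort(np.argsort(x)) / (n - 1.0)   # pseudo-uniform ranks in [0, 1]
    uy = np.argsort(np.argsort(y)) / (n - 1.0)
    return np.mean((ux > u) & (uy > u)) / (1.0 - u)
```

Applied to observed versus model-simulated daily precipitation at matched grid cells, a high chi(u) would indicate that the model reproduces the specific extreme events, not merely their marginal frequency.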
Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data
ERIC Educational Resources Information Center
Xi, Nuo; Browne, Michael W.
2014-01-01
A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…
BIVARIATE MODELLING OF CLUSTERED CONTINUOUS AND ORDERED CATEGORICAL OUTCOMES. (R824757)
Simultaneous observation of continuous and ordered categorical outcomes for each subject is common in biomedical research but multivariate analysis of the data is complicated by the multiple data types. Here we construct a model for the joint distribution of bivariate continuous ...
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.
1980-01-01
Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
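A modern equivalent of the rectangular-probability routine can be sketched with SciPy's bivariate normal CDF and inclusion-exclusion; the function name and argument conventions below are ours, not those of the original program:

```python
from scipy.stats import multivariate_normal

def rect_prob(mu, sd, rho, a, b):
    """P(a[0] < X <= b[0], a[1] < Y <= b[1]) for a bivariate normal with
    means mu, standard deviations sd and correlation rho, computed by
    inclusion-exclusion on the joint CDF F."""
    cov = [[sd[0] ** 2, rho * sd[0] * sd[1]],
           [rho * sd[0] * sd[1], sd[1] ** 2]]
    bvn = multivariate_normal(mean=list(mu), cov=cov)
    F = lambda x, y: bvn.cdf([x, y])
    return F(b[0], b[1]) - F(a[0], b[1]) - F(b[0], a[1]) + F(a[0], a[1])
```

With rho = 0 the result factors into the product of two univariate probabilities, which makes a convenient sanity check.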
NASA Technical Reports Server (NTRS)
Smith, O. E.; Adelfang, S. I.
1981-01-01
A model of the largest gust amplitude and gust length is presented which uses the properties of the bivariate gamma distribution. The gust amplitude and length are strongly dependent on the filter function; the amplitude increases with altitude and is larger in winter than in summer.
A Method for Determining If Unequal Shape Parameters are Necessary in a Bivariate Gamma Distribution
NASA Technical Reports Server (NTRS)
Tubbs, J. D.
1983-01-01
A procedure to aid in deciding between four and five parameters in a Jensen-type bivariate gamma distribution is presented. It is based upon the CDF of the ratio of correlated gamma-distributed variates. The criteria are posed in a test-of-hypothesis setting and results are presented.
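For intuition, a correlated bivariate gamma pair can be simulated by trivariate reduction (a shared gamma component), with unequal shape parameters arising whenever the idiosyncratic shapes differ. This construction is a hedged stand-in for illustration and is not necessarily the exact Jensen form analyzed in the report:

```python
import numpy as np

def correlated_gamma(n, a0, a1, a2, scale1=1.0, scale2=1.0, rng=None):
    """Draw n pairs from a correlated bivariate gamma via trivariate
    reduction: X = G0 + G1, Y = G0 + G2 with independent gamma parts
    of shapes a0, a1, a2. Marginals are Gamma(a0 + a1) and Gamma(a0 + a2),
    so unequal marginal shapes correspond to a1 != a2; the correlation is
    a0 / sqrt((a0 + a1) * (a0 + a2))."""
    if rng is None:
        rng = np.random.default_rng(0)
    g0 = rng.gamma(a0, 1.0, n)
    x = (g0 + rng.gamma(a1, 1.0, n)) * scale1
    y = (g0 + rng.gamma(a2, 1.0, n)) * scale2
    return x, y
```

Simulating under equal and unequal shapes is one cheap way to see how much the extra (fifth) parameter changes the joint behavior before committing to the formal hypothesis test.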
NASA Technical Reports Server (NTRS)
Bernauer, E. M.; Walby, W. F.; Ertl, A. C.; Dempster, P. T.; Bond, M.; Greenleaf, J. E.
1994-01-01
To determine if daily isotonic exercise or isokinetic exercise training coupled with daily leg proprioceptive training would influence leg proprioceptive tracking responses during bed rest (BR), 19 men (36 +/- SD 4 years, 178 +/- 7 cm, 76.8 +/- 7.8 kg) were allocated into a no-exercise (NOE) training control group (n = 5), and isotonic exercise (ITE, n = 7) and isokinetic exercise (IKE, n = 7) training groups. Exercise training was conducted during BR for two 30-min periods per day, 5 d per week. Only the IKE group performed proprioceptive training using a new isokinetic procedure with each lower extremity for 2.5 min before and after the daily exercise training sessions; proprioceptive testing occurred weekly for all groups. There were no significant differences in proprioceptive tracking scores, expressed as a percentage of the perfect score of 100, in the pre-BR ambulatory control period between the three groups. Knee extension and flexion tracking responses were unchanged with NOE during BR, but were significantly greater (*p < 0.05) at the end of BR in both exercise groups when compared with NOE responses (extension: NOE 80.7 +/- 0.7%, ITE 82.9* +/- 0.6%, IKE 86.5* +/- 0.7%; flexion: NOE 77.6 +/- 1.5%, ITE 80.0 +/- 0.8% (NS), IKE 83.6* +/- 0.8%). Although proprioceptive tracking was unchanged during BR with NOE, both isotonic exercise training (without additional proprioceptive training) and especially isokinetic exercise training combined with daily proprioceptive training significantly improved knee proprioceptive tracking responses after 30 d of BR.
Eskelinen, S; Mela, M
1984-12-01
The osmotic behaviour of erythrocytes under the influence of lysophosphatidylcholine (LPC) was investigated at temperatures of +4 degrees C and 20 degrees C by allowing them either to swell rapidly in hypotonic media in the presence of LPC or to swell gradually at first and then interact with LPC. Prelytic potassium release, the degree of hemolysis and the cell volume under various osmotic conditions were measured, together with the 'returning volumes', i.e. the volumes in an isotonic solution to which the cells returned from that in the hypotonic solution. LPC had a hemolyzing effect on erythrocytes in an isotonic medium and in slightly hypotonic media under all the osmotic conditions investigated, and the degree of hemolysis increased with increasing concentrations of LPC or decreasing temperatures, being greater during gradual than during rapid swelling. LPC also produced a prelytic leakage of potassium connected with the decrease in cell volume in an isotonic medium and in 'returning volumes' in all the media and under all the osmotic conditions investigated. The semipermeability of the membrane was preserved in all these cases, however, for osmotic swelling of the erythrocytes was observed, although to a lesser extent than without LPC. During rapid swelling both the curves for the prelytic potassium leakage and the degree of hemolysis were shifted towards more dilute solutions. Since the critical hemolytic volume was not increased, the shift in potassium leakage and hemolysis caused by LPC may be due to increased rigidity in the cell membrane. The curves for both potassium leakage and hemolysis shifted towards more concentrated solutions during gradual swelling, perhaps due to increased membrane fragmentation. PMID:6524395
Uvelius, B
1976-03-01
Isometric and isotonic length-tension relations of longitudinal smooth muscle from rabbit urinary bladder were studied together with muscle cell length and tissue structure as revealed histologically. In vivo strip length at a bladder volume of 10 ml is referred to as L10. The smooth muscle was relaxed by Ca2+-free solution and contracted by K+-high solution with different Ca2+-concentrations. Maximal active force, 12.5+/-0.4 N/cm2 (S.E., n=11), for whole strips was attained at a length of 206+/-4% (S.E., n=5) of L10. Passive tension at this length was about 15% of total tension. After correction for the amount of connective tissue, maximal active tension of pure muscle bundles was 19 N/cm2. Up to about 165% of L10, isometric and isotonic length-tension relations were identical; if the muscle was stretched beyond this, it failed to shorten isotonically to the same length as when contracting from a shorter starting length. This decreased shortening capacity was reversible if the muscle was shortened passively. The extent of shortening against zero load was dependent on the degree of activation, suggesting an internal resistance to shortening. A linear relationship was found between bladder radius and muscle cell length, indicating that no slippage occurs between the cells when the bladder is filled. Mean cell diameter in the nuclear region at L10 was 7.2+/-0.2 μm (S.D., n=10). Mean maximal active tension per cell was calculated to be 2.3×10⁻⁶ N and occurred at a cell length of 655 μm. PMID:818878
High-spin states in the semimagic nucleus 89Y and neutron-core excitations in the N =50 isotones
NASA Astrophysics Data System (ADS)
Li, Z. Q.; Wang, S. Y.; Niu, C. Y.; Qi, B.; Wang, S.; Sun, D. P.; Liu, C.; Xu, C. J.; Liu, L.; Zhang, P.; Wu, X. G.; Li, G. S.; He, C. Y.; Zheng, Y.; Li, C. B.; Yu, B. B.; Hu, S. P.; Yao, S. H.; Cao, X. P.; Wang, J. L.
2016-07-01
The semimagic nucleus 89Y has been investigated using the 82Se(11B,4n) reaction at beam energies of 48 and 52 MeV. More than 24 new transitions have been identified, leading to a considerable extension of the level structures of 89Y. The experimental results are compared with large-basis shell model calculations. They show that cross-shell neutron excitations play a pivotal role in the high-spin level structures of 89Y. The systematic features of neutron-core excitations in the N=50 isotones are also discussed.
Morales, A. I.; Benlliure, J.; Alvarez-Pol, H.; Casarejos, E.; Dragosavac, D.; Perez-Loureiro, D.; Verma, S.; Agramunt, J.; Molina, F.; Rubio, B.; Algora, A.; Alkhomashi, N.; Farrelly, G.; Gelletly, W.; Pietri, S.; Podolyak, Z.; Regan, P. H.; Steer, S. J.; Boutachkov, P.; Caceres, L. S.
2011-07-15
The production cross sections of four N=127 isotones (207Hg, 206Au, 205Pt, and 204Ir) have been measured using (p,n) charge-exchange reactions, induced in collisions of a 208Pb primary beam at 1 A GeV with a Be target. These data allow one to investigate the use of a reaction mechanism to extend the limits of the chart of nuclides toward the important r-process nuclei in the region of the third peak of the elemental abundance distribution.
A new spreadsheet method for the analysis of bivariate flow cytometric data
Tzircotis, George; Thorne, Rick F; Isacke, Clare M
2004-01-01
Background A useful application of flow cytometry is the investigation of cell receptor-ligand interactions. However such analyses are often compromised due to problems interpreting changes in ligand binding where the receptor expression is not constant. Commonly, problems are encountered due to cell treatments resulting in altered receptor expression levels, or when cell lines expressing a transfected receptor with variable expression are being compared. To overcome this limitation we have developed a Microsoft Excel spreadsheet that aims to automatically and effectively simplify flow cytometric data and perform statistical tests in order to provide a clearer graphical representation of results. Results To demonstrate the use and advantages of this new spreadsheet method we have investigated the binding of the transmembrane adhesion receptor CD44 to its ligand hyaluronan. In the first example, phorbol ester treatment of cells results in both increased CD44 expression and increased hyaluronan binding. By applying the spreadsheet method we effectively demonstrate that this increased ligand binding results from receptor activation. In the second example we have compared AKR1 cells transfected either with wild type CD44 (WT CD44) or a mutant with a truncated cytoplasmic domain (CD44-T). These two populations do not have equivalent receptor expression levels but by using the spreadsheet method hyaluronan binding could be compared without the need to generate single cell clones or FACS sorting the cells for matching CD44 expression. By this method it was demonstrated that hyaluronan binding requires a threshold expression of CD44 and that this threshold is higher for CD44-T. However, at high CD44-T expression, binding was equivalent to WT CD44 indicating that the cytoplasmic domain has a role in presenting the receptor at the cell surface in a form required for efficient hyaluronan binding rather than modulating receptor activity. Conclusion Using the attached
Systematics of isomeric configurations in N=77 odd-Z isotones near the proton drip line
Tantawy, M.N.; Danchev, M.; Hartley, D.J.; Mazzocchi, C.; Bingham, C.R.; Grzywacz, R.; Rykaczewski, K.P.; Gross, C.J.; Shapira, D.; Yu, C.-H.; Batchelder, J.C.; Krolas, W.; Fong, D.; Hamilton, J. H.; Li, K.; Ramayya, A. V.; Ginter, T.N.; Stolz, A.; Hagino, K.; Karny, M.
2006-02-15
The systematics of the πh11/2 ⊗ νh11/2 and πh11/2 ⊗ νs1/2 isomeric configurations was studied for the odd-Z N=77 isotones near the proton drip line. The isomeric decays in 140Eu, 142Tb, 144Ho, and 146Tm were measured by means of x-ray, γ-ray, and charged-particle spectroscopy at the Recoil Mass Spectrometer at the Holifield Radioactive Ion Beam Facility (ORNL). The spin and parity of Iπ = 8+ and 5- were deduced for the isomers in 140Eu and 142Tb. New decay schemes were established, and the half-lives of the 8+ isomers were measured to be 302(4) ns for 140m2Eu and 25(1) μs for 142m2Tb. No evidence for the expected 1+ ground state was found in the 144Ho decay data. The proton emission from 146Tm was restudied. Five proton transitions were assigned to two proton-emitting states. The half-lives of 198(3) ms and 68(3) ms and the spin and parity values of Iπ = 10+ and 5- were established for 146mTm and 146gsTm, respectively. For the first time for an odd-odd nucleus, the interpretation of the observed decay properties and structure of the proton-emitting states was made by accounting for deformation and proton and neutron coupling to the core excitations. A complex wave-function structure was obtained, with dominating components of πh11/2 ⊗ νh11/2 for the 10+ isomer and πh11/2 ⊗ νs1/2 for the 5- ground state.
[General pharmacological study of iodixanol, a new non-ionic isotonic contrast medium].
Takasuna, K; Kasai, Y; Kitano, Y; Mori, K; Kobayashi, R; Makino, M; Hagiwara, T; Hirohashi, M; Nomura, M; Algate, D R
1995-10-01
The general pharmacological study of iodixanol, a non-ionic isotonic contrast medium, was conducted. 1) Iodixanol administered intravenously over a dose range of 320 to 3,200 mgI/kg had little or no effect on the general behavior, spontaneous locomotor activity, hexobarbital sleeping time, pain response, electroshock- or pentylenetetrazol-induced convulsion (mouse), EEG or body temperature (rabbit), gastrointestinal propulsion (mouse) or skeletal muscle contraction (rabbit). Iodixanol had no specific interaction with acetylcholine, histamine, serotonin, nicotine, BaCl2 (ileum), methacholine (trachea), isoprenaline (atrium) or oxytocin (pregnant uterus), nor had any effect on spontaneous contractility (atrium and uterus), or transmural electrostimulation-induced contractility (vas deferens) at concentrations of ≤ 3.2×10⁻³ gI/ml in vitro. Iodixanol had no effect on the cardiovascular system of the dog, except that it increased femoral blood flow and respiratory rate at doses of ≥ 1,000 mgI/kg. Iodixanol at 3,200 mgI/kg i.v. reduced urine output with a decrease in Na+ and Cl- excretion, whereas at 320 mgI/kg i.v., it slightly increased urine output (rat). 2) Injections of iodixanol into the cerebroventricular (0.96, 9.6 mgI/mouse and 3.2, 32 mgI/rat), left ventricular (1,920, 6,400 mgI/dog) or coronary artery (640, 1,920 mgI/dog) had no conspicuous effect on the central nervous system or the cardiovascular system, respectively. There was no marked difference among iodixanol, iohexol and iopamidol in this respect. Vascular pain during injection into the femoral artery (300-320 mgI/guinea pig) appeared to be less intense with iodixanol, compared with the other contrast media iohexol and iopamidol. These results suggest that intravenous injection of iodixanol is relatively free from pharmacological activity, and effects of iodixanol on the central nervous system (intracerebroventricular injection) and cardiovascular system (intra-left ventricular and -coronary
Statistical analysis of multivariate atmospheric variables. [cloud cover
NASA Technical Reports Server (NTRS)
Tubbs, J. D.
1979-01-01
Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate to near-normal; (5) test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) test of fit for continuous distributions based upon the generalized minimum chi-square; (7) effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.
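Item (3) in the list, computing conditional bivariate normal parameters, reduces to two closed-form expressions; a minimal sketch (the function name is our own):

```python
import numpy as np

def conditional_bvn(mu, sd, rho, x):
    """Parameters of Y | X = x under a bivariate normal with means mu,
    standard deviations sd and correlation rho:
      mean = mu_y + rho * (sd_y / sd_x) * (x - mu_x)
      sd   = sd_y * sqrt(1 - rho^2)        (does not depend on x)."""
    mean = mu[1] + rho * (sd[1] / sd[0]) * (x - mu[0])
    return mean, sd[1] * np.sqrt(1.0 - rho ** 2)
```

This is the computation underlying conditional cloud-cover prediction in the bivariate case: observe one variable, then read off the shifted mean and shrunken spread of the other.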
Vink, Margaretha A; Berkhof, Johannes; van de Kassteele, Jan; van Boven, Michiel; Bogaards, Johannes A
2016-01-01
Post-vaccine monitoring programs for human papillomavirus (HPV) have been introduced in many countries, but HPV serology is still an underutilized tool, partly owing to the weak antibody response to HPV infection. Changes in antibody levels among non-vaccinated individuals could be employed to monitor herd effects of immunization against HPV vaccine types 16 and 18, but inference requires an appropriate statistical model. The authors developed a four-component bivariate mixture model for jointly estimating vaccine-type seroprevalence from correlated antibody responses against HPV16 and -18 infections. This model takes account of the correlation between HPV16 and -18 antibody concentrations within subjects, caused e.g. by heterogeneity in exposure level and immune response. The model was fitted to HPV16 and -18 antibody concentrations as measured by a multiplex immunoassay in a large serological survey (3,875 females) carried out in the Netherlands in 2006/2007, before the introduction of mass immunization. Parameters were estimated by Bayesian analysis. We used the deviance information criterion for model selection; performance of the preferred model was assessed through simulation. Our analysis uncovered elevated antibody concentrations in doubly as compared to singly seropositive individuals, and a strong clustering of HPV16 and -18 seropositivity, particularly around the age of sexual debut. The bivariate model resulted in a more reliable classification of singly and doubly seropositive individuals than achieved by a combination of two univariate models, and suggested a higher pre-vaccine HPV16 seroprevalence than previously estimated. The bivariate mixture model provides valuable baseline estimates of vaccine-type seroprevalence and may prove useful in seroepidemiologic assessment of the herd effects of HPV vaccination. PMID:27537200
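The flavor of a four-component bivariate mixture can be reproduced with a Gaussian mixture on synthetic two-dimensional "log antibody concentration" data. This is a hedged simplification: the study's component definitions, distributional choices, and Bayesian fitting procedure differ, and every number below is invented:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for (log) HPV16 / HPV18 antibody concentrations with
# four groups: double-negative, HPV16-only, HPV18-only, double-positive.
rng = np.random.default_rng(1)
neg   = rng.multivariate_normal([0, 0], [[0.3, 0.05], [0.05, 0.3]], 700)
pos16 = rng.multivariate_normal([3, 0], [[0.5, 0.10], [0.10, 0.3]], 100)
pos18 = rng.multivariate_normal([0, 3], [[0.3, 0.10], [0.10, 0.5]], 100)
dbl   = rng.multivariate_normal([4, 4], [[0.6, 0.30], [0.30, 0.6]], 100)
data = np.vstack([neg, pos16, pos18, dbl])

# Four-component bivariate mixture; the posterior membership probabilities
# play the role of the serostatus classification in the paper.
gm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gm.fit(data)
post = gm.predict_proba(data)
```

The advantage over two univariate cutoffs is visible in the posterior: points with moderately elevated responses on both axes are pulled toward the double-positive component rather than being classified axis by axis.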
Exploratory Causal Analysis in Bivariate Time Series Data
NASA Astrophysics Data System (ADS)
McCracken, James M.
Many scientific disciplines rely on observational data of systems for which it is difficult (or impossible) to implement controlled experiments, and data analysis techniques are required for identifying causal information and relationships directly from observational data. This need has led to the development of many different time series causality approaches and tools, including transfer entropy, convergent cross-mapping (CCM), and Granger causality statistics. In this thesis, the existing time series causality method of CCM is extended by introducing a new method called pairwise asymmetric inference (PAI). It is found that CCM may provide counter-intuitive causal inferences for simple dynamics with strong intuitive notions of causality, and the CCM causal inference can be a function of physical parameters that are seemingly unrelated to the existence of a driving relationship in the system. For example, a CCM causal inference might alternate between "voltage drives current" and "current drives voltage" as the frequency of the voltage signal is changed in a series circuit with a single resistor and inductor. PAI is introduced to address both of these limitations. Many of the current approaches in the time series causality literature are not computationally straightforward to apply, do not follow directly from assumptions of probabilistic causality, depend on assumed models for the time series generating process, or rely on embedding procedures. A new approach, called causal leaning, is introduced in this work to avoid these issues. The leaning is found to provide causal inferences that agree with intuition for both simple systems and more complicated empirical examples, including space weather data sets. The leaning may provide a clearer interpretation of the results than those from existing time series causality tools. A practicing analyst can explore the literature to find many proposals for identifying drivers and causal connections in time series data.
Volcano clustering determination: Bivariate Gauss vs. Fisher kernels
NASA Astrophysics Data System (ADS)
Cañón-Tapia, Edgardo
2013-05-01
Underlying many studies of volcano clustering is the implicit assumption that vent distribution can be studied by using kernels originally devised for distributions on plane surfaces. Nevertheless, an important change in topology in the volcanic context is related to the distortion that is introduced when attempting to represent features found on the surface of a sphere by projecting them onto a plane. This work explores the extent to which different topologies of the kernel used to study the spatial distribution of vents can introduce significant changes in the obtained density functions. To this end, a planar (Gauss) and a spherical (Fisher) kernel are mutually compared. The role of the smoothing factor in these two kernels is also explored in some detail. The results indicate that the topology of the kernel is not extremely influential, and that either type of kernel can be used to characterize a planar or a spherical distribution with exactly the same detail (provided that a suitable smoothing factor is selected in each case). It is also shown that there is a limitation on the resolution of the Fisher kernel relative to the typical separation between data that can be accurately described, because data sets with separations lower than 500 km are considered as a single cluster using this method. In contrast, the Gauss kernel can provide adequate resolution for vent distributions at a wider range of separations. In addition, this study also shows that the numerical value of the smoothing factor (or bandwidth) of both the Gauss and Fisher kernels has no unique or direct relationship with the relevant separation among data. In order to establish the relevant distance, it is necessary to take into consideration the value of the respective smoothing factor together with a level of statistical significance at which the contributions to the probability density function will be analyzed. Based on such a reference level, it is possible to create a hierarchy of
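A Fisher (von Mises-Fisher) kernel density on the unit sphere can be written in a few lines; the helper names and the concentration value below are our own illustrative choices, not the paper's:

```python
import numpy as np

def fisher_kde(points, query, kappa):
    """Kernel density estimate on the unit sphere using a Fisher
    (von Mises-Fisher) kernel. points: (n, 3) unit vectors (the vents);
    query: (m, 3) unit vectors; kappa: concentration parameter, which
    plays the role of an inverse squared bandwidth (larger = less smoothing)."""
    c = kappa / (4.0 * np.pi * np.sinh(kappa))   # vMF normalizing constant
    dots = query @ points.T                      # cosines of angular distances
    return c * np.exp(kappa * dots).mean(axis=1)

def latlon_to_unit(lat_deg, lon_deg):
    """Convert latitude/longitude in degrees to unit vectors on the sphere."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.column_stack([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])
```

Because the kernel depends only on the dot product (angular distance), no planar projection, and hence none of the projection distortion discussed above, is involved.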
Chen, Zhengjia; Wang, Zhibo; Wang, Haibin; Owonikoko, Taofeek K; Kowalski, Jeanne; Khuri, Fadlo R
2013-01-01
Isotonic Design using Normalized Equivalent Toxicity Score (ID-NETS) is a novel Phase I design that integrates the novel toxicity scoring system originally proposed by Chen et al. [1] and the original Isotonic Design proposed by Leung et al. [2]. ID-NETS has substantially improved the accuracy of maximum tolerated dose (MTD) estimation and trial efficiency in the Phase I clinical trial setting by fully utilizing all toxicities experienced by each patient and treating toxicity response as a quasi-continuous variable instead of a binary indicator of dose limiting toxicity (DLT). To facilitate the incorporation of the ID-NETS method into the design and conduct of Phase I clinical trials, we have designed and developed a user-friendly software package, ID-NETS©™, which has two functions: 1) calculating the recommended dose for the subsequent patient cohort using available completed data; and 2) performing simulations to obtain the operating characteristics of a trial designed with ID-NETS. Currently, ID-NETS©™ v1.0 is available for free download at http://winshipbbisr.emory.edu/IDNETS.html. PMID:23847695
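The isotonic-regression core of such designs can be sketched with weighted PAVA: pool adjacent violators of monotonicity in the per-dose mean toxicity scores, then select the dose whose smoothed estimate is closest to the target. This is a simplified illustration, not the actual ID-NETS algorithm (which uses normalized equivalent toxicity scores and its own dose-finding rules); the toxicity numbers are invented:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def isotonic_mtd(doses, mean_tox, n_per_dose, target=0.33):
    """Weighted isotonic (PAVA) smoothing of per-dose mean toxicity scores;
    the estimated MTD is the dose whose smoothed score is closest to the
    target (argmin breaks ties toward the lower dose)."""
    iso = IsotonicRegression(increasing=True)
    fitted = iso.fit_transform(doses, mean_tox, sample_weight=n_per_dose)
    return doses[int(np.argmin(np.abs(fitted - target)))], fitted

doses = np.array([1.0, 2.0, 3.0, 4.0])
mean_tox = np.array([0.10, 0.25, 0.20, 0.55])   # raw means are non-monotone
n_per_dose = np.array([3, 3, 6, 3])
mtd, fitted = isotonic_mtd(doses, mean_tox, n_per_dose)
```

The smoothing enforces the clinical prior that toxicity does not decrease with dose, which is exactly what raw per-dose means (as at doses 2 and 3 above) can violate in small cohorts.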
NASA Technical Reports Server (NTRS)
Ellis, S.; Kirby, L. C.; Greenleaf, J. E.
1993-01-01
Muscle thickness was measured in 19 Bed-Rested (BR) men (32-42 yr) subjected to IsoTonic (ITE, cycle ergometer) and IsoKinetic (IKE, torque ergometer) lower extremity exercise training, and NO Exercise (NOE) training. Thickness was measured with ultrasonography in the anterior thigh (Rectus Femoris (RF) and Vastus Intermedius (VI)) and in the combined posterior leg (soleus, flexor hallucis longus, and tibialis posterior; S + FHL + TP) muscles. Compared with ambulatory control values, thickness of the (S + FHL + TP) decreased by 9%-12% (p less than 0.05) in all three test groups. The (RF) thickness was unchanged in the two exercise groups, but decreased by 10% (p less than 0.05) in the NOE. The (VI) thickness was unchanged in the ITE group, but decreased by 12%-16% (p less than 0.05) in the IKE and NOE groups. Thus, intensive, alternating, isotonic cycle ergometer exercise training is as effective as intensive, intermittent, isokinetic exercise training for maintaining thicknesses of the rectus femoris and vastus intermedius anterior thigh muscles, but not the posterior leg muscles, during prolonged BR deconditioning.
Neelon, Brian; Anthopolos, Rebecca; Miranda, Marie Lynn
2014-04-01
Motivated by a study examining geographic variation in birth outcomes, we develop a spatial bivariate probit model for the joint analysis of preterm birth and low birth weight. The model uses a hierarchical structure to incorporate individual and areal-level information, as well as spatially dependent random effects for each spatial unit. Because rates of preterm birth and low birth weight are likely to be correlated within geographic regions, we model the spatial random effects via a bivariate conditionally autoregressive prior, which induces regional dependence between the outcomes and provides spatial smoothing and sharing of information across neighboring areas. Under this general framework, one can obtain region-specific joint, conditional, and marginal inferences of interest. We adopt a Bayesian modeling approach and develop a practical Markov chain Monte Carlo computational algorithm that relies primarily on easily sampled Gibbs steps. We illustrate the model using data from the 2007-2008 North Carolina Detailed Birth Record. PMID:22599322
Modeling two-vehicle crash severity by a bivariate generalized ordered probit approach.
Chiou, Yu-Chiun; Hwang, Cherng-Chwan; Chang, Chih-Chin; Fu, Chiang
2013-03-01
This study simultaneously models the crash severity of both parties in two-vehicle accidents at signalized intersections in Taipei City, Taiwan, using a novel bivariate generalized ordered probit (BGOP) model. Estimation results show that the BGOP model performs better than the conventional bivariate ordered probit (BOP) model in terms of goodness-of-fit indices and prediction accuracy and provides a better approach to identifying the factors contributing to different severity levels. According to the estimated parameters in the latent propensity functions and the elasticity effects, several key risk factors are identified: driver type (age > 65), vehicle type (motorcycle), violation type (alcohol use), intersection type (three-leg and multiple-leg), collision type (rear-ended), and lighting conditions (night and night without illumination). Corresponding countermeasures for these risk factors are proposed. PMID:23246710
Reprint of "Modeling two-vehicle crash severity by a bivariate generalized ordered probit approach".
Chiou, Yu-Chiun; Hwang, Cherng-Chwan; Chang, Chih-Chin; Fu, Chiang
2013-12-01
This study simultaneously models the crash severity of both parties in two-vehicle accidents at signalized intersections in Taipei City, Taiwan, using a novel bivariate generalized ordered probit (BGOP) model. Estimation results show that the BGOP model performs better than the conventional bivariate ordered probit (BOP) model in terms of goodness-of-fit indices and prediction accuracy and provides a better approach to identifying the factors contributing to different severity levels. According to the estimated parameters in the latent propensity functions and the elasticity effects, several key risk factors are identified: driver type (age > 65), vehicle type (motorcycle), violation type (alcohol use), intersection type (three-leg and multiple-leg), collision type (rear-ended), and lighting conditions (night and night without illumination). Corresponding countermeasures for these risk factors are proposed. PMID:23915470
Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert
2016-08-01
Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments have provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," in Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods are dependent on several input parameters such as embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physical Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of the selection of appropriate input parameters. PMID:26372426
NASA Astrophysics Data System (ADS)
Huang, Xuan; An, Haizhong; Gao, Xiangyun; Hao, Xiaoqing; Liu, Pengpeng
2015-06-01
This study introduces an approach to studying the multiscale transmission characteristics of the correlation modes between bivariate time series. The correlation between the bivariate time series fluctuates over time. The transmission among the correlation modes exhibits a multiscale phenomenon, which provides richer information. To investigate the multiscale transmission of the correlation modes, this paper describes a hybrid model integrating wavelet analysis and complex network theory to decompose and reconstruct the original bivariate time series into sequences in a joint time-frequency domain and defines the correlation modes at each time-frequency domain. We chose the crude oil spot and futures prices as the sample data. The empirical results indicate that the main duration of volatility (32-64 days) for the strongly positive correlation between the crude oil spot price and the futures price provides more useful information for investors. Moreover, the weighted degree, weighted in-degree and weighted out-degree of the correlation modes follow power-law distributions. The correlation fluctuation strengthens the extent of persistence over the long term, whereas persistence weakens over the short and medium term. The primary correlation modes dominating the transmission process and the major intermediary modes in the transmission process are clustered in both the short and long term.
NASA Astrophysics Data System (ADS)
Metwally, Fadia H.
2008-02-01
The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were also applied with previous optimization of the calibration matrix, as they are useful for the simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determination of mixtures of both components containing 2-12 μg ml(-1) of NIF and 2-8 μg ml(-1) of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method.
Metwally, Fadia H
2008-02-01
The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were also applied with previous optimization of the calibration matrix, as they are useful for the simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determination of mixtures of both components containing 2-12 μg ml(-1) of NIF and 2-8 μg ml(-1) of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method. PMID:17631041
A Statistical Analysis of Cotton Fiber Properties
NASA Astrophysics Data System (ADS)
Ghosh, Anindya; Das, Subhasis; Majumder, Asha
2016-04-01
This paper reports a statistical analysis of different cotton fiber properties, such as strength, breaking elongation, upper half mean length, length uniformity index, short fiber index, micronaire, reflectance and yellowness, measured from 1200 cotton bales. Univariate, bivariate and multivariate statistical analyses were invoked to elicit the interrelationships between the above-mentioned properties, taking them singly, pairwise and jointly, respectively. In the multivariate analysis, all cotton fiber properties are simultaneously considered using the multi-dimensional technique of principal factor analysis.
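The multivariate step described above, extracting principal factors from the correlation matrix of the fiber properties, can be sketched as follows. This is a generic illustration, not the paper's code; the data matrix here is hypothetical, with rows for bales and columns for properties.

```python
import numpy as np

def principal_factors(X):
    """Eigen-decomposition of the correlation matrix of X
    (rows = cotton bales, columns = fiber properties)."""
    R = np.corrcoef(X, rowvar=False)          # property-by-property correlations
    vals, vecs = np.linalg.eigh(R)            # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1]            # sort factors by explained variance
    vals, vecs = vals[order], vecs[:, order]
    return vals, vecs, vals / vals.sum()      # eigenvalues, loadings, share explained
```

Columns of the returned loading matrix describe how each factor mixes the eight measured properties; the share-explained vector indicates how many factors are worth retaining.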
Naftalin, R J; Tripathi, S
1986-01-01
Water movements have been studied in sheets of isolated rabbit ileum using a method which measures net volume flows across the mucosal and serosal surfaces of the tissue continuously with high resolution. At 35 degrees C, with the tissues incubated in isotonic Ringer solution containing D-glucose (25 mM) on both sides, there is a steady net inflow of fluid at the rate of 24 +/- 2 microliter cm-2 h-1 across the mucosal surface (Jm) and an outflow of 8 +/- 1 microliter cm-2 h-1 across the serosal surface (Js) (n = 16). The stable transepithelial p.d. across these tissues is 2.7 +/- 0.2 mV, serosa positive. Jm can be reversibly inhibited by anoxia. Ouabain (0.1 mM) added to the serosal solution inhibits inflow across the mucosal and serosal surfaces by 75% (n = 7) within 30 min. If phlorizin (0.1 mM) is added to the mucosal Ringer solution containing glucose (20 mM) within 30 min of the commencement of in vitro absorption, Jm is reduced from 37 +/- 3 to 28 +/- 2 microliter cm-2 h-1 (n = 3). Dilution of the mucosal Ringer solution by 50 mosmol kg-1 (with the serosal solution kept isosmolar) results in a rapid transient increase in mucosal inflow. An increase of 50 mosmol kg-1 in the mucosal Ringer solution with NaCl, sucrose or mannitol causes a transient reversal of mucosal flow, followed by a return of inflow at a reduced level. Rabbit ileum can transport water against gradients of approximately 75 mosmol kg-1 of sucrose, NaCl, or mannitol. Addition of polyethylene glycol (mol. wt. 20000; 3 mosmol kg-1) causes a sustained reversal of mucosal inflow; inflow can be restored only by removing polyethylene glycol from the mucosal Ringer solution. The tissue can absorb water against an osmotic gradient of 200 mM-glycerol. The above data have been incorporated into a new model to explain isotonic flow of fluid by this epithelium. The main features are that the hydraulic conductivity (Lp) of the mucosal boundary of the lateral intercellular space is approximately 1 X 10
Yamano, Hiro-o; Matsushita, Hiro-o; Yoshikawa, Kenjiro; Takagi, Ryo; Harada, Eiji; Tanaka, Yoshihito; Nakaoka, Michiko; Himori, Ryogo; Yoshida, Yuko; Satou, Kentarou; Imai, Yasushi
2016-01-01
Objectives Bowel cleansing is necessary before colonoscopy, but is a burden to patients because of the long cleansing time and large dose volume. A low-volume (2 L) hypertonic polyethylene glycol-ascorbic acid solution (PEG-Asc) has been introduced, but its possible dehydration effects have not been quantitatively studied. We compared the efficacy and safety including the dehydration risk between hypertonic PEG-Asc and isotonic PEG regimens. Design This was an observer-blinded randomised study. Participants (n=310) were allocated to receive 1 of 3 regimens on the day of colonoscopy: PEG-Asc (1.5 L) and water (0.75 L) dosed with 1 split (PEG-Asc-S) or 4 splits (PEG-Asc-M), or PEG-electrolyte solution (PEG-ES; 2.25 L) dosed with no split. Dehydration was analysed by measuring haematocrit (Ht). Results The cleansing time using the hypertonic PEG-Asc-S (3.33±0.48 hours) was significantly longer than that with isotonic PEG-ES (3.05±0.56 hours; p<0.001). PEG-Asc-M (3.00±0.53 hours) did not have this same disadvantage. Successful cleansing was achieved in more than 94% of participants using each of the 3 regimens. The percentage changes in Ht from baseline (before dosing) to the end of dosing with PEG-Asc-S (3.53±3.32%) and PEG-Asc-M (4.11±3.07%) were significantly greater than that with PEG-ES (1.31±3.01%). Conclusions These 3 lower volume regimens were efficacious and had no serious adverse effects. Even patients cleansed with isotonic PEG-ES showed significant physiological dehydration at the end of dosing. The four-split PEG-Asc-M regimen is recommended because of its shorter cleansing time without causing serious nausea. Trial registration number UMIN000013103; Results. PMID:27547443
Majumdar, Anandamayee; Gries, Corinna
2010-01-01
Lately, bivariate zero-inflated (BZI) regression models have been used in many instances in the medical sciences to model excess zeros. Examples include the BZI Poisson (BZIP) and BZI negative binomial (BZINB) models. Such formulations vary in the basic modeling aspect and use the EM algorithm (Dempster, Laird and Rubin, 1977) for parameter estimation. A different modeling formulation in the Bayesian context is given by Dagne (2004). We extend the modeling to a more general setting for multivariate ZIP models for count data with excess zeros as proposed by Li, Lu, Park, Kim, Brinkley and Peterson (1999), focusing on a particular bivariate regression formulation. For the basic formulation in the case of bivariate data, we assume that the Xi are (latent) independent Poisson random variables with parameters λi, i = 0, 1, 2. A bivariate count vector (Y1, Y2) response follows a mixture of four distributions; p0 stands for the mixing probability of a point mass distribution at (0, 0); p1, the mixing probability that Y2 = 0 while Y1 = X0 + X1; p2, the mixing probability that Y1 = 0 while Y2 = X0 + X2; and finally (1 - p0 - p1 - p2), the mixing probability that Yi = Xi + X0, i = 1, 2. The choice of the parameters {pi, λi, i = 0, 1, 2} ensures that the marginal distributions of the Yi are zero-inflated Poisson(λ0 + λi). All the parameters thus introduced are allowed to depend on covariates through canonical link generalized linear models (McCullagh and Nelder, 1989). This flexibility allows for a range of real-life applications, especially in the medical and biological fields, where the counts are bivariate in nature (with strong association between the processes) and where there is an excess of zeros in one or both processes. Our contribution in this paper is to employ a fully Bayesian approach consolidating the work of Dagne (2004) and Li et al. (1999), generalizing the modeling and sampling-based methods described by Ghosh, Mukhopadhyay and Lu (2006) to estimate the
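The four-component mixture defined above can be simulated directly, which is a useful check on the construction. A sketch with arbitrary illustrative parameter values (not from the paper):

```python
import numpy as np

def rbzip(n, p0, p1, p2, lam0, lam1, lam2, rng):
    """Draw n pairs from the four-component bivariate ZIP mixture above."""
    x0 = rng.poisson(lam0, n)                 # shared latent Poisson component X0
    x1 = rng.poisson(lam1, n)                 # X1, X2: outcome-specific components
    x2 = rng.poisson(lam2, n)
    comp = rng.choice(4, size=n, p=[p0, p1, p2, 1 - p0 - p1 - p2])
    y1 = np.where((comp == 1) | (comp == 3), x0 + x1, 0)   # components with Y1 > 0
    y2 = np.where((comp == 2) | (comp == 3), x0 + x2, 0)   # components with Y2 > 0
    return y1, y2
```

Because X0 is shared by both counts in the last mixture component, Y1 and Y2 are positively associated, while each marginal remains zero-inflated Poisson(λ0 + λi), as stated in the abstract.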
NASA Astrophysics Data System (ADS)
Sohn, Soo-Jin; Tam, Chi-Yung
2016-05-01
Capturing climatic variations in boreal winter to spring (December-May) is essential for properly predicting droughts in South Korea. This study investigates the variability and predictability of the South Korean climate during this extended season, based on observations from 60 station locations and multi-model ensemble (MME) hindcast experiments (1983/1984-2005/2006) archived at the APEC Climate Center (APCC). Multivariate empirical orthogonal function (EOF) analysis results based on observations show that the first two leading modes of winter-to-spring precipitation and temperature variability, which together account for ~80 % of the total variance, are characterized by regional-scale anomalies covering the whole South Korean territory. These modes were also closely related to some of the recurrent large-scale circulation changes in the northern hemisphere during the same season. Consistent with the above, examination of the standardized precipitation evapotranspiration index (SPEI) indicates that drought conditions in South Korea tend to be accompanied by regional-to-continental-scale circulation anomalies over East Asia to the western north Pacific. Motivated by the aforementioned findings on the spatial-temporal coherence among station-scale precipitation and temperature anomalies, a new bivariate and pattern-based downscaling method was developed. The novelty of this method is that precipitation and temperature data were first filtered using multivariate EOFs to enhance their spatial-temporal coherence, before being linked to large-scale circulation variables using canonical correlation analysis (CCA). To test its applicability and to investigate its related potential predictability, a perfect empirical model was first constructed with observed datasets as predictors. Next, a model output statistics (MOS)-type hybrid dynamical-statistical model was developed, using products from nine one-tier climate models as inputs. It was found that, with model sea
NASA Astrophysics Data System (ADS)
Sohn, Soo-Jin; Tam, Chi-Yung
2015-07-01
Capturing climatic variations in boreal winter to spring (December-May) is essential for properly predicting droughts in South Korea. This study investigates the variability and predictability of the South Korean climate during this extended season, based on observations from 60 station locations and multi-model ensemble (MME) hindcast experiments (1983/1984-2005/2006) archived at the APEC Climate Center (APCC). Multivariate empirical orthogonal function (EOF) analysis results based on observations show that the first two leading modes of winter-to-spring precipitation and temperature variability, which together account for ~80 % of the total variance, are characterized by regional-scale anomalies covering the whole South Korean territory. These modes were also closely related to some of the recurrent large-scale circulation changes in the northern hemisphere during the same season. Consistent with the above, examination of the standardized precipitation evapotranspiration index (SPEI) indicates that drought conditions in South Korea tend to be accompanied by regional-to-continental-scale circulation anomalies over East Asia to the western north Pacific. Motivated by the aforementioned findings on the spatial-temporal coherence among station-scale precipitation and temperature anomalies, a new bivariate and pattern-based downscaling method was developed. The novelty of this method is that precipitation and temperature data were first filtered using multivariate EOFs to enhance their spatial-temporal coherence, before being linked to large-scale circulation variables using canonical correlation analysis (CCA). To test its applicability and to investigate its related potential predictability, a perfect empirical model was first constructed with observed datasets as predictors. Next, a model output statistics (MOS)-type hybrid dynamical-statistical model was developed, using products from nine one-tier climate models as inputs. It was found that, with model sea
Gils, H.J.; Rebel, H.; Friedman, E.
1984-04-01
The elastic scattering of 104 MeV α particles by 40,42,43,44,48Ca, 50Ti, 51V, and 52Cr has been analyzed by phenomenological and semimicroscopic optical potentials in order to get information on isotopic and isotonic differences of the α-particle optical potentials and of nuclear matter densities. The phenomenological optical potentials, based on a Fourier-Bessel description of the real part, reveal different behavior in size and shape for the isotonic chain as compared to the isotopic chain. Odd-even effects are also indicated to be different for isotones and isotopes. The semimicroscopic analyses use a single-folding model with a density-dependent effective αN interaction including a realistic local density approximation. The calculated potentials are fully consistent with the phenomenological ones. Isotopic and isotonic differences of the nuclear matter densities obtained from the folding model in general show a behavior similar to the optical potential differences. The results on matter densities are compared to other investigations.
Wang, Lin; Wang, Xiaohan; Li, Xiaozhou
2014-07-20
Hydrosoluble emodin-borate (EmB) nanoparticles (NPs) were fabricated by a simple solvent-exchange method to address emodin's poor water solubility. As a result, negative charges were introduced on the surface of the EmB NPs. In addition, layer-by-layer assembled multilayer films containing cation-rich polymeric microgels (named PAHD) and sodium carboxymethyl cellulose (NaCMC) were employed as the drug carrier. Anionic EmB can be loaded into the PAHD/CMC multilayer films. The influences of various experimental parameters on the cargo capacity of the PAHD/CMC film were studied in detail. The loaded EmB can be released in the form of emodin molecules in the presence of isotonic sodium bicarbonate (ISB) solution. Gratifyingly, almost no EmB was released in the presence of water, PBS buffer solution, 0.9% normal saline, or 5% glucose solution. PMID:24755249
NASA Astrophysics Data System (ADS)
Roshanzamir-Nikou, M.; Goudarzi, H.
2016-02-01
A strong magnetic field significantly affects the intrinsic magnetic moment of fermions. In quantum electrodynamics, it was shown that the anomalous magnetic moment of an electron arises kinematically, while for hadrons (the proton) it results from a dynamical interaction with an external magnetic field. Taking the anomalous magnetic moment of a fermion into account, we find an exact expression for the bound-state energy and the corresponding eigenfunctions of a two-dimensional nonrelativistic spin-1/2 harmonic oscillator with a centripetal barrier (known as the isotonic oscillator) including an Aharonov-Bohm term in the presence of a strong magnetic field. We use the Laplace transform method in the calculations. We find that the singular solution contributes to the phase of the wave function at the origin and that the phase depends on the spin and magnetic flux.
NASA Technical Reports Server (NTRS)
Fitts, R. H.; Romatowski, J. G.; Blaser, C.; De La Cruz, L.; Gettelman, G. J.; Widrick, J. J.
2000-01-01
Experiments from both Cosmos and Space Shuttle missions have shown weightlessness to result in a rapid decline in the mass and force of rat hindlimb extensor muscles. Additionally, despite an increased maximal shortening velocity, peak power was reduced in rat soleus muscle post-flight. In humans, declines in voluntary peak isometric ankle extensor torque ranging from 15-40% have been reported following long- and short-term spaceflight and prolonged bed rest. Complete understanding of the cellular events responsible for the fiber atrophy and the decline in force, as well as the development of effective countermeasures, will require detailed knowledge of how the physiological and biochemical processes of muscle function are altered by spaceflight. The specific purpose of this investigation was to determine the extent to which the isotonic contractile properties of the slow- and fast-twitch fiber types of the soleus and gastrocnemius muscles of rhesus monkeys (Macaca mulatta) were altered by a 14-day spaceflight.
NASA Astrophysics Data System (ADS)
Dung, N. V.; Merz, B.; Bárdossy, A.; Apel, H.
2013-02-01
In this paper we present a novel approach for flood hazard analysis of the whole Mekong Delta with a particular focus on the Vietnamese part. Based on previous studies identifying the flood regime in the Mekong delta as non-stationary (Delgado et al., 2010), we develop a non-stationary approach for flood hazard analysis. Moreover, the approach is also bi-variate, as the flood severity in the Mekong Delta is determined by both maximum discharge and flood volume, which determines the flood duration. Probabilities of occurrences of peak discharge and flood volume are estimated by a copula. The flood discharges and volumes are used to derive synthetic hydrographs, which in turn constitute the upper boundary condition for a large-scale hydrodynamic model covering the whole Mekong Delta. The hydrodynamic model transforms the hydrographs into hazard maps. In addition, we extrapolate the observed trends in flood peak and volume and their associated non-stationary extreme value distributions to the year 2030 in order to give a flood hazard estimate for the near future. The uncertainty of extreme flood events in terms of different possible combinations of peak discharge and flood volume given by the copula is considered. Also, the uncertainty in flood hydrograph shape is combined with parameter uncertainty of the hydrodynamic model in a Monte Carlo framework yielding uncertainty estimates in terms of quantile flood maps. The proposed methodology sets the frame for the development of probabilistic flood hazard maps for the entire Mekong Delta. The combination of bi-variate, non-stationary extreme value statistics with large-scale flood inundation modeling and uncertainty quantification is novel in itself. Moreover, it is in particular novel for the Mekong Delta: a region where not even a standard hazard analysis based on a univariate, stationary extreme value statistic exists.
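The copula step described above, turning joint probabilities of peak discharge and flood volume into synthetic (peak, volume) pairs that drive the hydrograph generator, can be sketched as follows. The abstract does not name the copula family, so this illustration assumes a Gaussian copula with Gumbel (EV1) marginals and made-up parameter values:

```python
import math
import numpy as np

def gaussian_copula_uniforms(n, rho, rng):
    """Correlated U(0,1) pairs via a Gaussian copula with correlation rho."""
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.standard_normal(n)
    to_u = lambda z: 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2.0)) for v in z]))
    return to_u(z1), to_u(z2)                 # standard normal CDF of each margin

def gumbel_inv(u, loc, scale):
    """Inverse CDF of the Gumbel (EV1) distribution, a common flood-frequency marginal."""
    return loc - scale * np.log(-np.log(u))

rng = np.random.default_rng(42)
u1, u2 = gaussian_copula_uniforms(5000, rho=0.7, rng=rng)
peak = gumbel_inv(u1, loc=20000.0, scale=4000.0)   # hypothetical peak discharge, m3/s
volume = gumbel_inv(u2, loc=300.0, scale=60.0)     # hypothetical flood volume, km3
```

Each sampled (peak, volume) pair would parameterize one synthetic hydrograph; non-stationarity could then be introduced by letting the marginal parameters trend with time, as the study does for the 2030 projection.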
Investigation of the Non-Symmetrical Dependence of Precipitation using Empirical Bivariate Copulas
NASA Astrophysics Data System (ADS)
Suroso, Suroso; Bárdossy, András
2015-04-01
Precipitation plays an important role in hydrological analysis. Some common precipitation models are developed under the assumption of a symmetrical Gaussian dependence structure. This study examines the asymmetrical spatial dependence of precipitation using empirical bivariate copulas. Empirical bivariate copulas are constructed from all possible pairwise combinations of rain gauge data at different locations in Singapore and Germany. In addition, the concept of regionalized variables in a spatial random process is applied with a given separating distance. For any selected time interval, precipitation over the region of interest is assumed to be a realization of a spatially stationary random process. In order to take temporal characteristics into account, precipitation at different time scales (hourly, 2-hour, 3-hour, 4-hour, 6-hour, 12-hour, daily, 5-day, 10-day, 15-day, monthly, quarterly) and in different seasons is analyzed. The behavior of the correlation functions is elaborated, with zero precipitation treated as censored values. Asymmetrical spatial dependence is measured by integrating the empirical bivariate copula density in the upper-right and lower-left parts for given thresholds and then comparing the two. Similarly, zero precipitation is handled via latent variables, and the thresholds are therefore taken as percentiles larger than the probability of zeros. Gaussian-simulation-based testing is adopted to quantify the degree of uncertainty. Empirical evidence shows that precipitation correlation decreases with the length of the distance interval and increases with the length of the time interval. There is an interesting systematic pattern: positive non-symmetrical spatial dependence dominates over negative and symmetrical dependence in terms of distance and time interval. The number of pairs of rain gauge stations with positive dependence is clearly the biggest for cases
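The corner-mass comparison described above, integrating the empirical copula density in the upper-right and lower-left parts, reduces to counting rank pairs beyond a threshold. A minimal sketch (an illustration, not the study's implementation; censoring of zeros is omitted here):

```python
import numpy as np

def corner_asymmetry(x, y, q=0.9):
    """Empirical copula mass in the upper-right vs lower-left corners.

    x, y : paired observations (e.g. precipitation at two gauges);
    q    : threshold percentile, to be taken above the probability of zeros."""
    n = len(x)
    u = np.argsort(np.argsort(x)) / (n - 1.0)   # rank transform to [0, 1]
    v = np.argsort(np.argsort(y)) / (n - 1.0)
    upper = np.mean((u > q) & (v > q))          # joint exceedance of high ranks
    lower = np.mean((u < 1.0 - q) & (v < 1.0 - q))
    return upper, lower
```

For a symmetric (e.g. Gaussian) dependence structure the two corner masses agree up to sampling noise; upper > lower indicates stronger dependence in the wet extremes than in the dry ones.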
Bivariate Genome-Wide Association Analysis of the Growth and Intake Components of Feed Efficiency
Beever, Jonathan E.; Bollero, Germán A.; Southey, Bruce R.; Faulkner, Daniel B.; Rodriguez-Zas, Sandra L.
2013-01-01
Single nucleotide polymorphisms (SNPs) associated with average daily gain (ADG) and dry matter intake (DMI), two major components of feed efficiency in cattle, were identified in a genome-wide association study (GWAS). Uni- and multi-SNP models were used to describe feed efficiency in a training data set and the results were confirmed in a validation data set. Results from the univariate and bivariate analyses of ADG and DMI, adjusted for the feedlot beef steer maintenance requirements, were compared. The bivariate uni-SNP analysis identified (P-value <0.0001) 11 SNPs, while the univariate analyses of ADG and DMI identified 8 and 9 SNPs, respectively. Among the six SNPs confirmed in the validation data set, five SNPs were mapped to KDELC2, PHOX2A, and TMEM40. Findings from the uni-SNP models were used to develop highly accurate predictive multi-SNP models in the training data set. Despite the substantially smaller size of the validation data set, the training multi-SNP models had slightly lower predictive ability when applied to the validation data set. Six Gene Ontology molecular functions related to ion transport activity were enriched (P-value <0.001) among the genes associated with the detected SNPs. The findings from this study demonstrate the complementary value of the uni- and multi-SNP models, and of the univariate and bivariate GWAS analyses. The identified SNPs can be used for genome-enabled improvement of feed efficiency in feedlot beef cattle, and can aid in the design of empirical studies to further confirm the associations. PMID:24205251
PCR Diagnosis of Pneumocystis Pneumonia: a Bivariate Meta-Analysis ▿
Lu, Yuan; Ling, Guoya; Qiang, Chenyi; Ming, Qinshou; Wu, Cong; Wang, Ke; Ying, Zouxiao
2011-01-01
We undertook a bivariate meta-analysis to assess the overall accuracy of respiratory specimen PCR assays for diagnosing Pneumocystis pneumonia. The summary sensitivity and specificity were 0.99 (95% confidence interval, 0.96 to 1.00) and 0.90 (0.87 to 0.93). Subgroup analyses showed that quantitative PCR analysis and the major surface glycoprotein gene target had the highest specificity value (0.93). Respiratory specimen PCR results are sufficient to confirm or exclude the disease for at-risk patients suspected of having Pneumocystis pneumonia. PMID:22012008
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
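The technique the abstract describes can be sketched compactly. The following is a minimal illustration of the standard approach (Box-Muller plus a Cholesky-style mixing step), not a reconstruction of Campbell's FORTRAN routine; the function name and interface are assumptions for illustration:

```python
import math
import random

def bivariate_normal_pair(mu1, mu2, sigma1, sigma2, rho, rng=random):
    """Draw one (x, y) pair from a bivariate normal distribution with
    the given means, standard deviations, and correlation coefficient."""
    # Box-Muller: two independent N(0, 1) variates from two uniforms.
    u1 = 1.0 - rng.random()          # shift to (0, 1] so log() is safe
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    z1 = r * math.cos(2.0 * math.pi * u2)
    z2 = r * math.sin(2.0 * math.pi * u2)
    # Mixing step: (z1, rho*z1 + sqrt(1 - rho^2)*z2) has correlation rho.
    x = mu1 + sigma1 * z1
    y = mu2 + sigma2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y
```

As in the paper, accuracy of such a scheme is limited only by the uniform generator and floating-point function evaluation.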
Dogu, Beril; Sirzai, Hulya; Yilmaz, Figen; Polat, Basak; Kuran, Banu
2013-10-01
The primary objective of our study was to evaluate the effect of 6-week-long isotonic and isometric hand exercises on pain, hand functions, dexterity and quality of life in women diagnosed with rheumatoid arthritis (RA). Our secondary objective was to assess the changes in handgrip strength and disease activity. This randomized, parallel, single-blinded 6-week intervention study enrolled 52 female patients between 40 and 70 years of age, who were diagnosed with RA according to American College of Rheumatology criteria, had disease duration of at least 1 year and had a stage 1-3 disease according to Steinbrocker's functional evaluation scale. Patients were randomized into isotonics and isometrics groups. Exercises were performed for 6 weeks. All patients received wax therapy in the first 2 weeks. Pain was assessed with a visual analog scale (VAS), hand functions with the Duruöz Hand Index (DHI), dexterity with the nine-hole peg test (NHPT) and quality of life with the Rheumatoid Arthritis Quality of Life questionnaire (RAQoL). Dominant and non-dominant handgrip strengths (HS) were measured. Disease activity was determined by disease activity score (DAS 28). We evaluated the difference in the above parameters between baseline and 6 weeks by the Wilcoxon paired test. The study was completed with 47 patients (isotonics n = 23; isometrics n = 24). VAS, DHI, NHPT, and RAQoL scores significantly improved in both groups by the end of the 6th week compared to the baseline scores of the study (for isotonics p = 0.036, p = 0.002, p = 0.0001, p = 0.003; for isometrics p = 0.021, p = 0.002, p = 0.005, p = 0.01, respectively). DAS 28 scores decreased in both exercise groups (p = 0.002; p = 0.0001, respectively), while isometrics showed a significant increase in dominant HS (p = 0.029), and isotonics showed a significant increase in non-dominant HS (p = 0.013). This study showed that isometric and isotonic hand exercises decrease pain and disease activity and improve hand functions
Application of a Bivariate Gamma Distribution for a Chemically Reacting Plume in the Atmosphere
NASA Astrophysics Data System (ADS)
Ferrero, Enrico; Mortarini, Luca; Alessandrini, Stefano; Lacagnina, Carlo
2013-04-01
The joint concentration probability density function of two reactive chemical species is modelled using a bivariate Gamma distribution coupled with a three-dimensional fluctuating plume model able to simulate the diffusion and mixing of turbulent plumes. A wind-tunnel experiment (Brown and Bilger, J Fluid Mech 312:373-407, 1996), carried out in homogeneous unbounded turbulence, in which nitrogen oxide is released from a point source in an ozone doped background and the chemical reactions take place in non-equilibrium conditions, is considered as a test case. The model is based on a stochastic Langevin equation reproducing the barycentre position distribution through a proper low-pass filter for the turbulence length scales. While the meandering large-scale motion of the plume is directly simulated, the internal mixing relative to the centroid is reproduced using a bivariate Gamma density function. The effect of turbulence on the chemical reaction (segregation), which in this case has not yet attained equilibrium, is directly evaluated through the covariance of the tracer concentration fields. The computed mean concentrations and the O3-NO concentration covariance are also compared with those obtained by the Alessandrini and Ferrero Lagrangian single particle model (Alessandrini and Ferrero, Physica A 388:1375-1387, 2009) that entails an ad hoc parametrization for the segregation coefficient.
Bützler, Jennifer; Vetter, Sebastian; Jochems, Nicole; Schlick, Christopher M
2012-01-01
On the basis of three empirical studies Fitts' Law was refined for bivariate pointing tasks on large touch screens. In the first study different target width parameters were investigated. The second study considered the effect of the motion angle. Based on the results of the two studies a refined model for movement time in human-computer interaction was formulated. A third study, which is described here in detail, concerns the validation of the refined model. For the validation study 20 subjects had to execute a bivariate pointing task on a large touch screen. In the experimental task 250 rectangular target objects were displayed at a randomly chosen position on the screen covering a broad range of ID values (ID = [1.01; 4.88]). Compared to existing refinements of Fitts' Law, the new model shows the highest predictive validity. A promising field of application of the model is the ergonomic design and evaluation of project management software. By using the refined model, software designers can calculate a priori the appropriate angular position and the size of buttons, menus or icons. PMID:22317256
Das, Kiranmoy; Daniels, Michael J
2014-03-01
Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium, and low) at baseline. PMID:24400941
McCabe, E.R.B.; Towbin, J.A. ); Engh, G. van den; Trask, B.J. )
1992-12-01
Bivariate flow karyotyping was used to estimate the deletion sizes for a series of patients with Xp21 contiguous gene syndromes. The deletion estimates were used to develop an approximate scale for the genomic map in Xp21. The bivariate flow karyotype results were compared with clinical and molecular genetic information on the extent of the patients' deletions, and these various types of data were consistent. The resulting map spans >15 Mb, from the telomeric interval between DXS41 (99-6) and DXS68 (1-4) to a position centromeric to the ornithine transcarbamylase locus. The deletion sizing was considered to be accurate to ±1 Mb. The map provides information on the relative localization of genes and markers within this region. For example, the map suggests that the adrenal hypoplasia congenita and glycerol kinase genes are physically close to each other, are within 1-2 Mb of the telomeric end of the Duchenne muscular dystrophy (DMD) gene, and are nearer to the DMD locus than to the more distal marker DXS28 (C7). Information of this type is useful in developing genomic strategies for positional cloning in Xp21. These investigations demonstrate that the DNA from patients with Xp21 contiguous gene syndromes can be valuable reagents, not only for ordering loci and markers but also for providing an approximate scale to the map of the Xp21 region surrounding DMD. 44 refs., 3 figs.
Kipnis, Victor; Freedman, Laurence S.; Carroll, Raymond J.; Midthune, Douglas
2015-01-01
Semicontinuous data in the form of a mixture of a large portion of zero values and continuously distributed positive values frequently arise in many areas of biostatistics. This study is motivated by the analysis of relationships between disease outcomes and intakes of episodically consumed dietary components. An important aspect of studies in nutritional epidemiology is that true diet is unobservable and commonly evaluated by food frequency questionnaires with substantial measurement error. Following the regression calibration approach for measurement error correction, unknown individual intakes in the risk model are replaced by their conditional expectations given mismeasured intakes and other model covariates. Those regression calibration predictors are estimated using short-term unbiased reference measurements in a calibration substudy. Since dietary intakes are often “energy-adjusted”, e.g., by using ratios of the intake of interest to total energy intake, the correct estimation of the regression calibration predictor for each energy-adjusted episodically consumed dietary component requires modeling short-term reference measurements of the component (a semicontinuous variable) and energy (a continuous variable) simultaneously in a bivariate model. In this paper, we develop such a bivariate model, together with its application to regression calibration. We illustrate the new methodology using data from the NIH-AARP Diet and Health Study (Schatzkin et al., 2001, American Journal of Epidemiology 154, 1119–1125), and also evaluate its performance in a simulation study. PMID:26332011
Pamidimukkala, Jaya; Jandhyala, Bhagavan S
2004-01-01
Obese Zucker rats (OZR) are hyperinsulinemic, hyperglycemic and dyslipidemic and develop salt-dependent hypertension. Since salt sensitivity is considered to be due to impaired handling of renal sodium excretion, these studies were conducted in the obese and lean Zucker rats (LZR) anesthetized with Inactin to evaluate renal function under basal conditions and during acute isotonic fluid volume expansion (VE). Mean arterial blood pressure (MBP), heart rate (HR), renal blood flow (RBF) and glomerular filtration rate (GFR) were not significantly different between the lean Zucker rats fed a normal diet and those fed a salt-rich diet (8% NaCl). However, basal UV and UNaV were significantly greater in the LZR fed high salt. During VE essentially identical increases occurred in GFR, UV and UNaV in both the lean groups. In the OZR fed the salt-rich diet also, there were no significant changes in the heart rate, RBF and GFR. However, arterial blood pressure of the OZR fed the salt-rich diet was significantly greater than that of the OZR on the normal diet as well as that of both the lean groups. Also, as in the LZR, basal UV and UNaV were significantly greater in the salt-fed obese rats. During volume expansion there were no impairments in the ability of the obese groups fed normal or salt-rich diet to eliminate sodium and water during volume load. In fact, the net sodium and water excretions during and 60 min after VE in both the obese groups were significantly greater than those of the corresponding lean groups. Furthermore, these values in the OZR fed the salt-rich diet were significantly greater than those of the obese rats on the normal salt diet, perhaps due to the contribution of pressure natriuretic mechanisms. These data demonstrate that although OZR are salt sensitive, the renal mechanisms that would collectively respond to acute isotonic VE were fully functional. An unexpected and a novel finding in these studies is that the salt rich diet, in addition to increasing arterial blood pressure
Cosmic statistics of statistics
NASA Astrophysics Data System (ADS)
Szapudi, István; Colombi, Stéphane; Bernardeau, Francis
1999-12-01
The errors on statistics measured in finite galaxy catalogues are exhaustively investigated. The theory of errors on factorial moments by Szapudi & Colombi is applied to cumulants via a series expansion method. All results are subsequently extended to the weakly non-linear regime. Together with previous investigations this yields an analytic theory of the errors for moments and connected moments of counts in cells from highly non-linear to weakly non-linear scales. For non-linear functions of unbiased estimators, such as the cumulants, the phenomenon of cosmic bias is identified and computed. Since it is subdued by the cosmic errors in the range of applicability of the theory, correction for it is inconsequential. In addition, the method of Colombi, Szapudi & Szalay concerning sampling effects is generalized, adapting the theory for inhomogeneous galaxy catalogues. While previous work focused on the variance only, the present article calculates the cross-correlations between moments and connected moments as well for a statistically complete description. The final analytic formulae representing the full theory are explicit but somewhat complicated. Therefore we have made available a fortran program capable of calculating the described quantities numerically (for further details e-mail SC at colombi@iap.fr). An important special case is the evaluation of the errors on the two-point correlation function, for which this should be more accurate than any method put forward previously. This tool will be immensely useful in the future for assessing the precision of measurements from existing catalogues, as well as aiding the design of new galaxy surveys. To illustrate the applicability of the results and to explore the numerical aspects of the theory qualitatively and quantitatively, the errors and cross-correlations are predicted under a wide range of assumptions for the future Sloan Digital Sky Survey. The principal results concerning the cumulants ξ, Q3 and Q4 is that
NASA Astrophysics Data System (ADS)
Speich, Matthias J. R.; Bernhard, Luzi; Teuling, Adriaan J.; Zappa, Massimiliano
2015-04-01
Hydrological classification schemes are important tools for assessing the impacts of a changing climate on the hydrology of a region. In this paper, we present bivariate mapping as a simple means of classifying hydrological data for a quantitative and qualitative assessment of temporal change. Bivariate mapping consists of classifying map objects into discrete classes based on the values of two variables. We demonstrate the application of bivariate mapping to distributed hydro-climatic model outputs for the whole of Switzerland with a cell size of 200 m and compared the resulting bivariate maps with an existing classification of Swiss river regimes. The effects of scale were investigated by comparing these raster maps with a map showing the same variables aggregated to sub-basins with a mean area of 40 km2. Finally, maps of the current state were compared with predictions for future periods based on various model chains and greenhouse gas emission scenarios. For the map comparisons, four measures of association and two measures of agreement were used. Of all the variable pairs tested, a bivariate map combining runoff and snowmelt contribution to runoff obtained the highest similarity scores with the map of river regimes, which suggests a strong link between the combination of these variables and intra-annual streamflow variations. Also, this classification offers new insights, as it includes absolute values of runoff, which are often ignored in classification schemes. Comparing current-state maps with future predictions indicated that the magnitude of change is reflected in the patterns of bivariate maps, with lower agreement scores for predictions further away in time or when higher greenhouse gas emissions are assumed. Furthermore, a visualization of the spatial distribution of agreement scores allows a qualitative assessment of the magnitude of change for different regions, and an analysis of the differences in spatial patterns of predictions based on different
Reinking, Mark F.; Rauh, Mitchell J.
2012-01-01
Purpose: The purpose of this study was to examine the relationships between isotonic ankle plantar flexor endurance (PFE), foot pronation as measured by navicular drop, and exercise-related leg pain (ERLP). Background: Exercise-related leg pain is a common occurrence in competitive and recreational runners. The identification of factors contributing to the development of ERLP may help guide methods for the prevention and management of overuse injuries. Methods: Seventy-seven (44 males, 33 females) competitive runners from five collegiate cross-country (XC) teams consented to participate in the study. Isotonic ankle PFE and foot pronation were measured using the standing heel-rise and navicular drop (ND) tests, respectively. Demographic information, anthropometric measurements, and ERLP history were also recorded. Subjects were then prospectively tracked for occurrence of ERLP during the 2009 intercollegiate cross-country season. Multivariate logistic regression analysis was used to examine the relationships between isotonic ankle joint PFE and ND and the occurrence of ERLP. Results: While no significant differences were identified for isotonic ankle PFE between groups of collegiate XC runners with and without ERLP, runners with a ND >10 mm were almost 7 times (OR=6.6, 95% CI=1.2–38.0) more likely to incur medial ERLP than runners with ND <10 mm. Runners with a history of ERLP in the month previous to the start of the XC season were 12 times (OR=12.3, 95% CI=3.1–48.9) more likely to develop an in-season occurrence of ERLP. Conclusion: While PFE did not appear to be a risk factor in the development of ERLP in this group of collegiate XC runners, those with a ND greater than 10 mm may be at greater odds of incurring medial ERLP. Level of Evidence: 2b. PMID:22666641
Bayesian neural networks for bivariate binary data: an application to prostate cancer study.
Chakraborty, Sounak; Ghosh, Malay; Maiti, Tapabrata; Tewari, Ashutosh
2005-12-15
Prostate cancer is one of the most common cancers in American men. The cancer could either be locally confined, or it could spread outside the organ. When locally confined, there are several options for treating and curing this disease. Otherwise, surgery is the only option, and in extreme cases of outside spread, it could very easily recur within a short time even after surgery and subsequent radiation therapy. Hence, it is important to know, based on pre-surgery biopsy results how likely the cancer is organ-confined or not. The paper considers a hierarchical Bayesian neural network approach for posterior prediction probabilities of certain features indicative of non-organ confined prostate cancer. In particular, we find such probabilities for margin positivity (MP) and seminal vesicle (SV) positivity jointly. The available training set consists of bivariate binary outcomes indicating the presence or absence of the two. In addition, we have certain covariates such as prostate specific antigen (PSA), Gleason score and the indicator for the cancer to be unilateral or bilateral (i.e. spread on one or both sides) in one data set and gene expression microarrays in another data set. We take a hierarchical Bayesian neural network approach to find the posterior prediction probabilities for a test and validation set, and compare these with the actual outcomes for the first data set. In case of the microarray data we use leave one out cross-validation to assess the accuracy of our method. We also demonstrate the superiority of our method to the other competing methods through a simulation study. The Bayesian procedure is implemented by an application of the Markov chain Monte Carlo numerical integration technique. For the problem at hand, our Bayesian bivariate neural network procedure is shown to be superior to the classical neural network, Radford Neal's Bayesian neural network as well as bivariate logistic models to predict jointly the MP and SV in a patient in both the
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Bernauer, E. M.; Ertl, A. C.; Bulbulian, R.; Bond, M.
1994-01-01
The purpose of our study was to determine if an intensive, intermittent, isokinetic, lower extremity exercise training program would attenuate or eliminate the decrease of muscular strength and endurance during prolonged bed rest (BR). The 19 male subjects (36 +/- 1 yr, 178 +/- 2 cm, 76.5 +/- 1.7 kg) were allocated into a no exercise (NOE) training group (N = 5), an isotonic (lower extremity cycle ergometer) exercise (ITE) training group (N = 7), and an isokinetic (isokinetic knee flexion-extension) exercise (IKE) training group (N = 7). Peak knee (flexion and extension) and shoulder (abduction-adduction) functions were measured weekly in all groups with one 5-repetition set. After BR, average knee extension total work decreased by 16% with NOE, increased by 27% with IKE, and was unchanged with ITE. Average knee flexion total work and peak torque (strength) responses were unchanged in all groups. Force production increased by 20% with IKE and was unchanged with NOE and ITE. Shoulder total work was unchanged in all groups, while gross average peak torque increased by 27% with ITE and by 22% with IKE, and was unchanged with NOE. Thus, while ITE training can maintain some isokinetic functions during BR, maximal intermittent IKE training can increase other functions above pre-BR control levels.
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Starr, J. C.; Van Beaumont, W.; Convertino, V. A.
1983-01-01
Measurements of maximal grip strength and endurance at 40 percent max strength were obtained for 7 men 19-21 years of age, 1-2 days before and on the first recovery day during three 2-week bedrest (BR) periods, each separated by a 3-week ambulatory recovery period. The subjects performed isometric exercise (IME) for 1 hr/day, isotonic exercise (ITE) for 1 hr/day, and no exercise (NOE) in the three BR periods. It was found that the mean maximal grip strength was unchanged after all three BR periods. Mean grip endurance was found to be unchanged after IME and ITE training, but was significantly reduced after NOE. These results indicate that IME and ITE training during BR do not increase or decrease maximal grip strength, although they prevent loss of grip endurance, while the maximal strength of all other major muscle groups decreases in proportion to the length of BR to 70 days. The maximal strength reduction of the large muscle groups was found to be about twice that of the small muscle groups during BR. In addition, it is shown that changes in maximal strength after spaceflight, BR, or water immersion deconditioning cannot be predicted from changes in submaximal or maximal oxygen uptake values.
Oh, Won Sup; Chon, Sung-Bin
2016-05-01
Fluid resuscitation, hemostasis, and transfusion are essential in the care of hemorrhagic shock. Although estimation of the residual blood volume is crucial, the standard measuring methods are impractical or unsafe. Vital signs, central venous or pulmonary artery pressures are inaccurate. We hypothesized that the residual blood volume for acute, non-ongoing hemorrhage was calculable using serial hematocrit measurements and the volume of isotonic solution infused. Blood volume is the sum of volumes of red blood cells and plasma. For acute, non-ongoing hemorrhage, red blood cell volume would not change. A certain portion of the isotonic fluid would increase plasma volume. Mathematically, we suggest that the residual blood volume after acute, non-ongoing hemorrhage might be calculated as 0.25N/[(Hct1/Hct2)-1], where Hct1 and Hct2 are the initial and subsequent hematocrits, respectively, and N is the volume of isotonic solution infused. In vivo validation and modification is needed before clinical application of this model. PMID:27134507
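The proposed estimator is simple enough to state as a one-line function; the sketch below is a hypothetical helper (the name and the input check are assumptions, not the authors' code):

```python
def residual_blood_volume(hct1, hct2, n_infused):
    """Estimate residual blood volume after acute, non-ongoing hemorrhage
    via the proposed model: 0.25 * N / ((Hct1 / Hct2) - 1).

    hct1, hct2: initial and subsequent hematocrits (fractions)
    n_infused:  volume of isotonic solution infused (same units as output)
    """
    if hct2 >= hct1:
        # The model assumes dilution lowers the hematocrit (Hct1 > Hct2).
        raise ValueError("model requires hct1 > hct2")
    return 0.25 * n_infused / (hct1 / hct2 - 1.0)
```

For example, a fall in hematocrit from 0.40 to 0.32 after infusing 2000 mL gives 0.25 × 2000 / (1.25 − 1) = 2000 mL of residual blood.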
NASA Astrophysics Data System (ADS)
Yang, Qiong; Wang, Hua-Lei; Chai, Qing-Zhen; Liu, Min-Liang
2015-09-01
Total Routhian surface (TRS) calculations for even-even N = 76 isotones with 54 ≤ Z ≤ 68 have been performed in three-dimensional (β2, γ, β4) deformation space. Calculated results of the equilibrium deformations are presented and compared with other theoretical predictions and available experimental data. The behavior of collective angular momentum shows the neutron rotation-alignment is preferred in the lighter N = 76 isotones, while for the heavier ones the proton alignment is favored. Moreover, multi-pair nucleon alignments and their competition (e.g., in 144Er) are predicted. It is pointed out that these nuclei in the N = 76 isotonic chain exhibit triaxiality or γ softness in high-spin states as well as ground states. Based on deformation-energy curves with respect to axial and non-axial quadrupole deformations, the shape instabilities are evaluated in detail and predicted, particularly in the γ direction. Such instabilities are also supported by the odd- and even-spin level staggering of the observed γ bands, which is usually used to distinguish between γ-rigid and γ-soft asymmetry. Supported by National Natural Science Foundation of China (10805040, 11175217), Foundation and Advanced Technology Research Program of Henan Province (132300410125), S & T Research Key Program of Henan Province Education Department (13A140667)
Nomogram for Obtaining Z-Test Statistic from Kendall's S and Sample Size 10 to 50.
ERIC Educational Resources Information Center
Graney, Marshall J.
1979-01-01
Kendall's S is often used for measuring magnitude of bivariate association in social and behavioral research. This nomogram permits a research analyst to make rapid and accurate evaluation of the statistical significance of S without recourse to formulae or computations in all except borderline cases. (Author/CTM)
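The statistic the nomogram reads off is the standard normal approximation to Kendall's S. A minimal sketch, assuming no tied observations (the function name and the continuity-correction flag are illustrative, not from the source):

```python
import math

def kendall_z(s, n, continuity=True):
    """Normal-approximation z statistic for Kendall's S with sample size n.

    Assumes no ties, so Var(S) = n(n-1)(2n+5)/18. With the continuity
    correction, |S| is reduced by 1 before standardizing. The approximation
    is reasonable in roughly the n = 10 to 50 range the nomogram covers.
    """
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if continuity and s != 0:
        s = s - 1 if s > 0 else s + 1
    return s / math.sqrt(var_s)
```

For n = 10, Var(S) = 10·9·25/18 = 125, so S = 20 gives z = 19/√125 ≈ 1.70 with the continuity correction.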
Modeling the Growth of Students' Covariational Reasoning during an Introductory Statistics Course
ERIC Educational Resources Information Center
Zieffler, Andrew S.; Garfield, Joan B.
2009-01-01
This study examined students' development of reasoning about quantitative bivariate data during a one-semester university-level introductory statistics course. There were three research questions of interest: (1) What is the nature, or pattern of change in students' development in reasoning throughout the course?; (2) Is the sequencing of…
Akdur, Hülya; Yigit, Zerrin; Arabaci, Umit; Polat, Mine Gülden; Gürses, Hülya Nilgün; Güzelsoy, Deniz
2002-11-01
The aim of the present study was to evaluate the tolerance to various exercises by determining the cardiovascular response to static and dynamic exercises in patients with nonvalvular atrial fibrillation. Fifty patients (mean age: 63.6 +/- 10.3 years; male: 25, female: 25) with chronic (more than one year) nonvalvular atrial fibrillation were included in the study. All patients underwent exercise tests, adjusted appropriately according to their symptoms, as dynamic exercise on a Marquette Case 15 device according to a modified Bruce protocol. Heart rate and systolic and diastolic arterial pressures were measured at rest and at all stages of the exercise; and the heart rate-pressure products were evaluated. A handgrip test was also conducted as static exercise. The measurements were made before, at the 1st, 2nd and 3rd minutes, and in the recovery periods of the exercise. The percent values of the changes of the 1st, 2nd and 3rd minute measurements in relation to the initial values for both exercises were compared. In addition, the maximal responses to the exercise tests and the post exercise values were also compared. For statistical evaluations, the paired Student-t test was used. Heart rate and pressure-heart rate product values obtained at 1, 2, and 3 minutes during the treadmill exercise test were significantly high compared to the handgrip values (P < 0.0001). The arterial systolic and diastolic pressure values in the 1st minute were also significantly higher during the handgrip test (P = 0.0100 and P = 0.0320, respectively). The values of diastolic arterial pressure at the 2nd minute during the handgrip test, and systolic arterial pressure at the 3rd minute during the treadmill test were found to be statistically significant (P = 0.0240, P = 0.0340, respectively). The mean exercise time and MET value during the treadmill exercise test were 7.18 +/- 2.65 minutes and 5.32 +/- 1.38 mL·kg⁻¹·min⁻¹, respectively. During the recovery period, the 5th minute
Motsa, S S; Magagula, V M; Sibanda, P
2014-01-01
This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature. PMID:25254252
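The Chebyshev spectral collocation ingredient of the method can be illustrated with the classical differentiation-matrix construction on Chebyshev-Gauss-Lobatto nodes. This is the textbook recipe, not necessarily the authors' exact implementation; differentiation at the nodes is exact for polynomials up to the grid degree:

```python
import math

def cheb(n):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x.

    Standard construction: x_j = cos(pi*j/n); off-diagonal entries
    D_ij = (c_i/c_j)/(x_i - x_j) with c_j = (-1)^j (doubled at the
    endpoints); diagonal entries via the negative-sum trick, which
    makes every row sum to zero (D annihilates constants).
    Returns (D, x) with D an (n+1)x(n+1) list of lists.
    """
    if n == 0:
        return [[0.0]], [1.0]
    x = [math.cos(math.pi * j / n) for j in range(n + 1)]
    c = [(2.0 if j in (0, n) else 1.0) * (-1.0) ** j for j in range(n + 1)]
    D = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) / (x[i] - x[j])
        D[i][i] = -sum(D[i][j] for j in range(n + 1) if j != i)
    return D, x
```

In a collocation scheme such as the one described, spatial derivatives in the quasilinearised equations are replaced by multiplications with D (and D², etc.), reducing each NPDE to a system of algebraic or ordinary differential equations at the nodes.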
Estimating the Correlation in Bivariate Normal Data with Known Variances and Small Sample Sizes
Fosdick, Bailey K.; Raftery, Adrian E.
2013-01-01
We consider the problem of estimating the correlation in bivariate normal data when the means and variances are assumed known, with emphasis on the small sample case. We consider eight different estimators, several of them considered here for the first time in the literature. In a simulation study, we found that Bayesian estimators using the uniform and arc-sine priors outperformed several empirical and exact or approximate maximum likelihood estimators in small samples. The arc-sine prior did better for large values of the correlation. For testing whether the correlation is zero, we found that Bayesian hypothesis tests outperformed significance tests based on the empirical and exact or approximate maximum likelihood estimators considered in small samples, but that all tests performed similarly for sample size 50. These results lead us to suggest using the posterior mean with the arc-sine prior to estimate the correlation in small samples when the variances are assumed known. PMID:23378667
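A minimal sketch of the estimator the abstract recommends: the posterior mean of the correlation under the arc-sine prior, computed by grid quadrature. It assumes the data have already been standardized using the known means and variances, as in the paper's setting; the grid size and the simulated sample are implementation choices, not taken from the paper.

```python
import numpy as np

def posterior_mean_corr(x, y, grid_size=2001):
    """Posterior mean of the correlation rho of standardized bivariate normal
    data under an arc-sine prior, by quadrature on a uniform rho grid."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sxx, syy, sxy = x @ x, y @ y, x @ y
    rho = np.linspace(-0.9999, 0.9999, grid_size)
    # Log-likelihood of n bivariate normal points with unit variances, correlation rho
    loglik = (-0.5 * n * np.log(1.0 - rho**2)
              - (sxx - 2.0 * rho * sxy + syy) / (2.0 * (1.0 - rho**2)))
    logprior = -0.5 * np.log(1.0 - rho**2)        # arc-sine prior, up to a constant
    logpost = loglik + logprior
    w = np.exp(logpost - logpost.max())           # unnormalized posterior on the grid
    return float(np.sum(rho * w) / np.sum(w))

# Small-sample demonstration with a known true correlation of 0.6
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=20)
est = posterior_mean_corr(z[:, 0], z[:, 1])
```

Swapping the prior for the uniform on (-1, 1) amounts to dropping the `logprior` term, which reproduces the other Bayesian estimator the paper compares.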
Tavlarides, Lawrence L.; Bae, Jae-Heum
1991-01-01
A laser capillary spectrophotometric technique measures real time or near real time bivariate drop size and concentration distribution for a reactive liquid-liquid dispersion system. The dispersion is drawn into a precision-bore glass capillary and an appropriate light source is used to distinguish the aqueous phase from slugs of the organic phase at two points along the capillary whose separation is precisely known. The suction velocity is measured, as is the length of each slug from which the drop free diameter is calculated. For each drop, the absorptivity at a given wavelength is related to the molar concentration of a solute of interest, and the concentration of given drops of the organic phase is derived from pulse heights of the detected light. This technique permits on-line monitoring and control of liquid-liquid dispersion processes.
Tavlarides, L.L.; Bae, J.H.
1991-12-24
A laser capillary spectrophotometric technique measures real time or near real time bivariate drop size and concentration distribution for a reactive liquid-liquid dispersion system. The dispersion is drawn into a precision-bore glass capillary and an appropriate light source is used to distinguish the aqueous phase from slugs of the organic phase at two points along the capillary whose separation is precisely known. The suction velocity is measured, as is the length of each slug from which the drop free diameter is calculated. For each drop, the absorptivity at a given wavelength is related to the molar concentration of a solute of interest, and the concentration of given drops of the organic phase is derived from pulse heights of the detected light. This technique permits on-line monitoring and control of liquid-liquid dispersion processes. 17 figures.
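The two measurements described in the patent abstract reduce to simple formulas: an equivalent-sphere drop diameter from the slug volume in the capillary, and a Beer-Lambert concentration from the measured absorbance. The sketch below is an illustrative reconstruction; the capillary radius, absorptivity, and path-length values are hypothetical, not from the patent.

```python
import numpy as np

def drop_diameter(slug_length, capillary_radius):
    """Equivalent-sphere ('free') diameter of a drop from its slug length,
    assuming the slug occupies a cylinder of the capillary's bore."""
    volume = np.pi * capillary_radius**2 * slug_length    # cylindrical slug volume
    return (6.0 * volume / np.pi) ** (1.0 / 3.0)          # sphere of equal volume

def concentration(absorbance, molar_absorptivity, path_length):
    """Beer-Lambert law A = eps * c * l, solved for the molar concentration c."""
    return absorbance / (molar_absorptivity * path_length)

# Hypothetical values in SI units
d = drop_diameter(slug_length=2.0e-3, capillary_radius=0.25e-3)   # metres
conc = concentration(absorbance=0.42, molar_absorptivity=120.0, path_length=2.0e-3)
```

In the described apparatus the slug length itself comes from the measured suction velocity times the transit time between the two detection points.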
Bayesian bivariate generalized Lindley model for survival data with a cure fraction.
Martinez, Edson Z; Achcar, Jorge A
2014-11-01
Cure fraction models have been widely used to analyze survival data in which a proportion of the individuals is not susceptible to the event of interest. In this article, we introduce a bivariate model for survival data with a cure fraction based on the three-parameter generalized Lindley distribution. The joint distribution of the survival times is obtained by using copula functions. We consider three types of copula function models: the Farlie-Gumbel-Morgenstern (FGM), Clayton and Gumbel-Barnett copulas. The model is implemented under a Bayesian framework, where parameter estimation is based on Markov chain Monte Carlo (MCMC) techniques. To illustrate the utility of the model, we consider an application to a real data set from an invasive cervical cancer study. PMID:25123102
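Of the three copula families named in the abstract, the FGM copula has the simplest closed form. Below is a sketch of that copula and of the generic copula-based joint-survival construction; the paper's actual margins (generalized Lindley with a cure fraction) are not reproduced here.

```python
import numpy as np

def fgm_copula(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula, valid for -1 <= theta <= 1:
    C(u, v) = u * v * (1 + theta * (1 - u) * (1 - v))."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def joint_survival(s1, s2, theta):
    """Joint survival of two times with marginal survival functions s1, s2,
    linked by an FGM survival copula (generic construction only)."""
    return fgm_copula(s1, s2, theta)

# Copula boundary identities: C(u, 1) = u and C(u, 0) = 0
u = 0.37
boundary_one = fgm_copula(u, 1.0, 0.5)
boundary_zero = fgm_copula(u, 0.0, 0.5)
```

The boundary identities hold for any valid theta, which is a quick sanity check when implementing a new copula family.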
On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21
NASA Technical Reports Server (NTRS)
Aalfs, David D.
1995-01-01
For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighting of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.
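The classical cross-spectral quantity that this line of work generalizes can be sketched with an FFT. Note this is the ordinary cross-power spectrum of two sequences, not the thesis's joint spectral density itself; the frequency and phase values are illustrative.

```python
import numpy as np

def cross_spectrum(x, y):
    """Cross-power spectrum of two equal-length real sequences via the FFT."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    return X * np.conj(Y) / len(x)

# A frequency shared by both sequences appears as a peak in the magnitude
n = 256
t = np.arange(n)
x = np.sin(2.0 * np.pi * 10.0 * t / n)
y = np.sin(2.0 * np.pi * 10.0 * t / n + 0.5)      # same frequency, phase-shifted
s = np.abs(cross_spectrum(x, y))
peak_bin = int(np.argmax(s))
```

The phase of the complex cross-spectrum at the peak bin recovers the relative phase shift between the two sequences.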
A composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews
Liu, Yulun; Ning, Jing; Nie, Lei; Zhu, Hongjian; Chu, Haitao
2014-01-01
Diagnostic systematic review is a vital step in the evaluation of diagnostic technologies. In many applications, it involves pooling pairs of sensitivity and specificity of a dichotomized diagnostic test from multiple studies. We propose a composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews. This method provides an alternative way to make inferences on diagnostic measures such as sensitivity, specificity, likelihood ratios and the diagnostic odds ratio. Its main advantages over the standard likelihood method are the avoidance of the non-convergence problem, which is non-trivial when the number of studies is relatively small, its computational simplicity, and some robustness to model mis-specification. Simulation studies show that the composite likelihood method maintains high relative efficiency compared to the standard likelihood method. We illustrate our method in a diagnostic review of the performance of contemporary diagnostic imaging technologies for detecting metastases in patients with melanoma. PMID:25512146
Knowles, Emma E. M.; McKay, D. Reese; Kent, Jack W.; Sprooten, Emma; Carless, Melanie A.; Curran, Joanne E.; de Almeida, Marcio A. A.; Dyer, Thomas D.; Göring, Harald H. H.; Olvera, Rene; Duggirala, Ravi; Fox, Peter; Almasy, Laura; Blangero, John; Glahn, David C.
2014-01-01
The role of the amygdala in emotion recognition is well established, and separately each trait (amygdala volume and emotion recognition ability) has been shown to be highly heritable, but the potential role of common genetic influences on both traits has not been explored. Here we present an investigation of the pleiotropic influences on amygdala volume and emotion recognition in a sample of randomly selected, extended pedigrees (N = 858). Using a combination of univariate and bivariate linkage, we found a pleiotropic region for amygdala volume and emotion recognition on 4q26 (LOD = 4.34). Association analysis conducted in the region underlying the bivariate linkage peak revealed a variant meeting the corrected significance level (pBonferroni = 5.01×10^-5) within an intron of PDE5A (rs2622497, χ2 = 16.67, p = 4.4×10^-5) as being jointly influential on both traits. PDE5A has been implicated previously in recognition-memory deficits and is expressed in subcortical structures thought to underlie memory ability, including the amygdala. The present paper extends our understanding of the shared etiology between amygdala volume and emotion recognition by showing that the overlap between the two traits is due, at least in part, to common genetic influences. Moreover, the present paper identifies a pleiotropic locus for the two traits and an associated variant, which localizes the genetic signal even more precisely. These results, when taken in the context of previous research, highlight the potential utility of PDE5 inhibitors for ameliorating emotion-recognition deficits in populations including, but not exclusively, individuals suffering from mental or neurodegenerative illness. PMID:25322361
NASA Astrophysics Data System (ADS)
Requena, Ana; Prosdocimi, Ilaria; Kjeldsen, Thomas R.; Mediero, Luis
2014-05-01
Flood frequency analyses based on stationarity assumptions are usually employed for estimating design floods, although more complex non-stationary approaches are increasingly being incorporated with the aim of improving such estimates. In this study, the effect of changing urbanisation on maximum flood peak (Q) and volume (V) series is analysed. The potential changes in an urbanised catchment and in a nearby, hydrologically similar rural catchment in northwest England are investigated. The urbanised catchment is characterised by a noticeable increase in urbanisation level over time, while the rural catchment has not been altered by anthropogenic actions. Winter, summer and annual maximum flood events are studied. To analyse changes in time, two non-overlapping time windows are defined, covering the periods 1976-1992 and 1993-2008, respectively. A preliminary analysis of temporal trends in Q, V and Kendall's tau was performed visually and then formally tested by a resampling procedure. Differences were found among winter, summer and annual maximum flood events. As annual maximum flood events are commonly used for design purposes, the corresponding bivariate distribution (margins and copula) was obtained for the different time windows. Trends across the two time windows were analysed by comparing bivariate return period curves in the Q-V space. Different behaviours were found depending on the catchment. The application of the proposed methodology thus provides useful information for describing changes in flood events with regard to different flood variables and their relationship. In addition, the methodology can inform practitioners of the potential changes connected with urbanisation for appropriate design flood estimation.
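A point on a bivariate return period curve of the kind compared in the study above can be computed from a fitted copula via the standard AND-event relation T = mu / (1 - F_Q(q) - F_V(v) + C(F_Q(q), F_V(v))). The Gumbel-Hougaard family and theta = 2 below are illustrative choices, not the study's fitted model.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula, theta >= 1 (illustrative family choice)."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period_and(u, v, theta, mu=1.0):
    """Return period (in units of mu, the mean interarrival time) of the
    AND event {Q > q, V > v}: T = mu / (1 - F_Q - F_V + C(F_Q, F_V))."""
    return mu / (1.0 - u - v + gumbel_copula(u, v, theta))

# 10-year marginal quantiles (u = v = 0.9) with moderate positive dependence
T_dep = joint_return_period_and(0.9, 0.9, theta=2.0)
T_ind = joint_return_period_and(0.9, 0.9, theta=1.0)   # theta = 1 is independence
```

Stronger positive Q-V dependence pulls the AND-event return period down toward the marginal value, which is why curves fitted on different time windows can diverge noticeably.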
A bivariate mixture model analysis of body weight and ascites traits in broilers.
Zerehdaran, S; van Grevehof, E M; van der Waaij, E H; Bovenhuis, H
2006-01-01
The objective of the present study was to use bivariate mixture models to study the relationships between body weight (BW) and ascites indicator traits. Existing data were used from an experiment in which birds were housed in 2 groups under different climate conditions. In the first group, BW, the ratio of right ventricular weight to total ventricular weight (RV:TV), and hematocrit value (HCT) were measured in 4,202 broilers under cold conditions; in the second group, the same traits were measured in 795 birds under normal temperature conditions. Cold-stress conditions were applied to identify individuals that were susceptible to ascites. The RV:TV and HCT were approximately normally distributed under normal temperature conditions, whereas the distributions of these traits were skewed under cold temperature conditions, suggesting different underlying distributions. Fitting a bivariate mixture model to the observations showed that there was only one homogeneous population for ascites traits under normal temperature conditions, whereas there was a mixture of two distributions under cold conditions: one containing nonascitic birds and the other containing ascitic birds. In the distribution of nonascitic birds, the inferred phenotypic correlations (phenotypic correlations computed while distinguishing the two underlying distributions) of BW with RV:TV and HCT were close to zero (0.10 and -0.07, respectively), whereas in the distribution of ascitic birds, the inferred phenotypic correlations of BW with RV:TV and HCT were negative (-0.39 and -0.40, respectively). The negative inferred correlations of BW with RV:TV and HCT in the distribution of ascitic birds resulted in negative overall correlations (correlations computed without distinguishing the two distributions) of BW with RV:TV (-0.30) and HCT (-0.37) under cold conditions. The present results indicate that the overall correlations between BW and ascites traits depend on the relative frequency of ascitic and nonascitic birds.
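The abstract's central observation, that pooling two subpopulations with different within-group correlations can flip the overall correlation, is easy to reproduce numerically. All numbers below are hypothetical illustrations, not the broiler data.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_group(n, mean, corr):
    """Bivariate normal sample with unit variances and the given correlation."""
    cov = [[1.0, corr], [corr, 1.0]]
    return rng.multivariate_normal(mean, cov, size=n)

# Hypothetical subpopulations: near-zero correlation in the 'healthy' group,
# negative correlation (and shifted means) in the 'ascitic' group
healthy = sample_group(4000, mean=[0.0, 0.0], corr=0.1)
ascitic = sample_group(1000, mean=[-1.5, 2.0], corr=-0.4)
pooled = np.vstack([healthy, ascitic])

r_healthy = np.corrcoef(healthy.T)[0, 1]
r_ascitic = np.corrcoef(ascitic.T)[0, 1]
r_pooled = np.corrcoef(pooled.T)[0, 1]   # dragged negative by the mixture
```

The pooled correlation is driven negative both by the ascitic group's within-group correlation and by the offset between the group means, mirroring the abstract's "overall" versus "inferred" distinction.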
Shi, Runhua; McLarty, Jerry W
2009-10-01
In this article, we introduced basic concepts of statistics, types of distributions, and descriptive statistics, and provided a few examples. The basic concepts presented herein are only a fraction of those related to descriptive statistics, and there are many commonly used distributions not presented herein, such as the Poisson distribution for rare events and the exponential, F, and logistic distributions. More information can be found in many statistics books and publications. PMID:19891281
ERIC Educational Resources Information Center
Petocz, Peter; Sowey, Eric
2008-01-01
As a branch of knowledge, Statistics is ubiquitous and its applications can be found in (almost) every field of human endeavour. In this article, the authors track down the possible source of the link between the "Siren song" and applications of Statistics. Answers to their previous five questions and five new questions on Statistics are presented.
ERIC Educational Resources Information Center
Callamaras, Peter
1983-01-01
This buyer's guide to seven major types of statistics software packages for microcomputers reviews Edu-Ware Statistics 3.0; Financial Planning; Speed Stat; Statistics with DAISY; Human Systems Dynamics package of Stats Plus, ANOVA II, and REGRESS II; Maxistat; and Moore-Barnes' MBC Test Construction and MBC Correlation. (MBR)
ERIC Educational Resources Information Center
Meyer, Donald L.
Bayesian statistical methodology and its possible uses in the behavioral sciences are discussed in relation to the solution of problems in both the use and teaching of fundamental statistical methods, including confidence intervals, significance tests, and sampling. The Bayesian model explains these statistical methods and offers a consistent…
NASA Technical Reports Server (NTRS)
Bernauer, E. M.; Walby, W. F.; Ertl, A. C.; Dempster, P. T.; Bond, M.; Greenleaf, J. E.
1994-01-01
To determine if daily isotonic exercise or isokinetic exercise training coupled with daily leg proprioceptive training would influence leg proprioceptive tracking responses during bed rest (BR), 19 men (36 +/- SD 4 years, 178 +/- 7 cm, 76.8 +/- 7.8 kg) were allocated into a no-exercise (NOE) training control group (n = 5), and isotonic exercise (ITE, n = 7) and isokinetic exercise (IKE, n = 7) training groups. Exercise training was conducted during BR for two 30-min periods/d, 5 d/week. Only the IKE group performed proprioceptive training, using a new isokinetic procedure with each lower extremity for 2.5 min before and after the daily exercise training sessions; proprioceptive testing occurred weekly for all groups. There were no significant differences in proprioceptive tracking scores, expressed as a percentage of the perfect score of 100, in the pre-BR ambulatory control period between the three groups. Knee extension and flexion tracking responses were unchanged with NOE during BR, but were significantly greater (*p less than 0.05) at the end of BR in both exercise groups when compared with NOE responses (extension: NOE 80.7 +/- 0.7%, ITE 82.9 +/- 0.6%, IKE 86.5* +/- 0.7%; flexion: NOE 77.6 +/- 1.5%, ITE 80.0 +/- 0.8% (NS), IKE 83.6* +/- 0.8%). Although proprioceptive tracking was unchanged during BR with NOE, both isotonic exercise training (without additional proprioceptive training) and especially isokinetic exercise training combined with daily proprioceptive training significantly improved knee proprioceptive tracking responses after 30 d of BR.
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Lee, P. L.; Ellis, S.; Selzer, R. H.; Ortendahl, D. A.
1994-01-01
Magnetic resonance imaging (MRI) was used to compare the effect of two modes of lower-extremity exercise training on the mass (volume) of the posterior leg group (PLG) muscles (soleus, flexor hallucis longus, tibialis posterior, lateral and medial gastrocnemius, and flexor digitorum longus) in 19 men (ages 32-42 years) subjected to intense dynamic-isotonic (ITE, cycle ergometer, number of subjects (N) = 7), isokinetic (IKE, torque ergometer, N = 7), and no exercise (NOE, N = 5) training for 60 min/day during head-down bed rest (HDBR). Total volume of the PLG muscles decreased (p less than 0.05) similarly: ITE = 4.3 +/- SE 1.6%, IKE = 7.7 +/- 1.6%, and NOE = 6.3 +/- 0.8%; the combined volume loss (N = 19) was 6.1 +/- 0.9%. Ranges of volume changes were +2.6% to -9.0% (ITE), -2.1% to -14.9% (IKE), and -3.4% to -8.1% (NOE). Correlation coefficients (r) of muscle volume versus thickness measured with ultrasonography were: ITE r = 0.79 (p less than 0.05), IKE r = 0.27 (not significant (NS)), and NOE r = 0.63 (NS). Leg-muscle volume and thickness were highly correlated (r = 0.79) when plasma volume was maintained during HDBR with ITE. Thus, neither intensive lower-extremity ITE nor IKE training influenced the normal non-exercised posterior leg muscle atrophy during HDBR. The relationship of muscle volume and thickness may depend on the mode of exercise training associated with the maintenance of plasma volume.
NASA Technical Reports Server (NTRS)
Fitts, R. H.; Hurst, J. E.; Norenberg, K. M.; Widrick, J. J.; Riley, D. A.; Bain, J. L. W.; Trappe, S. W.; Trappe, T. A.; Costill, D. L.
1999-01-01
Exposure to microgravity, or to models designed to mimic the unloaded condition such as bed rest in humans and hindlimb unloading (HU) in rats, leads to skeletal muscle atrophy, a loss in peak force and power, and an increased susceptibility to fatigue. The posterior compartment muscles of the lower leg (calf muscle group) appear to be particularly susceptible. Following only 1 wk in space or HU, rat soleus muscle showed a 30 to 40% loss in wet weight. After 3 wk of HU, almost all of the atrophied soleus fibers showed a significant increase in maximal shortening velocity (V(sub 0)), while only 25 to 30% actually transitioned to fast fibers. The increased V(sub 0) was protective in that it reduced the decline in peak power associated with the reduced peak force. When the soleus is stimulated in situ following HU or zero-g, one observes an increased rate and extent of fatigue, and in the former the increased fatigue is associated with a more rapid depletion of muscle glycogen and lactate production. Our working hypothesis is that following HU or spaceflight in rats and bed rest or spaceflight in humans, limb skeletal muscles depend more on carbohydrates and less on fatty acids for their substrate supply during contractile activity. Baldwin et al. found 9 days of spaceflight to reduce by 37% the ability of both the high- and low-oxidative regions of the vastus muscle to oxidize long-chain fatty acids. This decline was not associated with any change in the enzymes of the tricarboxylic acid cycle or oxidation pathway. The purpose of the current research was to establish the extent of functional change in the slow type I and fast type II fibers of the human calf muscle following 17 days of spaceflight, and to determine the cellular mechanisms of the observed changes. A second goal was to study the effectiveness of high-resistance isotonic and isometric exercise in preventing the deleterious functional changes associated with unloading.
Petrovic, Igor; Hip, Ivan; Fredlund, Murray D
2016-09-01
The variability of the shear strength parameters of untreated municipal solid waste (MSW), namely cohesion and shear friction angle, is of primary concern for waste stability problems due to the strong heterogeneity of MSW. A large number of MSW shear strength parameters (friction angle and cohesion) were collected from published literature and analyzed. Basic statistical analysis showed that the central tendency of both shear strength parameters fits reasonably well within the ranges of recommended values proposed by different authors. In addition, it was established that the correlation between shear friction angle and cohesion is not strong, but still significant. Through a distribution fitting method it was found that the shear friction angle can be fitted by a normal probability density function, while cohesion follows a log-normal density function. The continuous normal-lognormal bivariate density function was therefore selected as an adequate model to ascertain rational boundary values ("confidence interval") for MSW shear strength parameters. It was concluded that a curve with a 70% confidence level generates a "confidence interval" within reasonable limits. With respect to the decomposition stage of the waste material, three different ranges of appropriate shear strength parameters were indicated. The defined parameters were then used as input parameters for an Alternative Point Estimate Method (APEM) stability analysis on a real case scenario, the Jakusevec landfill, the disposal site of Zagreb, the capital of Croatia. The analysis shows that in the case of a dry landfill the most significant factor influencing the safety factor was the shear friction angle of old, decomposed waste material, while in the case of a landfill with a significant leachate level the most significant factor was the cohesion of old, decomposed waste material.
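Sampling from a correlated normal-lognormal bivariate model of the kind selected above can be sketched with a Gaussian copula over the two standard-normal scores. All moments and the correlation below are illustrative values, not those fitted to the MSW data.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_phi_c(n, mu_phi=25.0, sd_phi=7.0, mu_lnc=2.5, sd_lnc=0.8, rho=0.3):
    """Friction angle ~ Normal, cohesion ~ Log-normal, linked by a Gaussian
    copula through correlated standard-normal scores. All parameter defaults
    are hypothetical."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    phi = mu_phi + sd_phi * z[:, 0]           # friction angle, degrees
    c = np.exp(mu_lnc + sd_lnc * z[:, 1])     # cohesion, kPa (always positive)
    return phi, c

phi, c = sample_phi_c(50_000)
```

Samples like these are the natural input for a point-estimate or Monte Carlo stability analysis, since the log-normal marginal keeps every sampled cohesion strictly positive.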
NASA Astrophysics Data System (ADS)
Masud, M. B.; Khaliq, M. N.; Wheater, H. S.
2015-03-01
This study is focused on the Saskatchewan River Basin (SRB) that spans southern parts of Alberta, Saskatchewan and Manitoba, the three Prairie Provinces of Canada, where most of the country's agricultural activities are concentrated. The SRB is confronted with immense water-related challenges and is now one of the ten GEWEX (Global Energy and Water Exchanges) Regional Hydroclimate Projects in the world. In the past, various multi-year droughts have been observed in this part of Canada that impacted agriculture, energy and socio-economic sectors. Therefore, proper understanding of the spatial and temporal characteristics of historical droughts is important for many water resources planning and management related activities across the basin. In this study, observed gridded data of daily precipitation and temperature and conventional univariate and copula-based bivariate frequency analyses are used to characterize drought events in terms of drought severity and duration on the basis of two drought indices, the Standardized Precipitation Index (SPI) and the Standardized Precipitation Evapotranspiration Index (SPEI). Within the framework of univariate and bivariate analyses, drought risk indicators are developed and mapped across the SRB to delineate the most vulnerable parts of the basin. Based on the results obtained, southern parts of the SRB (i.e., western part of the South Saskatchewan River, Seven Persons Creek and Bigstick Lake watersheds) are associated with a higher drought risk, while moderate risk is noted for the North Saskatchewan River (except its eastern parts), Red Deer River, Oldman River, Bow River, Sounding Creek, Carrot River and Battle River watersheds. Lower drought risk is found for the areas surrounding the Saskatchewan-Manitoba border (particularly, the Saskatchewan River watershed). It is also found that the areas characterized by higher drought severity are also associated with higher drought duration. A comparison of SPI- and SPEI
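The SPI at the core of the analysis above can be sketched as a gamma-fit-then-normal-quantile transform of an accumulated precipitation series. This sketch omits the zero-precipitation handling of the full SPI definition, and the synthetic series is purely illustrative.

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Standardized Precipitation Index: fit a gamma distribution to the
    accumulated precipitation series, then map its CDF through the standard
    normal quantile function."""
    precip = np.asarray(precip, float)
    a, _, scale = stats.gamma.fit(precip, floc=0.0)        # location fixed at 0
    u = stats.gamma.cdf(precip, a, loc=0.0, scale=scale)
    return stats.norm.ppf(np.clip(u, 1e-6, 1.0 - 1e-6))    # avoid infinite tails

rng = np.random.default_rng(3)
monthly = rng.gamma(shape=2.0, scale=30.0, size=360)       # 30 synthetic years
z = spi(monthly)                                           # approx. N(0, 1)
```

Drought events are then extracted as runs where the index stays below a threshold (commonly -1), which yields the severity and duration pairs used in the bivariate analysis.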
NASA Astrophysics Data System (ADS)
Subramaniyam, Narayan Puthanmadam; Hyttinen, Jari
2015-02-01
Recently, Andrzejak et al. combined randomness and nonlinear independence tests with iterative amplitude adjusted Fourier transform (iAAFT) surrogates to distinguish between the dynamics of seizure-free intracranial electroencephalographic (EEG) signals recorded from epileptogenic (focal) and nonepileptogenic (nonfocal) brain areas of epileptic patients. However, stationarity is part of the null hypothesis for iAAFT surrogates, and thus nonstationarity can violate the null hypothesis. In this work we first propose the application of randomness and nonlinear independence tests based on recurrence network measures to distinguish between the dynamics of focal and nonfocal EEG signals. Furthermore, we combine these tests with both the iAAFT and the truncated Fourier transform (TFT) surrogate methods; the latter preserves the nonstationarity of the original data in the surrogates along with its linear structure. Our results indicate that focal EEG signals exhibit an increased degree of structural complexity and interdependency compared to nonfocal EEG signals. In general, we find higher rejections of the randomness and nonlinear independence tests for focal EEG signals than for nonfocal EEG signals. In particular, the univariate recurrence network measures, the average clustering coefficient C and assortativity R, and the bivariate recurrence network measure, the average cross-clustering coefficient Ccross, can successfully distinguish between the focal and nonfocal EEG signals, even when the analysis is restricted to nonstationary signals, irrespective of the type of surrogates used. On the other hand, we find that the univariate recurrence network measures, the average path length L and the average betweenness centrality BC, fail to distinguish between the focal and nonfocal EEG signals when iAAFT surrogates are used. However, these two measures can distinguish between focal and nonfocal EEG signals when TFT surrogates are used for nonstationary signals. We also
Kogalovskii, M.R.
1995-03-01
This paper presents a review of problems related to statistical database systems, which are widespread in various fields of activity. Statistical databases (SDBs) are databases used for statistical analysis. Topics under consideration are: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored data compression techniques, and statistical data representation means. Also examined is whether present Database Management Systems (DBMSs) satisfy SDB requirements. Some current research directions in SDB systems are considered.
Smith, Alwyn
1969-01-01
This paper is based on an analysis of questionnaires sent to the health ministries of Member States of WHO asking for information about the extent, nature, and scope of morbidity statistical information. It is clear that most countries collect some statistics of morbidity and many countries collect extensive data. However, few countries relate their collection to the needs of health administrators for information, and many countries collect statistics principally for publication in annual volumes which may appear anything up to 3 years after the year to which they refer. The desiderata of morbidity statistics may be summarized as reliability, representativeness, and relevance to current health problems. PMID:5306722
Omouessi, S T; Lemamy, G J; Kiki-Mvouaka, S; Fernette, B; Falconetti, C; Ndeboko, B; Mouecoucou, J; Thornton, S N
2016-02-01
In the course of exposure to fluid deprivation and a heated environment, mammals regulate their hydromineral balance and body temperature by a number of mechanisms, including sweating and water and salt intake. Here we challenged obese Zucker rats, known to have a predisposition to hypertension, with 0.9% NaCl alone or with 2% NaCl solution + water to drink under fluid deprivation and heated conditions. Food and fluid intakes, body weight, diuresis and natriuresis were measured daily throughout. Serum aldosterone levels and Na(+) concentration were also analyzed. Data showed that obese and lean rats presented similar baseline measurements of food, 0.9% NaCl and fluid intakes, diuresis and fluid balance, whereas hypertonic 2% NaCl consumption was almost absent. Before and during fluid deprivation, animals increased isotonic but not hypertonic NaCl intake; the obese rats showed significant increases in diuresis and Na(+) excretion, whereas total fluid intake was similar between groups. Heat increased isotonic NaCl intake and doubled natriuresis in obese rats, which were wet on their fur and displayed a paradoxical increase in fluid gain. Fluid deprivation plus heat produced a similar negative fluid balance in all groups. Body weight losses and reductions in food intake and diuresis were amplified under the combined conditions. Animals exposed to 2% NaCl showed higher circulating levels of aldosterone, with lower levels in obese than in lean rats. In animals which drank 0.9% NaCl, obese rats showed higher serum levels of Na(+) than leans. We conclude that, in spite of their higher sensitivity to high salt and heat, obese Zucker rats can control hydromineral balance in response to fluid deprivation and heat by adjusting isotonic NaCl preference with sodium balance and circulating levels of aldosterone. This suggests a key hormonal role in the mechanisms underlying thermoregulation, body fluid homeostasis and sodium intake. PMID:26621332
NASA Astrophysics Data System (ADS)
Shin, J.; Joo, K.; Park, J.; Heo, J.
2011-12-01
The copula model is broadly applied and studied in the hydrological field, as it is easier and more flexible for constructing multivariate models than conventional multivariate distributions. For this reason, and because a copula is itself a kind of distribution, the copula model is frequently studied for multivariate frequency analysis in hydrology. When the copula model is applied for frequency analysis, however, choosing an applicable copula model remains difficult. In this study, the Cramer-von Mises and Kolmogorov-Smirnov tests suggested by Genest et al. (2009) are applied to detect applicable copula models. For estimation of the copula parameter, the maximum pseudo-likelihood estimation method and Kendall's tau estimation method are applied. Rainfall data recorded at five weather stations (Seoul, Chuncheon, Gangneung, Wonju, and Chungju), managed by the Korea Meteorological Administration (KMA), are used for frequency analysis. For the bivariate frequency analysis, amount (total depth) and duration are selected. The Frank, Gumbel-Hougaard, Joe, and Clayton families are applied (Joe, 1997; Nelsen, 1999). The null hypothesis is rejected at the 5% significance level, and the rejection rates and p-values of the copulas are compared.
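Kendall's-tau estimation of a copula parameter relies on closed-form tau-theta relations for some of the families named above. Below is a sketch for the Clayton and Gumbel-Hougaard families (the Frank and Joe relations lack such simple closed forms); the sample tau routine is a plain O(n^2) pair count.

```python
import numpy as np

def kendalls_tau(x, y):
    """Sample Kendall's tau by explicit pair counting (O(n^2); fine for a sketch)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    s = 0.0
    for i in range(n):
        s += np.sum(np.sign(x[i] - x[i + 1:]) * np.sign(y[i] - y[i + 1:]))
    return 2.0 * s / (n * (n - 1))

# Closed-form tau-theta inversions (standard results, not station-specific):
def clayton_theta(tau):
    return 2.0 * tau / (1.0 - tau)        # Clayton: tau = theta / (theta + 2)

def gumbel_theta(tau):
    return 1.0 / (1.0 - tau)              # Gumbel-Hougaard: tau = 1 - 1/theta

theta_clayton = clayton_theta(0.5)        # both families give theta = 2 at tau = 0.5
theta_gumbel = gumbel_theta(0.5)
```

Plugging the sample tau of the observed (depth, duration) pairs into these inversions gives the moment-style parameter estimates that the abstract's Kendall's tau method refers to.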
The Bivariate Luminosity--HI Mass Distribution Function of Galaxies based on the NIBLES Survey
NASA Astrophysics Data System (ADS)
Butcher, Zhon; Schneider, Stephen E.; van Driel, Wim; Lehnert, Matt
2016-01-01
We use 21cm HI line observations for 2610 galaxies from the Nançay Interstellar Baryons Legacy Extragalactic Survey (NIBLES) to derive a bivariate luminosity--HI mass distribution function. Our HI survey was selected to randomly probe the local (900 < cz < 12,000 km/s) galaxy population in each 0.5 mag wide bin for the absolute z-band magnitude range of -13.5 < Mz < -24 without regard to morphology or color. This targeted survey allowed more on-source integration time for weak and non-detected sources, enabling us to probe lower HI mass fractions and apply lower upper limits for non-detections than would be possible with the larger blind HI surveys. Additionally, we obtained a factor of four higher sensitivity follow-up observations at Arecibo of 90 galaxies from our non-detected and marginally detected categories to quantify the underlying HI distribution of sources not detected at Nançay. Using the optical luminosity function and our higher sensitivity follow up observations as priors, we use a 2D stepwise maximum likelihood technique to derive the two dimensional volume density distribution of luminosity and HI mass in each SDSS band.
IDF relationships using bivariate copula for storm events in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Ariff, N. M.; Jemain, A. A.; Ibrahim, K.; Wan Zin, W. Z.
2012-11-01
Intensity-duration-frequency (IDF) curves are used in many hydrologic designs for the purposes of water management and flood prevention. The IDF curves available in Malaysia are those obtained from a univariate analysis approach, which only considers the intensity of rainfall at fixed time intervals. As several rainfall variables are correlated with each other, such as intensity and duration, this paper aims to derive IDF points for storm events in Peninsular Malaysia by means of bivariate frequency analysis. This is achieved by utilizing the relationship between storm intensities and durations using the copula method. Four types of copulas, namely the Ali-Mikhail-Haq (AMH), Frank, Gaussian and Farlie-Gumbel-Morgenstern (FGM) copulas, are considered because the correlation between storm intensity, I, and duration, D, is negative, and these copulas are appropriate when the relationship between the variables is negative. The correlations are obtained by means of Kendall's τ estimation. The analysis was performed on twenty rainfall stations with hourly data across Peninsular Malaysia. Using Akaike's Information Criterion (AIC) for testing goodness-of-fit, both the Frank and Gaussian copulas are found to be suitable to represent the relationship between I and D. The IDF points found by the copula method are compared to the IDF curves yielded by the typical IDF empirical formula of the univariate approach. This study indicates that storm intensities obtained from both methods are in agreement with each other for any given storm duration and for various return periods.
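To illustrate how Kendall's τ estimation and AIC-based comparison fit together, here is a hedged Python sketch using the FGM copula (whose density is linear in its parameter) against the independence copula. The data are synthetic with negative dependence, not the Malaysian station records, and a single-family comparison merely stands in for the four-family selection in the paper:

```python
import numpy as np
from scipy.stats import kendalltau, rankdata

# Hypothetical (intensity, duration) sample with negative dependence;
# synthetic data, not the hourly Malaysian station records
rng = np.random.default_rng(0)
dur = rng.gamma(2.0, 4.0, 300)
inten = rng.gamma(3.0, 2.0, 300) / np.sqrt(dur)   # intensity falls with duration

# Pseudo-observations (normalized ranks) on (0, 1)
u = rankdata(inten) / (len(inten) + 1)
v = rankdata(dur) / (len(dur) + 1)

# FGM copula density c(u,v) = 1 + theta*(1-2u)*(1-2v), with tau = 2*theta/9,
# so the tau-based estimate is theta = 4.5*tau, clipped to the valid range
tau, _ = kendalltau(inten, dur)
theta = float(np.clip(4.5 * tau, -1.0, 1.0))
loglik = np.sum(np.log(1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v)))

aic_fgm = 2.0 * 1 - 2.0 * loglik   # one fitted parameter
aic_indep = 0.0                    # independence copula: density 1, loglik 0
```

A lower AIC for the FGM copula than for independence indicates that modeling the negative I-D dependence is worthwhile; the same comparison extends to the AMH, Frank and Gaussian families with their respective densities.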
Ayuso, Mercedes; Bermúdez, Lluís; Santolino, Miguel
2016-04-01
The analysis of factors influencing the severity of the personal injuries suffered by victims of motor accidents is an issue of major interest. Yet, most of the extant literature has tended to address this question by focusing on either the severity of temporary disability or the severity of permanent injury. In this paper, a bivariate copula-based regression model for temporary disability and permanent injury severities is introduced for the joint analysis of the relationship with the set of factors that might influence both categories of injury. Using a motor insurance database with 21,361 observations, the copula-based regression model is shown to give a better performance than that of a model based on the assumption of independence. The inclusion of the dependence structure in the analysis has a higher impact on the variance estimates of the injury severities than it does on the point estimates. By taking into account the dependence between temporary and permanent severities, a more extensive factor analysis can be conducted. We illustrate that the conditional distribution functions of injury severities may be estimated, thus providing decision makers with valuable information. PMID:26871615
Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.
2012-01-01
Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.
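A common way to build intuition for such data is the trivariate-reduction construction of a bivariate Poisson with added structural zeros. The following sketch (illustrative parameters only; this is not the semiparametric hierarchical Bayes model of the paper) shows how excess double zeros and positive cross-correlation arise together:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
lam1, lam2, lam0, pi = 2.0, 3.0, 1.0, 0.3  # hypothetical rates, zero-inflation prob

# Trivariate reduction: a shared Poisson component induces correlation
z0 = rng.poisson(lam0, n)
y1 = rng.poisson(lam1, n) + z0
y2 = rng.poisson(lam2, n) + z0

# Zero inflation: with probability pi an observation is a structural (0, 0),
# e.g. a site where neither species can occur
zero = rng.random(n) < pi
y1[zero] = 0
y2[zero] = 0

prop_double_zero = np.mean((y1 == 0) & (y2 == 0))
```

The double-zero proportion far exceeds what independent Poisson margins would produce, which is exactly the feature that univariate zero-inflated models cannot capture jointly.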
Cheng, Xiaoya; Shaw, Stephen B; Marjerison, Rebecca D; Yearick, Christopher D; DeGloria, Stephen D; Walter, M Todd
2014-05-01
Predicting runoff producing areas and their corresponding risks of generating storm runoff is important for developing watershed management strategies to mitigate non-point source pollution. However, few methods for making these predictions have been proposed, especially operational approaches that would be useful in areas where variable source area (VSA) hydrology dominates storm runoff. The objective of this study is to develop a simple approach to estimate spatially-distributed risks of runoff production. By considering the development of overland flow as a bivariate process, we incorporated both rainfall and antecedent soil moisture conditions into a method for predicting VSAs based on the Natural Resource Conservation Service-Curve Number equation. We used base-flow immediately preceding storm events as an index of antecedent soil wetness status. Using nine sub-basins of the Upper Susquehanna River Basin, we demonstrated that our estimated runoff volumes and extent of VSAs agreed with observations. We further demonstrated a method for mapping these areas in a Geographic Information System using a Soil Topographic Index. The proposed methodology provides a new tool for watershed planners for quantifying runoff risks across watersheds, which can be used to target water quality protection strategies. PMID:24632403
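The Curve Number relation the method builds on can be sketched as follows (standard NRCS form with Ia = 0.2S; the paper's baseflow-indexed antecedent-moisture adjustment is not reproduced here, and a higher CN simply stands in for wetter antecedent conditions):

```python
def scs_runoff(p_inches: float, cn: float) -> float:
    """Direct runoff (inches) from the NRCS Curve Number equation.

    p_inches: event rainfall depth; cn: curve number (0 < cn <= 100).
    Uses the conventional initial abstraction Ia = 0.2 * S.
    """
    s = 1000.0 / cn - 10.0      # potential maximum retention (inches)
    ia = 0.2 * s                # initial abstraction
    if p_inches <= ia:
        return 0.0              # no runoff until abstraction is satisfied
    return (p_inches - ia) ** 2 / (p_inches - ia + s)

# Wetter antecedent conditions (higher effective CN) increase runoff
# for the same storm depth
wet = scs_runoff(3.0, 85)
dry = scs_runoff(3.0, 70)
```

Treating rainfall depth and antecedent wetness (via baseflow) as the two coordinates of a bivariate process then yields a spatially distributed runoff risk for each storm.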
Bivariate Frequency Analysis with Nonstationary Gumbel/GEV Marginal Distributions for Rainfall Event
NASA Astrophysics Data System (ADS)
Joo, Kyungwon; Kim, Sunghun; Kim, Hanbeen; Ahn, Hyunjun; Heo, Jun-Haeng
2016-04-01
Multivariate frequency analysis of hydrological data has been developing rapidly in recent years. In particular, the copula model has been used as an effective method that imposes no limitation on the choice of marginal distributions. Time-series rainfall data can be partitioned into rainfall events by an inter-event time definition, and each rainfall event has a rainfall depth and duration. In addition, changes in rainfall depth have been studied recently due to climate change. Nonstationary (time-varying) Gumbel and Generalized Extreme Value (GEV) distributions have been developed, and their performance has been investigated in many studies. In the current study, bivariate frequency analysis is performed for rainfall depth and duration using an Archimedean copula on stationary and nonstationary hourly rainfall data to consider the effect of climate change. The parameter of the copula model is estimated by the inference function for margins (IFM) method, and stationary/nonstationary Gumbel and GEV distributions are used for the marginal distributions. As a result, level curves of the copula model are obtained, and goodness-of-fit tests are performed to choose the appropriate marginal distribution among the applied stationary and nonstationary Gumbel and GEV distributions.
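A nonstationary Gumbel marginal of the kind described, with a linear trend in the location parameter, can be fit by maximum likelihood in a few lines. A sketch on synthetic data (the trend, scale, and sample size are illustrative assumptions, not values from the study):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

# Synthetic annual rainfall depths with a linear trend in the Gumbel location
rng = np.random.default_rng(1)
t = np.arange(60, dtype=float)
x = gumbel_r.rvs(loc=30.0 + 0.25 * t, scale=8.0, random_state=rng)

def nll(params):
    """Negative log-likelihood of a Gumbel with mu(t) = mu0 + mu1*t."""
    mu0, mu1, log_sigma = params
    return -gumbel_r.logpdf(x, loc=mu0 + mu1 * t,
                            scale=np.exp(log_sigma)).sum()

res = minimize(nll, x0=[x.mean(), 0.0, np.log(x.std())], method="Nelder-Mead")
mu0_hat, mu1_hat = res.x[0], res.x[1]
sigma_hat = np.exp(res.x[2])
```

Under the IFM approach, marginals like this one are fit first and the copula parameter is then estimated from the resulting probability-integral transforms.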
On a bivariate spectral relaxation method for unsteady magneto-hydrodynamic flow in porous media.
Magagula, Vusi M; Motsa, Sandile S; Sibanda, Precious; Dlamini, Phumlani G
2016-01-01
The paper presents a significant improvement to the implementation of the spectral relaxation method (SRM) for solving nonlinear partial differential equations that arise in the modelling of fluid flow problems. Previously, the SRM utilized the spectral method to discretize derivatives in space and finite differences to discretize in time. In this work we seek to improve the performance of the SRM by applying the spectral method to discretize derivatives in both the space and time variables. The new approach combines the relaxation scheme of the SRM, bivariate Lagrange interpolation, and the Chebyshev spectral collocation method. The technique is tested on a system of four nonlinear partial differential equations that model unsteady three-dimensional magneto-hydrodynamic flow and mass transfer in a porous medium. Computed solutions are compared with previously published results obtained using the SRM, the spectral quasilinearization method and the Keller-box method. There is clear evidence that the new approach produces results that are as good as, if not better than, published results determined using the other methods. The main advantage of the new approach is that it offers better accuracy on coarser grids, which significantly improves the computational speed of the method. The technique also leads to faster convergence to the required solution. PMID:27119059
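The Chebyshev spectral collocation ingredient can be illustrated with the standard Gauss-Lobatto differentiation matrix (Trefethen's construction, not code from the paper); differentiating a smooth function is accurate to near machine precision even on a coarse grid, which is the property the authors exploit in both space and time:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # diagonal: negative row sums
    return D, x

# Spectral accuracy on a coarse grid: d/dx e^x = e^x
D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
```

In the bivariate scheme, one such matrix acts on the space variable and a second on the time variable, replacing the finite-difference time stepping of the original SRM.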
NASA Astrophysics Data System (ADS)
Winahju, W. S.; Mukarromah, A.; Putri, S.
2015-03-01
Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy remains an important issue in Indonesia because its morbidity is quite high. Based on WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, with 18,994 people (8.7% of the world total). This number also places Indonesia as the ASEAN country with the highest leprosy morbidity. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: paucibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of paucibacillary leprosy. This paper discusses modeling the numbers of both multibacillary and paucibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The observational units are in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the result indicates that all predictors have a significant influence.
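As a hedged sketch of the regression machinery involved, the following fits a single marginal Poisson regression by Newton-Raphson on synthetic data; the paper's bivariate model additionally couples the two counts through a shared covariance term, which is not shown here:

```python
import numpy as np

# Synthetic data: one intercept and one standardized predictor -- illustrative
# only, not the East Java environment/demography/poverty covariates
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5])
y = rng.poisson(np.exp(X @ beta_true))

# Newton-Raphson for the Poisson log-likelihood with log link:
# gradient X'(y - mu), Hessian X' diag(mu) X
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * mu[:, None])
    beta = beta + np.linalg.solve(hess, grad)
```

With 500 observations the estimates land close to the generating coefficients; the bivariate extension replaces the single likelihood with the joint bivariate Poisson likelihood over both counts.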
NASA Astrophysics Data System (ADS)
Abbey, Craig K.; Insana, Michael F.; Eckstein, Miguel P.; Boone, John M.
2007-03-01
Validating the use of new imaging technologies for screening large patient populations is an important and very challenging area of diagnostic imaging research. A particular concern in ROC studies evaluating screening technologies is the problem of verification bias, in which an independent verification of disease status is only available for a subpopulation of patients, typically those with positive results by a current screening standard. For example, in screening mammography, a study might evaluate a new approach using a sample of patients that have undergone needle biopsy following a standard mammogram and subsequent work-up. This case sampling approach provides accurate independent verification of ground truth and increases the prevalence of disease cases. However, the selection criteria will likely bias results of the study. In this work we present an initial exploration of an approach to correcting this bias within the parametric framework of binormal assumptions. We posit conditionally bivariate normal distributions on the latent decision variable for both the new methodology as well as the screening standard. In this case, verification bias can be seen as the effect of missing data from an operating point in the screening standard. We examine the magnitude of this bias in the setting of breast cancer screening with mammography, and we derive a maximum likelihood approach to estimating bias corrected ROC curves in this model.
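The binormal ROC parametrization referred to here can be sketched directly. The parameters below are illustrative, not estimates from any mammography data, and the verification-bias correction itself is not reproduced:

```python
import numpy as np
from scipy.stats import norm

# Binormal model: non-diseased scores ~ N(0, 1), diseased ~ N(mu, sigma^2);
# the ROC curve is TPF = Phi(a + b * Phi^{-1}(FPF)), a = mu/sigma, b = 1/sigma
a, b = 1.2, 0.9                      # illustrative parameters, not fitted values
fpf = np.linspace(1e-6, 1.0 - 1e-6, 2001)
tpf = norm.cdf(a + b * norm.ppf(fpf))

# Trapezoidal AUC vs. the closed-form binormal AUC = Phi(a / sqrt(1 + b^2))
auc = np.sum(0.5 * (tpf[1:] + tpf[:-1]) * np.diff(fpf))
auc_closed = norm.cdf(a / np.sqrt(1.0 + b ** 2))
```

In the bias-corrected setting, the latent decision variables of the new test and the screening standard are conditionally bivariate normal, and verification corresponds to observing only cases above an operating point on the standard's axis.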
NASA Astrophysics Data System (ADS)
Su, Su; Kijewski-Correa, Tracy; Pando Balandra, Juan Francisco
2009-03-01
This study focuses on embeddable algorithms that operate within multi-scale wireless sensor networks for damage detection in civil infrastructure systems, in particular the Bivariate Regressive Adaptive INdex (BRAIN), which detects damage in structures by examining changes in the regressive coefficients of time series models. As its name suggests, BRAIN exploits heterogeneous sensor arrays through a data-driven damage sensitive feature (DSF) to enhance detection capability through the use of two types of response data, each with its own unique sensitivities to damage. While previous studies have shown that BRAIN offers more reliable damage detection, a number of factors contributing to its performance are explored herein, including observability, damage proximity/severity, and relative signal strength. These investigations also include an experimental program to determine whether performance is maintained when implementing the approaches in physical systems. The results of these investigations will be used to further verify that the use of heterogeneous sensing enhances the overall detection capability of such data-driven damage metrics.
Performance verification of bivariate regressive adaptive index for structural health monitoring
NASA Astrophysics Data System (ADS)
Su, Su; Kijewski-Correa, Tracy
2007-04-01
This study focuses on data-driven methods for structural health monitoring and introduces a Bivariate Regressive Adaptive INdex (BRAIN) for damage detection in a decentralized, wireless sensor network. BRAIN utilizes a dynamic damage sensitive feature (DSF) that automatically adapts to the data set, extracting the most damage sensitive model features, which vary with location, damage severity, loading condition and model type. This data-driven feature is key to providing the most flexible damage sensitive feature incorporating all available data for a given application to enhance reliability by including heterogeneous sensor arrays. This study will first evaluate several regressive-type models used for time-series damage detection, including common homogeneous formats and newly proposed heterogeneous descriptors and then demonstrate the performance of the newly proposed dynamic DSF against a comparable static DSF. Performance will be validated by documenting their damage success rates on repeated simulations of randomly-excited thin beams with minor levels of damage. It will be shown that BRAIN dramatically increases the detection capabilities over static, homogeneous damage detection frameworks.
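A minimal version of the regressive-coefficient idea behind BRAIN can be sketched as follows: fit autoregressive models to a baseline and a test record and use the coefficient distance as the damage index. The model order, pole shift, and distance metric here are illustrative assumptions, not the published DSF:

```python
import numpy as np

def ar_coeffs(x: np.ndarray, order: int = 4) -> np.ndarray:
    """Least-squares AR(p) coefficients of a series (mean removed)."""
    x = x - x.mean()
    rows = [x[i:len(x) - order + i] for i in range(order)]
    A = np.column_stack(rows[::-1])   # column j holds lag j+1
    b = x[order:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

# Two synthetic AR(2) responses; "damage" shifts one regressive coefficient
rng = np.random.default_rng(0)
e = rng.normal(size=2000)
healthy = np.zeros(2000)
damaged = np.zeros(2000)
for k in range(2, 2000):
    healthy[k] = 1.6 * healthy[k - 1] - 0.8 * healthy[k - 2] + e[k]
    damaged[k] = 1.4 * damaged[k - 1] - 0.8 * damaged[k - 2] + e[k]

# Damage index: distance between regressive coefficients of the two records
dsf = np.linalg.norm(ar_coeffs(healthy) - ar_coeffs(damaged))
```

The bivariate, adaptive aspects of BRAIN extend this by regressing one response type on another and letting the most damage-sensitive coefficients be selected from the data.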
Adaptive Randomization to Improve Utility-Based Dose-Finding with Bivariate Ordinal Outcomes
Nguyen, Hoang Q.
2012-01-01
A sequentially outcome-adaptive Bayesian design is proposed for choosing the dose of an experimental therapy based on elicited utilities of a bivariate ordinal (toxicity, efficacy) outcome. Subject to posterior acceptability criteria to control the risk of severe toxicity and exclude unpromising doses, patients are randomized adaptively among the doses having posterior mean utilities near the maximum. The utility increment used to define near-optimality is non-increasing with sample size. The adaptive randomization uses each dose's posterior probability of a set of good outcomes, defined by a lower utility cut-off. Saturated parametric models are assumed for the marginal dose-toxicity and dose-efficacy distributions, allowing the possible requirement of monotonicity in dose, and a copula is used to obtain a joint distribution. Prior means are computed by simulation using elicited outcome probabilities, and prior variances are calibrated to control prior effective sample size and obtain a design with good operating characteristics. The method is illustrated by a phase I/II trial of radiation therapy for children with brain stem gliomas. PMID:22651115
Bivariate wavelet-based clustering of sea-level and atmospheric pressure time series
NASA Astrophysics Data System (ADS)
Barbosa, Susana; Gouveia, Sonia; Scotto, Manuel; Alonso, Andres
2015-04-01
The atmospheric pressure is responsible for a downward force acting on the sea surface which is compensated, to some extent, by corresponding sea-level variations. The static response of the sea surface can be linearly modelled, a decrease (increase) in atmospheric pressure of 1 mb raising (depressing) sea level by 1 cm. However, the dynamic sea surface response to atmospheric pressure loading, associated with ocean dynamics and wind effects, is scale-dependent and difficult to establish. The present study addresses the co-variability of sea-level and pressure time series in the Baltic Sea from the bivariate analysis of tide gauge and reanalysis records. The time series are normalised by the corresponding standard deviation and the wavelet covariance is computed as a measure of the association between sea-level and pressure across scales. A clustering procedure using a dissimilarity matrix based on the wavelet covariance is then implemented. Different classical clustering techniques, including average, single and complete linkage criteria are applied and the group linkage is selected in order to maximise the dendrogram's goodness-of-fit.
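The final clustering step can be sketched with SciPy's hierarchical clustering applied to a precomputed dissimilarity matrix; the matrix below is invented for illustration and merely stands in for one derived from wavelet covariances between stations:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Hypothetical dissimilarities between four tide-gauge stations, e.g.
# 1 - normalized wavelet covariance (values are illustrative, not Baltic data)
D = np.array([[0.0, 0.2, 0.8, 0.7],
              [0.2, 0.0, 0.9, 0.6],
              [0.8, 0.9, 0.0, 0.3],
              [0.7, 0.6, 0.3, 0.0]])

# Condensed form feeds directly into agglomerative linkage; the "complete"
# criterion is one of the linkage choices compared in the study
Z = linkage(squareform(D), method="complete")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Swapping `method` between "single", "average" and "complete" and inspecting the resulting dendrograms corresponds to the linkage-selection step described above.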
A Basic Bivariate Structure of Personality Attributes Evident Across Nine Languages.
Saucier, Gerard; Thalmayer, Amber Gayle; Payne, Doris L; Carlson, Robert; Sanogo, Lamine; Ole-Kotikash, Leonard; Church, A Timothy; Katigbak, Marcia S; Somer, Oya; Szarota, Piotr; Szirmák, Zsofia; Zhou, Xinyue
2013-01-01
Here, two studies seek to characterize a parsimonious common-denominator personality structure with optimal cross-cultural replicability. Personality differences are observed in all human populations and cultures, but lexicons for personality attributes contain so many distinctions that parsimony is lacking. Models stipulating the most important attributes have been formulated by experts or by empirical studies drawing on experience in a very limited range of cultures. Factor analyses of personality lexicons of nine languages of diverse provenance (Chinese, Korean, Filipino, Turkish, Greek, Polish, Hungarian, Maasai, and Senoufo) were examined, and their common structure was compared to that of several prominent models in psychology. A parsimonious bivariate model showed evidence of substantial convergence and ubiquity across cultures. Analyses involving key markers of these dimensions in English indicate that they are broad dimensions involving the overlapping content of the interpersonal circumplex, models of communion and agency, and morality/warmth and competence. These "Big Two" dimensions-Social Self-Regulation and Dynamism-provide a common-denominator model involving the two most crucial axes of personality variation, ubiquitous across cultures. The Big Two might serve as an umbrella model serving to link diverse theoretical models and associated research literatures. PMID:23301793
Improving the chi-squared approximation for bivariate normal tolerance regions
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.
1993-01-01
Let X be a two-dimensional random variable distributed according to N2(mu,Sigma) and let bar-X and S be the respective sample mean and covariance matrix calculated from N observations of X. Given a containment probability beta and a level of confidence gamma, we seek a number c, depending only on N, beta, and gamma such that the ellipsoid R = (x: (x - bar-X)'S(exp -1) (x - bar-X) less than or = c) is a tolerance region of content beta and level gamma; i.e., R has probability gamma of containing at least 100 beta percent of the distribution of X. Various approximations for c exist in the literature, but one of the simplest to compute -- a multiple of the ratio of certain chi-squared percentage points -- is badly biased for small N. For the bivariate normal case, most of the bias can be removed by simple adjustment using a factor A which depends on beta and gamma. This paper provides values of A for various beta and gamma so that the simple approximation for c can be made viable for any reasonable sample size. The methodology provides an illustrative example of how a combination of Monte-Carlo simulation and simple regression modelling can be used to improve an existing approximation.
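The bias being corrected can be demonstrated by Monte Carlo: with the naive chi-squared cutoff, the attained confidence falls well short of nominal for small N. A sketch under stated assumptions (standard-normal truth; the N, beta, and replication counts are illustrative, and the correction factor A itself is not computed here):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
N, p, beta = 20, 2, 0.90

# Naive cutoff: a chi-squared percentage point that ignores the estimation
# error in (xbar, S) -- the source of the small-N bias discussed above
c = chi2.ppf(beta, p)

hits, reps = 0, 1000
for _ in range(reps):
    X = rng.standard_normal((N, p))              # sample from N2(0, I)
    xbar = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    # Estimate the true content of the ellipsoid with fresh draws
    Y = rng.standard_normal((4000, p))
    d = np.einsum('ij,jk,ik->i', Y - xbar, S_inv, Y - xbar)
    hits += (d <= c).mean() >= beta

gamma_attained = hits / reps   # falls well below typical nominal levels
```

Calibrating a multiplicative factor A so that `gamma_attained` matches the nominal gamma, and then regressing A on beta and gamma, is the combination of simulation and regression modelling the abstract describes.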
ERIC Educational Resources Information Center
Petocz, Peter; Sowey, Eric
2008-01-01
In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
1990-01-01
The level of skill in predicting the size of the sunspot cycle is investigated for the two types of precursor techniques, single variate and bivariate fits, both applied to cycle 22. The present level of growth in solar activity is compared to the mean level of growth (cycles 10-21) and to the predictions based on the precursor techniques. It is shown that, for cycle 22, both single variate methods (based on geomagnetic data) and bivariate methods suggest a maximum amplitude smaller than that observed for cycle 19, and possibly for cycle 21. Compared to the mean cycle, cycle 22 is presently behaving as if it were a +2.6 sigma cycle (maximum amplitude of about 225), which means that either it will be the first cycle not to be reliably predicted by the combined precursor techniques or its deviation relative to the mean cycle will substantially decrease over the next 18 months.
Bivariate segmentation of SNP-array data for allele-specific copy number analysis in tumour samples
2013-01-01
Background SNP arrays output two signals that reflect the total genomic copy number (LRR) and the allelic ratio (BAF), which in combination allow the characterisation of allele-specific copy numbers (ASCNs). While methods based on hidden Markov models (HMMs) have been extended from array comparative genomic hybridisation (aCGH) to jointly handle the two signals, only one method based on change-point detection, ASCAT, performs bivariate segmentation. Results In the present work, we introduce a generic framework for bivariate segmentation of SNP array data for ASCN analysis. To that end, we discuss the characteristics of the typically applied BAF transformation and how they affect segmentation, introduce the concepts of multivariate time series analysis that are of concern in this field, and discuss the appropriate formulation of the problem. The framework is implemented in a method named CnaStruct, the bivariate form of the structural change model (SCM), which has been successfully applied to transcriptome mapping and aCGH. Conclusions On a comprehensive synthetic dataset, we show that CnaStruct outperforms the segmentation of existing ASCN analysis methods. Furthermore, CnaStruct can be integrated into the workflows of several ASCN analysis tools in order to improve their performance, especially on tumour samples highly contaminated by normal cells. PMID:23497144
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
Improving the Statistical Methodology of Astronomical Data Analysis
NASA Astrophysics Data System (ADS)
Feigelson, Eric D.; Babu, Gutti Jogesh
Contemporary observational astronomers are generally unfamiliar with the extensive advances made in mathematical and applied statistics during the past several decades. Astronomical problems can often be addressed by methods developed in statistical fields such as spatial point processes, density estimation, Bayesian statistics, and sampling theory. The common problem of bivariate linear regression illustrates the need for sophisticated methods. Astronomical problems often require combinations of ordinary least-squares lines, double-weighted and errors-in-variables models, censored and truncated regressions, each with its own error analysis procedure. The recent conference Statistical Challenges in Modern Astronomy highlighted issues of mutual interest to statisticians and astronomers including clustering of point processes and time series analysis. We conclude with advice on how the astronomical community can advance its statistical methodology with improvements in education of astrophysicists, collaboration and consultation with professional statisticians, and acquisition of new software.
NASA Astrophysics Data System (ADS)
Szolgay, Jan; Gaál, Ladislav; Bacigál, Tomáš; Kohnová, Silvia; Blöschl, Günter
2016-04-01
Bi-variate distributions of flood peaks and flood event volumes are needed for a range of practical purposes including e.g. retention basin design and identifying extent and duration of flooding in flood hazard zones. However, the selection of the types of bi-variate distributions and estimating their parameters from observed peak-volume pairs are associated with far larger uncertainties compared to uni-variate distributions, since observed flood records of required length are rarely available. This poses a serious problem to reliable flood risk estimation in bi-variate design cases. The aim of this contribution was to shed light on the possibility of reducing uncertainties in the estimation of the dependence models/parameters from a regional perspective. The peak-volume relationships were modeled in terms of copulas. Flood events were classified according to their origin. In order to reduce the uncertainty in estimating flood risk, pooling and analyzing catchments of similar behavior according to flood process types was attempted. Most of the work reported in the literature so far did not direct the multivariate analysis toward discriminating certain types of models regionally according to specific runoff generation processes. Specifically, the contribution addresses these problems: - Are the peak-volume relationships of different flood types for a given catchment similar? - Are the peak-volume dependence structures between catchments in a larger region for given flood types similar? - Are some copula types more suitable for given flood process types and does this have consequences for reliable risk estimation? The target region is located in the northern parts of Austria, and consists of 72 small and mid-sized catchments. Instead of the traditional approach that deals with annual maximum floods, the current analysis includes all independent flood events in the region. 24 872 flood events from the period 1976-2007 were identified, and classified as synoptic, flash
Martini, M.; Peru, S.; Dupuis, M.
2011-03-15
Low-energy dipole excitations in neon isotopes and N=16 isotones are calculated with a fully consistent axially-symmetric-deformed quasiparticle random phase approximation (QRPA) approach based on Hartree-Fock-Bogolyubov (HFB) states. The same Gogny D1S effective force has been used in both the HFB and QRPA calculations. The microscopic structure of these low-lying resonances, as well as the behavior of the proton and neutron transition densities, are investigated in order to determine the isoscalar or isovector nature of the excitations. It is found that the N=16 isotones ²⁴O, ²⁶Ne, ²⁸Mg, and ³⁰Si are characterized by a similar behavior. The occupation of the 2s₁/₂ neutron orbit turns out to be crucial, leading to nontrivial transition densities and to small but finite collectivity. Some low-lying dipole excitations of ²⁸Ne and ³⁰Ne, characterized by transitions involving the ν1d₃/₂ state, present a more collective behavior and isoscalar transition densities. A collective proton low-lying excitation is identified in the ¹⁸Ne nucleus.
Chen, Zhengjia; Wang, Zhibo; Wang, Haibin; Owonikoko, Taofeek K; Kowalski, Jeanne; Khuri, Fadlo R
2013-01-01
Isotonic Design using Normalized Equivalent Toxicity Score (ID-NETS) is a novel Phase I design that integrates the novel toxicity scoring system originally proposed by Chen et al. [1] and the original Isotonic Design proposed by Leung et al. [2]. ID-NETS has substantially improved the accuracy of maximum tolerated dose (MTD) estimation and trial efficiency in the Phase I clinical trial setting by fully utilizing all toxicities experienced by each patient and treating toxicity response as a quasi-continuous variable instead of a binary indicator of dose limiting toxicity (DLT). To facilitate the incorporation of the ID-NETS method into the design and conduct of Phase I clinical trials, we have designed and developed user-friendly software, ID-NETS©™, which has two functions: 1) calculating the recommended dose for the subsequent patient cohort using available completed data; and 2) performing simulations to obtain the operating characteristics of a trial designed with ID-NETS. Currently, ID-NETS©™ v1.0 is available for free download at http://winshipbbisr.emory.edu/IDNETS.html. PMID:23847695
Liu, Xiao-Gang; Liu, Yong-Jun; Liu, Jianfeng; Pei, Yufang; Xiong, Dong-Hai; Shen, Hui; Deng, Hong-Yi; Papasian, Christopher J; Drees, Betty M; Hamilton, James J; Recker, Robert R; Deng, Hong-Wen
2008-01-01
Areal BMD (aBMD) and areal bone size (ABS) are biologically correlated traits and are each important determinants of bone strength and risk of fractures. Studies showed that aBMD and ABS are genetically correlated, indicating that they may share some common genetic factors, which, however, are largely unknown. To study the genetic factors influencing both aBMD and ABS, bivariate whole genome linkage analyses were conducted for aBMD-ABS at the femoral neck (FN), lumbar spine (LS), and ultradistal (UD)-forearm in a large sample of 451 white pedigrees made up of 4498 individuals. We detected significant linkage on chromosome Xq27 (LOD = 4.89) for LS aBMD-ABS. In addition, we detected suggestive linkages at 20q11 (LOD = 3.65) and Xp11 (LOD = 2.96) for FN aBMD-ABS; at 12p11 (LOD = 3.39) and 17q21 (LOD = 2.94) for LS aBMD-ABS; and at 5q23 (LOD = 3.54), 7p15 (LOD = 3.45), Xq27 (LOD = 2.93), and 12p11 (LOD = 2.92) for UD-forearm aBMD-ABS. Subsequent discrimination analyses indicated that quantitative trait loci (QTLs) at 12p11 and 17q21 may have pleiotropic effects on aBMD and ABS. This study identified several genomic regions that may contain QTLs important for both aBMD and ABS. Further endeavors are necessary to follow these regions to eventually pinpoint the genetic variants affecting bone strength and risk of fractures. PMID:18597637
A view on coupled cluster perturbation theory using a bivariational Lagrangian formulation.
Kristensen, Kasper; Eriksen, Janus J; Matthews, Devin A; Olsen, Jeppe; Jørgensen, Poul
2016-02-14
We consider two distinct coupled cluster (CC) perturbation series that both expand the difference between the energies of the CCSD (CC with single and double excitations) and CCSDT (CC with single, double, and triple excitations) models in orders of the Møller-Plesset fluctuation potential. We initially introduce the E-CCSD(T-n) series, in which the CCSD amplitude equations are satisfied at the expansion point, and compare it to the recently developed CCSD(T-n) series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)], in which not only the CCSD amplitude, but also the CCSD multiplier equations are satisfied at the expansion point. The computational scaling is similar for the two series, and both are term-wise size extensive with a formal convergence towards the CCSDT target energy. However, the two series are different, and the CCSD(T-n) series is found to exhibit a more rapid convergence up through the series, which we trace back to the fact that more information at the expansion point is utilized than for the E-CCSD(T-n) series. The present analysis can be generalized to any perturbation expansion representing the difference between a parent CC model and a higher-level target CC model. In general, we demonstrate that, whenever the parent parameters depend upon the perturbation operator, a perturbation expansion of the CC energy (where only parent amplitudes are used) differs from a perturbation expansion of the CC Lagrangian (where both parent amplitudes and parent multipliers are used). For the latter case, the bivariational Lagrangian formulation becomes more than a convenient mathematical tool, since it facilitates a different and faster convergent perturbation series than the simpler energy-based expansion. PMID:26874478
Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K.; Schad, Lothar R.; Zöllner, Frank Gerrit
2015-01-01
Background Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. Methods and Results In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin—3,3’-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. Validation To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Context Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics. PMID:26717571
Maadooliat, Mehdi; Huang, Jianhua Z.
2013-01-01
Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence–structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/∼madoliat/LagSVD) that can be used to produce informative animations. PMID:22926831
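The LagSVD idea described above, a nonparametric joint lag-distribution summarized via singular value decomposition, can be sketched in a few lines. The histogram construction, bin count, and function names below are illustrative assumptions, not the implementation behind the paper's web tool:

```python
import numpy as np

def lag_svd(angles, lag, bins=36):
    """Estimate the joint lag-distribution of an angle sequence as a 2D
    histogram and summarize it by its singular values (illustrative sketch)."""
    x, y = angles[:-lag], angles[lag:]
    H, _, _ = np.histogram2d(x, y, bins=bins, range=[[-np.pi, np.pi]] * 2)
    P = H / H.sum()  # empirical joint probability table over angle-pair bins
    # For serially independent angles, P is close to rank 1 (an outer product
    # of the marginals), so the first singular value dominates; sizeable extra
    # singular values signal sequential dependence at this lag.
    return np.linalg.svd(P, compute_uv=False)

rng = np.random.default_rng(0)
angles = rng.uniform(-np.pi, np.pi, 20000)  # serially independent angles
s = lag_svd(angles, lag=1)
print(s[0] / s[1] > 5)  # dominant singular value stands out for independent data
```

Monitoring how the trailing singular values grow with decreasing lag is one way to quantify the order of the local sequence-structure dependency the review discusses.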
The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute works to provide information on cancer statistics in an effort to reduce the burden of cancer among the U.S. population.
NASA Astrophysics Data System (ADS)
Hermann, Claudine
Statistical physics bridges the properties of a macroscopic system and the microscopic behavior of its constituent particles, a connection that would otherwise be impossible to establish given the enormous magnitude of Avogadro's number. Numerous systems of today's key technologies, such as semiconductors or lasers, are macroscopic quantum objects; only statistical physics allows for an understanding of their fundamentals. Therefore, this graduate text also focuses on particular applications, such as the properties of electrons in solids, and radiation thermodynamics and the greenhouse effect.
2013-01-01
Background Prevalence of exercise-induced bronchoconstriction (EIB) is high in elite athletes, especially after many years of training in cold and dry air conditions. The primary treatment of EIB is inhaling a short-acting beta-2-agonist such as salbutamol. However, professional speed skaters also inhale nebulized isotonic saline or tap water before and after a race or intense training. The use of nebulized isotonic saline or tap water to prevent EIB has not been studied before, raising questions about safety and efficacy. The aim of this study is to analyze the acute effect of nebulized isotonic saline or salbutamol on EIB in elite speed skaters following a 1,500-meter race. Methods This randomized controlled trial compares a single-dose treatment of 1 mg nebulized salbutamol in 4 mL of isotonic saline with 5 mL of isotonic saline alone. A minimum of 13 participants will be allocated to each treatment group. Participants should be between 18 and 35 years of age and able to skate 1,500 m in less than 2 min 10 s (women) or 2 min 05 s (men). Repeated measurements of spirometry, forced oscillation technique, and electromyography will be performed before and after an official 1,500-m race. Primary outcome of the study is the difference in fall in FEV1 after exercise in the different treatment groups. The trial is currently enrolling participants. Discussion Elite athletes run the risk of pulmonary inflammation and remodeling as a consequence of their frequent exercise, and thus increased ventilation in cold and dry environments. Although inhalation of nebulized isotonic saline is commonplace, no study has ever investigated the safety or efficacy of this treatment. Trial registration This trial protocol was registered with the Dutch trial registration for clinical trials under number NTR3550. PMID:23837574
NASA Astrophysics Data System (ADS)
Fan, Y. R.; Huang, W. W.; Huang, G. H.; Huang, K.; Li, Y. P.; Kong, X. M.
2016-07-01
In this study, a bivariate hydrologic risk framework is proposed based on a coupled entropy-copula method. In the proposed risk analysis framework, bivariate flood frequency would be analyzed for different flood variable pairs (i.e., flood peak-volume, flood peak-duration, flood volume-duration). The marginal distributions of flood peak, volume, and duration are quantified through both parametric (i.e., gamma, general extreme value (GEV), and lognormal distributions) and nonparametric (i.e., entropy) approaches. The joint probabilities of flood peak-volume, peak-duration, and volume-duration are established through copulas. The bivariate hydrologic risk is then derived based on the joint return period to reflect the interactive effects of flood variables on the final hydrologic risk values. The proposed method is applied to the risk analysis for the Xiangxi River in the Three Gorges Reservoir area, China. The results indicate the entropy method performs best in quantifying the distribution of flood duration. Bivariate hydrologic risk would then be generated to characterize the impacts of flood volume and duration on the occurrence of a flood. The results suggest that the bivariate risk for flood peak-volume would not decrease significantly for the flood volume less than 1000 m3/s. Moreover, a flood in the Xiangxi River may last at least 5 days without significant decrease of the bivariate risk for flood peak-duration.
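The joint-return-period construction underlying such a bivariate risk analysis can be sketched as follows. The Gumbel copula choice, its parameter, and all numeric inputs are illustrative assumptions, not the fitted Xiangxi River models:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls upper-tail dependence."""
    t = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-t ** (1.0 / theta))

def joint_return_period_and(u, v, theta, mu=1.0):
    """'AND' joint return period: mean recurrence of floods whose peak exceeds x
    AND whose volume exceeds y, where u = F_peak(x) and v = F_vol(y) are the
    (already fitted) marginal non-exceedance probabilities and mu is the mean
    interarrival time of flood events in years."""
    p_exceed_both = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_exceed_both

# Illustrative numbers: two 50-year marginal events with moderate dependence
u = v = 1.0 - 1.0 / 50.0
T = joint_return_period_and(u, v, theta=2.0)
print(round(T, 1))  # joint "AND" event is rarer than either marginal event
```

Replacing the copula family or the entropy-based marginals changes only the two plugged-in functions; the return-period bookkeeping stays the same.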
NASA Astrophysics Data System (ADS)
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
ERIC Educational Resources Information Center
Chicot, Katie; Holmes, Hilary
2012-01-01
The use, and misuse, of statistics is commonplace, yet in the printed format data representations can be either over simplified, supposedly for impact, or so complex as to lead to boredom, supposedly for completeness and accuracy. In this article the link to the video clip shows how dynamic visual representations can enliven and enhance the…
ERIC Educational Resources Information Center
Catley, Alan
2007-01-01
Following the announcement last year that there will be no more math coursework assessment at General Certificate of Secondary Education (GCSE), teachers will in the future be able to devote more time to preparing learners for formal examinations. One of the key things that the author has learned when teaching statistics is that it makes for far…
Gajic-Veljanoski, Olga; Cheung, Angela M; Bayoumi, Ahmed M; Tomlinson, George
2016-05-30
Bivariate random-effects meta-analysis (BVMA) is a method of data synthesis that accounts for treatment effects measured on two outcomes. BVMA gives more precise estimates of the population mean and predicted values than two univariate random-effects meta-analyses (UVMAs). BVMA also addresses bias from incomplete reporting of outcomes. A few tutorials have covered technical details of BVMA of categorical or continuous outcomes. Limited guidance is available on how to analyze datasets that include trials with mixed continuous-binary outcomes where treatment effects on one outcome or the other are not reported. Given the advantages of Bayesian BVMA for handling missing outcomes, we present a tutorial for Bayesian BVMA of incompletely reported treatment effects on mixed bivariate outcomes. This step-by-step approach can serve as a model for our intended audience, the methodologist familiar with Bayesian meta-analysis, looking for practical advice on fitting bivariate models. To facilitate application of the proposed methods, we include our WinBUGS code. As an example, we use aggregate-level data from published trials to demonstrate the estimation of the effects of vitamin K and bisphosphonates on two correlated bone outcomes, fracture, and bone mineral density. We present datasets where reporting of the pairs of treatment effects on both outcomes was 'partially' complete (i.e., pairs completely reported in some trials), and we outline steps for modeling the incompletely reported data. To assess what is gained from the additional work required by BVMA, we compare the resulting estimates to those from separate UVMAs. We discuss methodological findings and make four recommendations. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26553369
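A minimal sketch of why a bivariate model can recover information about an unreported outcome: under a bivariate-normal random-effects model, an observed effect on one outcome shifts the conditional distribution of the trial's missing effect on the other. All numbers below are illustrative, not the vitamin K/bisphosphonate estimates:

```python
import math

# Pooled treatment effects (mu), between-trial SDs, and between-outcome
# correlation rho -- all invented for illustration.
mu1, mu2 = -0.3, 0.5
sd1, sd2 = 0.2, 0.4
rho = 0.6

y1_observed = -0.7  # a trial that reports only outcome 1

# Standard bivariate-normal conditional formulas: the prediction for the
# unreported outcome 2 is pulled toward what outcome 1 implies, with
# reduced spread -- the "borrowing of strength" that UVMAs cannot provide.
cond_mean = mu2 + rho * sd2 / sd1 * (y1_observed - mu1)
cond_sd = sd2 * math.sqrt(1.0 - rho ** 2)
print(cond_mean, cond_sd)
```

In the Bayesian BVMA the same conditioning happens automatically inside the sampler; this sketch just exposes the mechanism.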
McGovern, Mark E.; Bärnighausen, Till; Marra, Giampiero; Radice, Rosalba
2015-01-01
Background Heckman-type selection models have been used to control HIV prevalence estimates for selection bias, when participation in HIV testing and HIV status are correlated after controlling for observed variables. These models typically rely on the strong assumption that the error terms in the participation and the outcome equations that comprise the model are distributed as bivariate normal. Methods We introduce a novel approach for relaxing the bivariate normality assumption in selection models using non-linear copula functions. We apply this method to estimating HIV prevalence and new confidence intervals (CI) in the 2007 Zambian Demographic and Health Survey (DHS), using interviewer identity as the selection variable that predicts participation (consent to test) but not the outcome (HIV status). Results We show in a simulation study that selection models can generate biased results when the bivariate normality assumption is violated. In the 2007 Zambia DHS, HIV prevalence estimates are similar irrespective of the structure of the association assumed between participation and outcome. For men, we estimate a population HIV prevalence of 21% (95% CI 16% to 25%), compared with 12% (11% to 13%) among those who consented to be tested; for women, the corresponding figures are 19% (13% to 24%) and 16% (15% to 17%). Conclusions Copula approaches to Heckman-type selection models are a useful addition to the methodological toolkit of HIV epidemiology, and of epidemiology in general. We develop the use of this approach to systematically evaluate the robustness of HIV prevalence estimates based on selection models, both empirically and in a simulation study. PMID:25643102
NASA Technical Reports Server (NTRS)
Roelof, Edmond C.; Sibeck, David G.
1993-01-01
We present a new method for determining the shape of the magnetopause as a bivariate function of the hourly averaged solar wind dynamic pressure (p) and the north-south component of the interplanetary magnetic field (IMF) B_z. We represent the magnetopause (for X_GSE greater than -40 R_E) as an ellipsoid of revolution in solar-wind-aberrated coordinates and express the (p, B_z) dependence of each of the three ellipsoid parameters as a second-order (6-term) bivariate expansion in ln p and B_z. We define 12 overlapping bins in a normalized dimensionless (p, B_z) "control space" and fit an ellipsoid to those magnetopause crossings having (p, B_z) values within each bin. We also calculate the bivariate (ln p, B_z) moments to second order over each bin in control space. We can then calculate the six control-space expansion coefficients for each of the three ellipsoid parameters in configuration space. From these coefficients we can derive useful diagnostics of the magnetopause shape as joint functions of p and B_z: the aspect ratio of the ellipsoid's minor-to-major axes; the flank distance, radius of curvature, and flaring angle (at X_GSE = 0); and the subsolar distance and radius of curvature. We confirm and quantify previous results that during periods of southward B_z the subsolar magnetopause moves inward, while at X_GSE = 0 the flank magnetopause moves outward and the flaring angle increases.
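The per-parameter fitting step, a six-term second-order bivariate expansion in ln p and B_z, amounts to an ordinary least-squares surface fit. The sketch below uses synthetic crossings and invented coefficients purely to illustrate the regression form, not the published magnetopause model:

```python
import numpy as np

def design_matrix(lnp, bz):
    """Six-term second-order bivariate expansion in ln p and B_z:
    a0 + a1*lnp + a2*Bz + a3*lnp**2 + a4*lnp*Bz + a5*Bz**2."""
    return np.column_stack([np.ones_like(lnp), lnp, bz,
                            lnp ** 2, lnp * bz, bz ** 2])

rng = np.random.default_rng(1)
lnp = rng.normal(0.7, 0.5, 300)   # ln of dynamic pressure (illustrative)
bz = rng.normal(0.0, 3.0, 300)    # IMF B_z in nT (illustrative)

# Invented "true" coefficients for one ellipsoid parameter, e.g. a
# subsolar-distance-like quantity in R_E, plus measurement noise.
true = np.array([10.5, -1.0, 0.15, 0.1, -0.05, 0.01])
y = design_matrix(lnp, bz) @ true + rng.normal(0, 0.1, 300)

coef, *_ = np.linalg.lstsq(design_matrix(lnp, bz), y, rcond=None)
print(np.round(coef, 2))  # recovers the six expansion coefficients
```

The paper fits this expansion once per ellipsoid parameter, using the binned control-space moments rather than a direct pointwise regression.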
The role of drop velocity in statistical spray description
NASA Technical Reports Server (NTRS)
Groeneweg, J. F.; El-Wakil, M. M.; Myers, P. S.; Uyehara, O. A.
1978-01-01
The justification for describing a spray by treating drop velocity as a random variable on an equal statistical basis with drop size was studied experimentally. A double exposure technique using fluorescent drop photography was used to make size and velocity measurements at selected locations in a steady ethanol spray formed by a swirl atomizer. The size velocity data were categorized to construct bivariate spray density functions to describe the spray immediately after formation and during downstream propagation. Bimodal density functions were formed by environmental interaction during downstream propagation. Large differences were also found between spatial mass density and mass flux size distribution at the same location.
NASA Astrophysics Data System (ADS)
Goodman, Joseph W.
2000-07-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I Richard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research
Bragiel, Aneta M.; Wang, Di; Pieczonka, Tomasz D.; Shono, Masayuki; Ishikawa, Yasuko
2016-01-01
Defective cellular trafficking of aquaporin-5 (AQP5) to the apical plasma membrane (APM) in salivary glands is associated with the loss of salivary fluid secretion. To examine mechanisms of α1-adrenoceptor (AR)-induced trafficking of AQP5, immunoconfocal microscopy and Western blot analysis were used to analyze AQP5 localization in parotid tissues stimulated with phenylephrine under different osmolalities. Phenylephrine-induced trafficking of AQP5 to the APM and lateral plasma membrane (LPM) was mediated via the α1A-AR subtype, but not the α1B- and α1D-AR subtypes. Phenylephrine-induced trafficking of AQP5 was inhibited by ODQ and KT5823, inhibitors of nitric oxide (NO)-stimulated guanylyl cyclase (GC) and protein kinase (PK) G, respectively, indicating the involvement of the NO/soluble GC (sGC)/PKG signaling pathway. Under isotonic conditions, phenylephrine-induced trafficking was inhibited by La3+, implying the participation of a store-operated Ca2+ channel. Under hypotonic conditions, phenylephrine-induced trafficking of AQP5 to the APM was higher than that under isotonic conditions. Under non-stimulated conditions, hypotonicity-induced trafficking of AQP5 to the APM was inhibited by ruthenium red and La3+, suggesting the involvement of extracellular Ca2+ entry. Thus, α1A-AR activation induced the trafficking of AQP5 to the APM and LPM via the Ca2+/cyclic guanosine monophosphate (cGMP)/PKG signaling pathway, which is associated with store-operated Ca2+ entry. PMID:27367668
Herrera; Herrera; López
2000-05-01
As potassium, chloride and water traverse cell membranes, the cells of stenohaline marine invertebrates should swell if exposed to sea water mixed with an isosmotic KCl solution as they do when exposed to sea water diluted with water. To test this hypothesis, respiratory tree fragments of the holothurian Isostichopus badionotus were exposed to five isosmotic media prepared by mixing artificial sodium sea water with isosmotic (611 mmol/l) KCl solution to obtain 100, 83, 71, 60 and 50% sea water, with and without 2 mmol/l ouabain. For comparison, respiratory tree fragments were incubated in sea water diluted to the same concentrations with distilled water, with and without ouabain. Cell water contents and potassium and sodium concentrations were unaffected by KCl-dilution or ouabain in isosmotic KCl-sea water mixtures. In tissues exposed to H(2)O-diluted sea water, cell water increased osmometrically and potassium, sodium and chloride concentrations decreased with dilution; ouabain caused a decrease in potassium and an increase in sodium but no effect on chloride concentrations. The isotonicity of the isosmotic KCl solution cannot be ascribed to impermeability of the cell membrane to KCl, as both ions easily traverse the cell membrane. Rather, operationally immobilized extracellular sodium ions, which electrostatically hold back anions and consequently water, together with the lack of a cellward electrochemical gradient for potassium, resulting from membrane depolarization caused by high external potassium concentration, would explain the isotonicity of isosmotic KCl solution. The high external potassium concentration also antagonizes the inhibitory effect of ouabain on the Na(+)/K(+) ATPase responsible for sodium and potassium active transport. PMID:10742500
Chu, Haitao; Nie, Lei; Chen, Yong; Huang, Yi; Sun, Wei
2012-12-01
Multivariate meta-analysis is increasingly utilised in biomedical research to combine data of multiple comparative clinical studies for evaluating drug efficacy and safety profile. When the probability of the event of interest is rare, or when the individual study sample sizes are small, a substantial proportion of studies may not have any event of interest. Conventional meta-analysis methods either exclude such studies or include them through ad hoc continuity corrections, adding an arbitrary positive value to each cell of the corresponding 2 × 2 tables, which may result in less accurate conclusions. Furthermore, different continuity corrections may result in inconsistent conclusions. In this article, we discuss a bivariate Beta-binomial model derived from the Sarmanov family of bivariate distributions and a bivariate generalised linear mixed effects model for binary clustered data to make valid inferences. These bivariate random effects models use all available data without ad hoc continuity corrections, and naturally account for the potential correlation between treatment (or exposure) and control groups within studies. We then utilise the bivariate random effects models to reanalyse two recent meta-analysis data sets. PMID:21177306
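The sensitivity to ad hoc continuity corrections noted above is easy to demonstrate on a single 2 × 2 table with a zero-event arm; the counts below are invented for illustration:

```python
def odds_ratio_cc(a, b, c, d, cc):
    """Odds ratio for a 2x2 table (a, b = events/non-events in treatment;
    c, d = events/non-events in control) with continuity correction cc
    added to every cell -- the ad hoc fix the bivariate models avoid."""
    return ((a + cc) * (d + cc)) / ((b + cc) * (c + cc))

table = (0, 50, 5, 95)  # zero events in the treatment arm (illustrative)
results = {cc: odds_ratio_cc(*table, cc) for cc in (0.5, 0.1, 0.01)}
for cc, or_ in results.items():
    print(cc, round(or_, 3))  # the "estimated" OR varies by an order of magnitude
```

Because the estimate is driven almost entirely by the arbitrary constant cc, models that handle zero-event studies directly, such as the Sarmanov Beta-binomial, are preferable.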
NASA Astrophysics Data System (ADS)
Frecon, Jordan; Didier, Gustavo; Pustelnik, Nelly; Abry, Patrice
2016-08-01
Self-similarity is widely considered the reference framework for modeling the scaling properties of real-world data. However, most theoretical studies and their practical use have remained univariate. Operator Fractional Brownian Motion (OfBm) was recently proposed as a multivariate model for self-similarity. Yet it has remained seldom used in applications because of serious issues that appear in the joint estimation of its numerous parameters. While the univariate fractional Brownian motion requires the estimation of two parameters only, its mere bivariate extension already involves 7 parameters which are very different in nature. The present contribution proposes a method for the full identification of bivariate OfBm (i.e., the joint estimation of all parameters) through an original formulation as a non-linear wavelet regression coupled with a custom-made Branch & Bound numerical scheme. The estimation performance (consistency and asymptotic normality) is mathematically established and numerically assessed by means of Monte Carlo experiments. The impact of the parameters defining OfBm on the estimation performance as well as the associated computational costs are also thoroughly investigated.
NASA Astrophysics Data System (ADS)
Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie
2016-01-01
It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details and the precise evolution is still a matter of debate since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming in addition to properly selecting representative morphological samples. Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 × 10^5) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications and we split the sample not only by disk- and bulge-dominated galaxies but also in finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric Maximum Likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.
Brassey, Charlotte A.; Maidment, Susannah C. R.; Barrett, Paul M.
2015-01-01
Body mass is a key biological variable, but difficult to assess from fossils. Various techniques exist for estimating body mass from skeletal parameters, but few studies have compared outputs from different methods. Here, we apply several mass estimation methods to an exceptionally complete skeleton of the dinosaur Stegosaurus. Applying a volumetric convex-hulling technique to a digital model of Stegosaurus, we estimate a mass of 1560 kg (95% prediction interval 1082–2256 kg) for this individual. By contrast, bivariate equations based on limb dimensions predict values between 2355 and 3751 kg and require implausible amounts of soft tissue and/or high body densities. When corrected for ontogenetic scaling, however, volumetric and linear equations are brought into close agreement. Our results raise concerns regarding the application of predictive equations to extinct taxa with no living analogues in terms of overall morphology and highlight the sensitivity of bivariate predictive equations to the ontogenetic status of the specimen. We emphasize the significance of rare, complete fossil skeletons in validating widely applied mass estimation equations based on incomplete skeletal material and stress the importance of accurately determining specimen age prior to further analyses. PMID:25740841
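A bivariate predictive equation of the kind compared in this study is typically a log-log regression of body mass on a limb dimension, fitted to extant taxa and extrapolated to the fossil. The sketch below uses entirely synthetic data and an assumed allometric slope, not the published equations the authors evaluate:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "extant analogue" data set: log10 limb circumference vs log10 mass,
# with an invented allometric slope of 2.7 and intercept of -1.0.
log_circ = rng.uniform(1.0, 3.0, 40)
log_mass = -1.0 + 2.7 * log_circ + rng.normal(0, 0.1, 40)

# Fit the bivariate equation: log10(mass) = a + b * log10(circumference)
b, a = np.polyfit(log_circ, log_mass, 1)  # slope, intercept

stego_log_circ = 2.6  # hypothetical fossil measurement (illustrative units)
predicted_mass = 10 ** (a + b * stego_log_circ)
print(round(b, 2), predicted_mass > 0)
```

The paper's caveat maps directly onto this sketch: if the fossil lies outside the morphological or ontogenetic range of the calibration data, the extrapolation can be badly biased even when the fit itself is tight.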
Gao, P; Li, M; Tian, Q B; Liu, Dian-Wu
2012-01-01
New serum markers need to be developed to specifically diagnose hepatocellular carcinoma (HCC). Des-γ-carboxy prothrombin (DCP) is a promising tool with limited expense and wide accessibility, but the reported results have been controversial. In order to review the performance of DCP for the diagnosis of HCC, this meta-analysis was performed. After a systematic review of relevant studies, the sensitivity, specificity, and positive and negative likelihood ratios (PLR and NLR, respectively) were pooled using a bivariate meta-analysis. Potential between-study heterogeneity was explored with a meta-regression model. The post-test probability and the likelihood ratio scattergram were calculated to evaluate clinical usefulness. Based on a literature review of 20 publications, the overall sensitivity, specificity, PLR and NLR of DCP for the detection of HCC were 67% (95%CI, 58%-74%), 92% (95%CI, 88%-94%), 7.9 (95%CI, 5.6-11.2) and 0.36 (95%CI, 0.29-0.46), respectively. The area under the bivariate summary receiver operating characteristic curve was 0.89 (95%CI, 0.85-0.92). Significant heterogeneity was present. In conclusion, the major role of DCP is the moderate confirmation of HCC. More prospective studies of DCP are needed in the future. PMID:22248272
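The post-test probability mentioned above follows from the pooled likelihood ratios by standard odds updating. The 10% pre-test probability below is an illustrative assumption; 7.9 and 0.36 are the pooled PLR and NLR reported in the abstract:

```python
def post_test_probability(pretest, lr):
    """Update a pre-test probability with a likelihood ratio via odds:
    post-odds = pre-odds * LR (standard Bayes-rule bookkeeping for diagnostic tests)."""
    odds = pretest / (1.0 - pretest)
    post_odds = odds * lr
    return post_odds / (1.0 + post_odds)

pretest = 0.10  # illustrative 10% pre-test probability of HCC
pos = post_test_probability(pretest, 7.9)   # after a positive DCP result
neg = post_test_probability(pretest, 0.36)  # after a negative DCP result
print(round(pos, 3), round(neg, 3))
```

A positive result raises the probability substantially while a negative result lowers it only modestly, which is consistent with the abstract's conclusion that DCP's main role is "moderate confirmation" rather than rule-out.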
Brassey, Charlotte A; Maidment, Susannah C R; Barrett, Paul M
2015-03-01
Body mass is a key biological variable, but difficult to assess from fossils. Various techniques exist for estimating body mass from skeletal parameters, but few studies have compared outputs from different methods. Here, we apply several mass estimation methods to an exceptionally complete skeleton of the dinosaur Stegosaurus. Applying a volumetric convex-hulling technique to a digital model of Stegosaurus, we estimate a mass of 1560 kg (95% prediction interval 1082-2256 kg) for this individual. By contrast, bivariate equations based on limb dimensions predict values between 2355 and 3751 kg and require implausible amounts of soft tissue and/or high body densities. When corrected for ontogenetic scaling, however, volumetric and linear equations are brought into close agreement. Our results raise concerns regarding the application of predictive equations to extinct taxa with no living analogues in terms of overall morphology and highlight the sensitivity of bivariate predictive equations to the ontogenetic status of the specimen. We emphasize the significance of rare, complete fossil skeletons in validating widely applied mass estimation equations based on incomplete skeletal material and stress the importance of accurately determining specimen age prior to further analyses. PMID:25740841
1986-01-01
Official population data for the USSR are presented for 1985 and 1986. Part 1 (pp. 65-72) contains data on capitals of union republics and cities with over one million inhabitants, including population estimates for 1986 and vital statistics for 1985. Part 2 (p. 72) presents population estimates by sex and union republic, 1986. Part 3 (pp. 73-6) presents data on population growth, including birth, death, and natural increase rates, 1984-1985; seasonal distribution of births and deaths; birth order; age-specific birth rates in urban and rural areas and by union republic; marriages; age at marriage; and divorces. PMID:12178831
Critical Evaluation of Internet Resources for Teaching Trend and Variability in Bivariate Data
ERIC Educational Resources Information Center
Forster, Pat
2007-01-01
A search on the Internet for resources for teaching statistics yields multiple sites with data sets, projects, worksheets, applets, and software. Often these are made available without information on how they might benefit learning. This paper addresses potential benefits from resources that target trend and variability relationships in bivariate…
Roelof, E.C.; Sibeck, D.G.
1993-12-01
The authors present a new method for determining the shape of the magnetopause as a bivariate function of the hourly averaged solar wind dynamic pressure (p) and the north-south component of the interplanetary magnetic field (IMF) B_z. They represent the magnetopause (for X_GSE > -40 R_E) as an ellipsoid of revolution in solar-wind-aberrated coordinates and express the (p, B_z) dependence of each of the three ellipsoid parameters as a second-order (6-term) bivariate expansion in ln p and B_z. The authors define 12 overlapping bins in a normalized dimensionless (p, B_z) "control space" and fit an ellipsoid to those magnetopause crossings having (p, B_z) values within each bin. They also calculate the bivariate (ln p, B_z) moments to second order over each bin in control space. They can then calculate the six control-space expansion coefficients for each of the three ellipsoid parameters in configuration space. From these coefficients they can derive useful diagnostics of the magnetopause shape as joint functions of p and B_z: the aspect ratio of the ellipsoid's minor-to-major axes; the flank distance, radius of curvature, and flaring angle (at X_GSE = 0); and the subsolar distance and radius of curvature. The authors confirm and quantify previous results that during periods of southward B_z the subsolar magnetopause moves inward, while at X_GSE = 0 the flank magnetopause moves outward and the flaring angle increases. These changes are most pronounced during periods of low pressure, wherein all have a dependence on B_z that is stronger and functionally different for B_z southward as compared to B_z northward. In contrast, all these changes are much less sensitive to IMF B_z at the highest pressures. 44 refs., 22 figs., 6 tabs.
Li, Chao; Singh, Vijay P.; Mishra, Ashok K.
2013-02-06
This paper presents an improved bivariate mixed distribution, which is capable of modeling the dependence of daily rainfall from two distinct sources (e.g., rainfall from two stations, two consecutive days, or two instruments such as satellite and rain gauge). The distribution couples an existing framework for building a bivariate mixed distribution, the theory of copulae, and a hybrid marginal distribution. Contributions of the improved distribution are twofold. One is the appropriate selection of the bivariate dependence structure from a wider admissible choice (10 candidate copula families). The other is the introduction of a marginal distribution capable of better representing low to moderate values as well as extremes of daily rainfall. Among several applications of the improved distribution, particularly presented here is its utility for single-site daily rainfall simulation. Rather than simulating rainfall occurrences and amounts separately, the developed generator unifies the two processes by generalizing daily rainfall as a Markov process with autocorrelation described by the improved bivariate mixed distribution. The generator is first tested on a sample station in Texas. Results reveal that the simulated and observed sequences are in good agreement with respect to essential characteristics. Then, extensive simulation experiments are carried out to compare the developed generator with three other alternative models: the conventional two-state Markov chain generator, the transition probability matrix model, and the semi-parametric Markov chain model with kernel density estimation for rainfall amounts. Analyses establish that overall the developed generator is capable of reproducing characteristics of historical extreme rainfall events and is adept at extrapolating rare values beyond the upper range of available observed data. Moreover, it automatically captures the persistence of rainfall amounts on consecutive wet days in a relatively natural and easy way.
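The copula-plus-mixed-marginal construction described in this record can be sketched in simplified form: a Gaussian copula (the paper selects from 10 candidate copula families) paired with a zero-inflated gamma standing in for the hybrid marginal. All parameter values below are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 5000
rho = 0.6      # assumed copula dependence between the two rainfall sources
p_wet = 0.4    # assumed wet-day probability (the mixed distribution's mass above zero)

# Gaussian copula: correlated uniforms for the two sources
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)

def mixed_ppf(u, p_wet, shape=0.7, scale=8.0):
    """Inverse CDF of a zero-inflated gamma: P(X = 0) = 1 - p_wet,
    wet amounts follow a Gamma(shape, scale) distribution."""
    wet = u > 1.0 - p_wet
    x = np.zeros_like(u)
    # rescale the uniform within the wet portion of the CDF before inverting
    x[wet] = stats.gamma.ppf((u[wet] - (1.0 - p_wet)) / p_wet, shape, scale=scale)
    return x

day1 = mixed_ppf(u[:, 0], p_wet)
day2 = mixed_ppf(u[:, 1], p_wet)
```

Because dependence enters only through the copula, the marginal (dry-day probability, wet-amount distribution) can be changed without touching the dependence structure, which is the modularity the paper exploits.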
NASA Astrophysics Data System (ADS)
Gong, Maozhen
Selecting an appropriate prior distribution is a fundamental issue in Bayesian statistics. In this dissertation, under the framework provided by Berger and Bernardo, I derive the reference priors for several models, including: Analysis of Variance (ANOVA)/Analysis of Covariance (ANCOVA) models with a categorical variable under common ordering constraints, and the conditionally autoregressive (CAR) and simultaneous autoregressive (SAR) models in which a spatial autoregression parameter ρ is considered. The performance of reference priors for ANOVA/ANCOVA models is evaluated by simulation studies with comparisons to Jeffreys' prior and Least Squares Estimation (LSE). The priors are then illustrated in a Bayesian model of the "Risk of Type 2 Diabetes in New Mexico" data, where the relationship between the type 2 diabetes risk (through Hemoglobin A1c) and different smoking levels is investigated. In both simulation studies and real data set modeling, the reference priors that incorporate internal order information show good performance and can be used as default priors. The reference priors for the CAR and SAR models are also illustrated in the "1999 SAT State Average Verbal Scores" data with a comparison to a Uniform prior distribution. Due to the complexity of the reference priors for both CAR and SAR models, only a portion (12 states in the Midwest) of the original data set is considered. The reference priors can give a different marginal posterior distribution compared to a Uniform prior, which provides an alternative for prior specification for areal data in spatial statistics.
Hens, N; Wienke, A; Aerts, M; Molenberghs, G
2009-09-30
Frailty models are often used to study individual heterogeneity in multivariate survival analysis. Whereas the shared frailty model is widely applied, the correlated frailty model has gained attention because it relaxes the restriction that unobserved factors act identically within clusters. Estimating frailty models is not straightforward due to various types of censoring. In this paper, we study the behavior of the bivariate-correlated gamma frailty model for type I interval-censored data, better known as current status data. We show that applying a shared rather than a correlated frailty model to cross-sectionally collected serological data on hepatitis A and B leads to biased estimates for the baseline hazard and variance parameters. PMID:19591117
NASA Astrophysics Data System (ADS)
Mohsin, Muhammad; Kazianka, Hannes; Pilz, Jürgen
2013-04-01
Modeling the acidity in rainfall at certain locations is a complex task because of different environmental conditions for different rainfall regimes and the large variability in the covariates involved. In this paper, the concentration of acidity and major ions in rainfall in the UK is analyzed by assuming a bivariate pseudo-Gamma distribution. The model parameters are estimated by the maximum likelihood method and the goodness of fit is checked. Furthermore, the non-informative Jeffreys prior for the distribution parameters is derived, and a hybrid Gibbs sampling strategy is proposed to sample the corresponding posterior for conducting an objective Bayesian analysis. Finally, related quantities such as the deposition flux density are derived; the general pattern of the observed data appears to follow the fitted densities closely.
NASA Astrophysics Data System (ADS)
Haruna, Taichi; Nakajima, Kohei
2013-05-01
Transfer entropy is a measure of the magnitude and the direction of information flow between jointly distributed stochastic processes. In recent years, permutation analogues have been considered in the literature to estimate the transfer entropy by counting the number of occurrences of orderings of values, not the values themselves. It has been suggested that the method of permutation is easy to implement, computationally inexpensive, and robust to noise when applied to real-world time series data. In this paper, we initiate a theoretical treatment of the corresponding rates. In particular, we consider the transfer entropy rate and its permutation analogue, the symbolic transfer entropy rate, and show that they are equal for any bivariate finite-alphabet stationary ergodic Markov process. This result is an illustration of the duality method introduced in [T. Haruna, K. Nakajima, Physica D 240, 1370 (2011)]. We also discuss the relationship among the transfer entropy rate, the time-delayed mutual information rate, and their permutation analogues.
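The counting-of-orderings idea behind symbolic transfer entropy can be sketched as a naive plug-in estimator: map sliding windows to ordinal patterns, then compute the empirical conditional mutual information on the symbol sequences. This is an illustrative estimator, not the paper's theoretical rate construction; the coupling example and window length m=3 are assumptions:

```python
import numpy as np
from collections import Counter
from math import log2

def symbols(series, m=3):
    """Map each length-m window to its ordinal pattern (tuple of ranks)."""
    return [tuple(int(r) for r in np.argsort(series[i:i + m]))
            for i in range(len(series) - m + 1)]

def symbolic_transfer_entropy(x, y, m=3):
    """Plug-in estimate (in bits) of the transfer entropy from y to x,
    computed on ordinal-pattern (permutation) symbol sequences."""
    sx, sy = symbols(x, m), symbols(y, m)
    n = len(sx) - 1
    triples = Counter((sx[t + 1], sx[t], sy[t]) for t in range(n))
    pairs_xy = Counter((sx[t], sy[t]) for t in range(n))
    pairs_xx = Counter((sx[t + 1], sx[t]) for t in range(n))
    singles = Counter(sx[t] for t in range(n))
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(x0, y0)]            # p(x1 | x0, y0)
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]   # p(x1 | x0)
        te += p_joint * log2(p_cond_xy / p_cond_x)
    return te

# Toy example: y drives x with a one-step lag
rng = np.random.default_rng(2)
y = rng.normal(size=2000)
x = np.concatenate(([0.0], y[:-1])) + 0.3 * rng.normal(size=2000)
te_yx = symbolic_transfer_entropy(x, y)   # information flow y -> x
te_xy = symbolic_transfer_entropy(y, x)   # information flow x -> y
```

With the driver-follower coupling above, the estimate in the causal direction exceeds the reverse one, which is the asymmetry transfer entropy is designed to capture.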
Carr, J.R.; Deng, E.D.
1987-01-01
Indicator cokriging is an alternative to disjunctive kriging for estimation of spatial distributions. One way to determine which of these techniques is more accurate for estimation of spatial distributions is to apply each to a particular type of data. A procedure is developed for evaluation of disjunctive kriging and indicator cokriging for such an application. Application of this procedure to earthquake ground motion data found disjunctive kriging to be at least as accurate as indicator cokriging for estimation of spatial distributions of peak horizontal acceleration. Indicator cokriging was superior for all other types of earthquake ground motion data.
NASA Astrophysics Data System (ADS)
Tehrany, Mahyat Shafapour; Pradhan, Biswajeet; Jebur, Mustafa Neamah
2013-11-01
A decision tree (DT) machine learning algorithm was used to map flood-susceptible areas in Kelantan, Malaysia. We used an ensemble frequency ratio (FR) and logistic regression (LR) model in order to overcome the weak points of LR. The combined FR-LR method was used to map the susceptible areas in Kelantan, Malaysia. The results of both methods were compared and their efficiency was assessed. The conditioning factors most influencing flooding were identified.
ERIC Educational Resources Information Center
King, Daniel W.; King, Lynda A.; McArdle, John J.; Shalev, Arieh Y.; Doron-LaMarca, Susan
2009-01-01
Depression and posttraumatic stress disorder (PTSD) are highly comorbid conditions that may arise following exposure to psychological trauma. This study examined their temporal sequencing and mutual influence using bivariate latent difference score structural equation modeling. Longitudinal data from 182 emergency room patients revealed level of…
Daly, P.J.; Zhang, C.T.; Bhattacharyya, P.
1996-11-01
Large multidetector γ-ray arrays, which can separate the prompt γ-ray cascades within a single fission product nucleus (of moderate yield) from the bulk of prompt γ-rays, have now opened new prospects for studies of yrast excitations in ¹³²Sn and the few-valence-particle nuclei around it. Measurements were performed at Eurogam II using a ²⁴⁸Cm source. This paper features the results for the two- and three-valence-proton N=82 isotones ¹³⁴Te and ¹³⁵I, which exhibit simple, clear-cut excitation modes resembling ²¹⁰Po and ²¹¹At, their well-studied N=126 counterparts in the ²⁰⁸Pb region. A search was made for new ¹³⁵I transitions by setting a single coincidence gate on 1134 keV γ-rays; strong 288, 572, 690, 725, 1661, 1695, and 2247 keV coincident γ-rays were identified as ¹³⁵I γ-rays. In summary, yrast excitations to above 5.5 MeV excitation energy in the 2- and 3-proton nuclei ¹³⁴Te and ¹³⁵I have been established and interpreted with the help of shell-model calculations using empirical nucleon-nucleon interactions. This opens possibilities for exploring simple excitation modes in the ¹³²Sn region under conditions comparable with, but not identical to, those in the well-studied ²⁰⁸Pb region.
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Bernauer, E. M.; Erti, A. C.
1995-01-01
Submaximal exercise (61±3% peak VO2) metabolism was measured before (AMB day -2) and on bed rest days 4, 11, and 25 in 19 healthy men (32-42 yr) allocated into no-exercise (NOE, N=5) control, isotonic exercise (ITE, N=7), and isokinetic exercise (IKE, N=7) training groups. Training was conducted supine for two 30-min periods/d for 6 d/wk: ITE was 60-90% peak VO2; IKE was peak knee flexion-extension at 100 deg/s. Supine submaximal exercise VO2 decreased significantly (*p<0.05) by 10.3%* with ITE and by 7.3%* with IKE; similar to the submaximal cardiac output (Q) change of -14.5%* (ITE) and -20.3%* (IKE), but different from the change in peak VO2 (+1.4% with ITE and -10.2%* with IKE) and plasma volume of -3.7% (ITE) and -18.0%* (IKE). Thus, the reduction of submaximal VO2 during prolonged bed rest appears to follow submaximal Q but is not related to the change in peak VO2 or plasma volume.
Moreno Júnior, H; Cezareti, M L; Piçarro, I C; Barros Neto, T L; Kasinski, N; Martinez Filho, E E; Saragoça, M A
1995-10-01
Intense physical training through isotonic exercises has controversial effects in individuals with moderate to severe hypertension. In this study, normotensive Wistar rats and rats with renovascular hypertension (Goldblatt II) were subjected to intense physical exercise involving two 50-min swimming sessions per day for a period of 12 weeks. At the end of the study, we evaluated the effect of training on arterial pressure, the capacity for aerobic work and cardiac function. Our results demonstrate that intense physical training has no effect on the arterial blood pressure of normotensive rats or of animals with moderate renovascular hypertension. Hypertensive animals with cardiac hypertrophy require a greater period of training in order to attain the same capacity for aerobic work as normotensive rats. This difference may result from an inability of the former animals to increase cardiac compliance, thereby impeding more extensive usage of the Frank-Starling mechanism to subsequently increase the systolic cardiac performance. Cardiac hypertrophy induced by exercise did not summate with that induced by arterial hypertension. Physical exercise normalized the end-diastolic left ventricular pressure in hypertensive animals without any corresponding increase in the compliance of the chamber. The first derivative of left ventricular pulse pressure (+/- dP/dt) was greater in the hypertensive trained group than in the hypertensive sedentary rats. These observations suggest that a systolic dysfunction of the left ventricle involving an elevated residual volume secondary to arterial hypertension may be corrected by physical exercise such as swimming. PMID:7584822
Cosmetic Plastic Surgery Statistics
2014 Cosmetic Plastic Surgery Statistics: Cosmetic Procedure Trends (2014 Plastic Surgery Statistics Report, American Society of Plastic Surgeons).
Lemonias, Jenna J.; Schiminovich, David; Catinella, Barbara; Heckman, Timothy M.; Moran, Sean M.
2013-10-20
We present the bivariate neutral atomic hydrogen (H I)-stellar mass function (HISMF) φ(M_HI, M_*) for massive (log M_*/M_☉ > 10) galaxies derived from a sample of 480 local (0.025 < z < 0.050) galaxies observed in H I at Arecibo as part of the GALEX Arecibo SDSS Survey. We fit six different models to the HISMF and find that a Schechter function that extends down to a 1% H I gas fraction, with an additional fractional contribution below that limit, is the best parameterization of the HISMF. We calculate Ω_HI(M_* > 10^10) and find that massive galaxies contribute 41% of the H I density in the local universe. In addition to the binned HISMF, we derive a continuous bivariate fit, which reveals that the Schechter parameters vary only weakly with stellar mass: M_HI^*, the characteristic H I mass, scales as M_*^0.39; α, the slope of the HISMF at moderate H I masses, scales as M_*^0.07; and f, the fraction of galaxies with H I gas fraction greater than 1%, scales as M_*^-0.24. The variation of f with stellar mass should be a strong constraint for numerical simulations. To understand the physical mechanisms that produce the shape of the HISMF, we redefine the parameters of the Schechter function as explicit functions of stellar mass and star formation rate (SFR) to produce a trivariate fit. This analysis reveals strong trends with SFR. While M_HI^* varies weakly with stellar mass and SFR (M_HI^* ∝ M_*^0.22, M_HI^* ∝ SFR^-0.03), α is a stronger function of both stellar mass and especially SFR (α ∝ M_*^0.47, α ∝ SFR^0.95). The HISMF is a crucial tool that can be used to constrain cosmological galaxy simulations, test observational predictions of the H I content of populations of galaxies, and identify galaxies whose properties deviate from average trends.
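The Schechter parameterization referred to in this record has a standard closed form in log-mass space. A sketch with made-up parameter values (not the fitted HISMF values from the survey):

```python
import numpy as np

def schechter(logm, phi_star, logm_star, alpha):
    """Schechter mass function dN/dlog10(M): a power law of slope alpha
    below the characteristic mass, with an exponential cutoff above it."""
    x = 10.0 ** (logm - logm_star)
    return np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# Illustrative values only (not the fitted HISMF parameters from the survey)
masses = np.linspace(8.0, 11.0, 7)
density = schechter(masses, phi_star=1e-3, logm_star=9.5, alpha=-1.3)
```

In the bivariate fit described above, phi_star, logm_star, and alpha themselves become power-law functions of stellar mass (and of SFR in the trivariate fit).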
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Ertl, A. C.; Bernauer, E. M.
1996-01-01
BACKGROUND: Maintaining intermediary metabolism is necessary for the health and well-being of astronauts on long-duration spaceflights. While peak oxygen uptake (VO2) is consistently decreased during prolonged bed rest, submaximal VO2 is either unchanged or decreased. METHODS: Submaximal exercise metabolism (61 +/- 3% peak VO2) was measured during ambulation (AMB day-2) and on bed rest days 4, 11, and 25 in 19 healthy men (32-42 yr) allocated into no exercise (NOE, N = 5) control, and isotonic exercise (ITE, N = 7) and isokinetic exercise (IKE, N = 7) training groups. Exercise training was conducted supine for two 30-min periods per day for 6 d per week: ITE training was intermittent at 60-90% peak VO2; IKE training was 10 sets of 5 repetitions of peak knee flexion-extension force at a velocity of 100 degrees s-1. Cardiac output was measured with the indirect Fick CO2 method, and plasma volume with Evans blue dye dilution. RESULTS: Supine submaximal exercise VO2 decreased significantly (*p < 0.05) by 10.3%* with ITE and by 7.3%* with IKE; similar to the submaximal cardiac output decrease of 14.5%* (ITE) and 20.3%* (IKE), but different from change in peak VO2 (+1.4% with ITE and -10.2%* with IKE) and decrease in plasma volume of -3.7% (ITE) and -18.0%* (IKE). Reduction of submaximal VO2 during bed rest correlated 0.79 (p < 0.01) with submaximal Qc, but was not related to change in peak VO2 or plasma volume. CONCLUSION: Reduction in submaximal oxygen uptake during prolonged bed rest is related to decrease in exercise but not resting cardiac output; perturbations in active skeletal muscle metabolism may be involved.
Beauchamp, Jonathan P; Cesarini, David; Johannesson, Magnus; Lindqvist, Erik; Apicella, Coren
2011-03-01
A robust positive correlation between height and intelligence, as measured by IQ tests, has been established in the literature. This paper makes several contributions toward establishing the causes of this association. First, we extend the standard bivariate ACE model to account for assortative mating. The more general theoretical framework provides several key insights, including formulas to decompose a cross-trait genetic correlation into components attributable to assortative mating and pleiotropy and to decompose a cross-trait within-family correlation. Second, we use a large dataset of male twins drawn from Swedish conscription records and examine how well genetic and environmental factors explain the association between (i) height and intelligence and (ii) height and military aptitude, a professional psychologist's assessment of a conscript's ability to deal with wartime stress. For both traits, we find suggestive evidence of a shared genetic architecture with height, but we demonstrate that point estimates are very sensitive to assumed degrees of assortative mating. Third, we report a significant within-family correlation between height and intelligence (ρ̂ = 0.10), suggesting that pleiotropy might be at play. PMID:20603722
Pugach, Oksana; Hedeker, Donald; Mermelstein, Robin
2014-01-01
A bivariate mixed-effects location-scale model is proposed for estimation of means, variances, and covariances of two continuous outcomes measured concurrently in time and repeatedly over subjects. Modeling the two outcomes jointly allows examination of the between-subject (BS) and within-subject (WS) association between the outcomes and whether the associations are related to covariates. The variance-covariance matrices of the BS and WS effects are modeled in terms of covariates, explaining BS and WS heterogeneity. The proposed model relaxes assumptions on the homogeneity of the WS and BS variances. Furthermore, the WS variance models are extended by including random scale effects. Data from a natural history study on adolescent smoking are used for illustration. 461 students, from 9th and 10th grades, reported on their mood at random prompts during seven consecutive days. This resulted in 14,105 prompts with an average of 30 responses per student. The two outcomes considered were a subject's positive affect and a measure of how tired and bored they were feeling. Results showed that the WS association of the outcomes was negative and significantly associated with several covariates. The BS and WS variances were heterogeneous for both outcomes, and the variances of the random scale effects were significantly different from zero. PMID:25541578
Masangwi, Salule; Ferguson, Neil; Grimason, Anthony; Morse, Tracy; Kazembe, Lawrence
2015-07-01
Developing countries face a huge burden of infectious diseases, a number of which co-exist. This paper estimates the pattern and variation of malaria and diarrhea coexistence in Chikhwawa, a district in Southern Malawi, using bivariate multilevel modelling with Bayesian estimation. A probit link was employed to examine hierarchically built data from a survey of individuals (n = 6,727) nested within households (n = 1,380) nested within communities (n = 33). Results show significant malaria [σ²μ₁ = 0.901 (95% CI: 0.746, 1.056)] and diarrhea [σ²μ₂ = 1.009 (95% CI: 0.860, 1.158)] variations, with a strong correlation between them [r(¹,²)μ = 0.565], at the household level. There are significant malaria [σ²ν₁ = 0.053 (95% CI: 0.018, 0.088)] and diarrhea [σ²ν₂ = 0.099 (95% CI: 0.030, 0.168)] variations at the community level, but with a small correlation [r(¹,²)ν = 0.124] between them. There is also a significant correlation between malaria and diarrhea at the individual level [r(¹,²)e = 0.241]. These results suggest a close association between reported malaria-like illness and diarrheal illness, especially at the household and individual levels, in Southern Malawi. PMID:26197332
Hao, Jingcan; Wang, Wenyu; Wen, Yan; Xiao, Xiao; He, Awen; Guo, Xiong; Yang, Tielin; Liu, Xiaogang; Shen, Hui; Chen, Xiangding; Tian, Qing; Deng, Hong-Wen; Zhang, Feng
2016-01-01
Kashin-Beck disease (KBD) is a chronic osteoarthropathy, which manifests as joint deformities and growth retardation. Only a few genetic studies of growth retardation associated with the KBD have been carried out by now. In this study, we conducted a two-stage bivariate genome-wide association study (BGWAS) of the KBD using joint deformities and body height as study phenotypes, totally involving 2,417 study subjects. Articular cartilage specimens from 8 subjects were collected for immunohistochemistry. In the BGWAS, ADAM12 gene achieved the most significant association (rs1278300 p-value = 9.25 × 10−9) with the KBD. Replication study observed significant association signal at rs1278300 (p-value = 0.007) and rs1710287 (p-value = 0.002) of ADAM12 after Bonferroni correction. Immunohistochemistry revealed significantly decreased expression level of ADAM12 protein in the KBD articular cartilage (average positive chondrocyte rate = 47.59 ± 7.79%) compared to healthy articular cartilage (average positive chondrocyte rate = 64.73 ± 5.05%). Our results suggest that ADAM12 gene is a novel susceptibility gene underlying both joint destruction and growth retardation of the KBD. PMID:27545300
Lindquist, Martin A.; Xu, Yuting; Nebel, Mary Beth; Caffo, Brian S.
2014-01-01
To date, most functional Magnetic Resonance Imaging (fMRI) studies have assumed that the functional connectivity (FC) between time series from distinct brain regions is constant across time. However, recently, there has been increased interest in quantifying possible dynamic changes in FC during fMRI experiments, as it is thought this may provide insight into the fundamental workings of brain networks. In this work we focus on the specific problem of estimating the dynamic behavior of pair-wise correlations between time courses extracted from two different regions of the brain. We critique the commonly used sliding-windows technique, and discuss some alternative methods used to model volatility in the finance literature that could also prove useful in the neuroimaging setting. In particular, we focus on the Dynamic Conditional Correlation (DCC) model, which provides a model-based approach towards estimating dynamic correlations. We investigate the properties of several techniques in a series of simulation studies and find that DCC achieves the best overall balance between sensitivity and specificity in detecting dynamic changes in correlations. We also investigate its scalability beyond the bivariate case to demonstrate its utility for studying dynamic correlations between more than two brain regions. Finally, we illustrate its performance in an application to test-retest resting state fMRI data. PMID:24993894
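The Dynamic Conditional Correlation model highlighted in this record updates a quasi-correlation matrix recursively. A minimal sketch of the DCC(1,1) recursion for two standardized residual series follows; the decay parameters a and b are fixed by hand here, whereas in practice (and in the paper's setting) they are estimated, typically by quasi-maximum likelihood:

```python
import numpy as np

def dcc_correlations(eps, a=0.05, b=0.93):
    """DCC(1,1) recursion: Q_t = (1-a-b)*S + a*e_{t-1}e_{t-1}' + b*Q_{t-1};
    the conditional correlation rho_t is the normalized off-diagonal of Q_t."""
    S = np.corrcoef(eps.T)                  # unconditional correlation target
    Q = S.copy()
    rho = np.empty(len(eps))
    for t in range(len(eps)):
        d = np.sqrt(np.diag(Q))
        rho[t] = Q[0, 1] / (d[0] * d[1])    # correlation implied by current Q
        e = eps[t][:, None]
        Q = (1 - a - b) * S + a * (e @ e.T) + b * Q
    return rho

# Toy standardized residuals from two correlated series
rng = np.random.default_rng(3)
eps = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=500)
rho = dcc_correlations(eps)
```

Because each update is a convex combination of positive semi-definite matrices, the implied rho_t always stays within [-1, 1], one reason DCC is preferred to sliding-window correlations.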
Shaikh, Masood Ali
2016-04-01
Statistical tests help infer meaningful conclusions from studies conducted and data collected. This descriptive study analyzed the types of statistical tests used and the statistical software utilized for analysis, as reported in the original articles published in 2014 by the three Medline-indexed journals of Pakistan. Cumulatively, 466 original articles were published in 2014. The most frequently reported statistical tests in all three journals were bivariate parametric and non-parametric tests, i.e., those involving comparisons between two groups, e.g., the Chi-square test, t-test, and various types of correlations. Cumulatively, 201 (43.1%) articles used these tests. SPSS was the primary choice for statistical analysis, as it was exclusively used in 374 (80.3%) original articles. There has been a substantial increase in the number of articles published, and in the sophistication of statistical tests used, in the Pakistani Medline-indexed journals in 2014 compared to 2007. PMID:27122277
Predict! Teaching Statistics Using Informational Statistical Inference
ERIC Educational Resources Information Center
Makar, Katie
2013-01-01
Statistics is one of the most widely used topics for everyday life in the school mathematics curriculum. Unfortunately, the statistics taught in schools focuses on calculations and procedures before students have a chance to see it as a useful and powerful tool. Researchers have found that a dominant view of statistics is as an assortment of tools…
Statistics Poker: Reinforcing Basic Statistical Concepts
ERIC Educational Resources Information Center
Leech, Nancy L.
2008-01-01
Learning basic statistical concepts does not need to be tedious or dry; it can be fun and interesting through cooperative learning in the small-group activity of Statistics Poker. This article describes a teaching approach for reinforcing basic statistical concepts that can help students who have high anxiety and makes learning and reinforcing…
Liu, S. H.; Hamilton, J. H.; Ramayya, A. V.; Hwang, J. K.; Covello, A.; Itaco, N.; Gargano, A.; Stone, N. J.; Daniel, A. V.; Luo, Y. X.; Rasmussen, J. O.; Ter-Akopian, G. M.; Zhu, S. J.; Ma, W. C.
2010-01-15
The g factor of the 15/2⁻ state in ¹³⁷Xe was measured for the first time by using a newly developed technique for measuring angular correlations with Gammasphere. Spins and parities were assigned to several levels in the N=83 isotones ¹³⁵Te, ¹³⁶I, ¹³⁷Xe, and ¹³⁸Cs. The g factor calculated in the shell-model framework is in good agreement with the one measured in the present work. Shell-model calculations also support our spin-parity assignments.
Neuroendocrine Tumor: Statistics
Monotone Bivariate Interpolation Code
Energy Science and Technology Software Center (ESTSC)
1992-08-27
BIMOND is a FORTRAN 77 subroutine for piecewise bicubic interpolation to data on a rectangular mesh, which reproduces the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting.
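BIMOND's key property is monotonicity preservation. The 1D principle it builds on (Fritsch-Carlson-type monotone piecewise cubics) can be illustrated with SciPy's PCHIP interpolator; this is an illustration of the idea, not the BIMOND routine itself, and the data values are made up:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone increasing data with a sharp bend: an unconstrained cubic
# spline would overshoot here, a monotone piecewise cubic cannot.
x = np.array([0.0, 1.0, 2.0, 3.0, 10.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 10.0])

f = PchipInterpolator(x, y)      # Fritsch-Carlson-type monotone cubic
xs = np.linspace(0.0, 10.0, 401)
ys = f(xs)
```

BIMOND extends this construction to a rectangular mesh by constraining the bicubic patch derivatives so monotonicity holds along both axes.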
Statistical Reference Datasets
National Institute of Standards and Technology Data Gateway
Statistical Reference Datasets (Web, free access) The Statistical Reference Datasets project is also supported by the Standard Reference Data Program. The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software.
Kiss, Huba J.; Németh, János
2015-01-01
Dry eye complaints are ranked as the most frequent symptoms of patients visiting ophthalmologists. Conjunctivochalasis is a common dry eye disorder which can cause an unstable tear film and ocular discomfort. Severe conjunctivochalasis, characterized by a high LId-Parallel COnjunctival Folds (LIPCOF) degree, usually requires surgical intervention, where a conservative therapy would be highly desirable. Here we examined the efficacy of a preservative-free, inorganic-salt-free, unit-dose artificial tear called Conheal, containing isotonic glycerol and 0.015% sodium hyaluronate, in a prospective, unmasked, self-controlled study involving 20 patients. The regular use of the glycerol/hyaluronate artificial tear over three months caused a significant improvement in the recorded parameters. Conjunctivochalasis decreased from a mean LIPCOF degree of 2.9±0.4 on both eyes to 1.4±0.6 on the right (median decrease of -2 points, 95% CI from -2.0 to -1.0) and to 1.4±0.7 on the left eye (median decrease of -1 point, 95% CI from -2.0 to -1.0) (p<0.001 for both sides). The tear film breakup time (TFBUT) lengthened from 4.8±1.9 seconds on both eyes to 5.9±2.3 seconds (mean increase of 1.1 seconds, 95% CI from 0.2 to 2.0) and 5.7±1.8 seconds (mean increase of 0.9 seconds, 95% CI from 0.3 to 1.5) on the right and left eyes, respectively (p(right eyes) = 0.020, p(left eyes) = 0.004). The corneal lissamine staining (Oxford Scheme grade) was reduced significantly (p<0.001) from 1.3±0.6 on the right and 1.4±0.6 on the left eye to 0.3±0.4 and 0.2±0.4, respectively. The Ocular Surface Disease Index (OSDI) questionnaire score indicating the subjective complaints of the patients also decreased from a mean value of 36.2±25.3 to 15.6±16.7 (p<0.001). In this study, the artificial tear Conheal decreased the grade of the conjunctivochalasis significantly after one month of regular use already, from the LIPCOF degree 3, considered as an indication for conjunctival surgery, to a
Pitfalls in statistical landslide susceptibility modelling
NASA Astrophysics Data System (ADS)
Schröder, Boris; Vorpahl, Peter; Märker, Michael; Elsenbeer, Helmut
2010-05-01
lack of explanatory information in the chosen set of predictor variables, the model residuals need to be checked for spatial autocorrelation. Therefore, we calculate spline correlograms. In addition, we investigate partial dependence plots and bivariate interaction plots, considering possible interactions between predictors, to improve model interpretation. Aiming at presenting this toolbox for model quality assessment, we investigate the influence of strategies for constructing training datasets for statistical models on model quality.
Uterine Cancer Statistics
Mathematical and statistical analysis
NASA Technical Reports Server (NTRS)
Houston, A. Glen
1988-01-01
The goal of the mathematical and statistical analysis component of RICIS is to research, develop, and evaluate mathematical and statistical techniques for aerospace technology applications. Specific research areas of interest include modeling, simulation, experiment design, reliability assessment, and numerical analysis.
Minnesota Health Statistics 1988.
ERIC Educational Resources Information Center
Minnesota State Dept. of Health, St. Paul.
This document comprises the 1988 annual statistical report of the Minnesota Center for Health Statistics. After introductory technical notes on changes in format, sources of data, and geographic allocation of vital events, an overview is provided of vital health statistics in all areas. Thereafter, separate sections of the report provide tables…
ERIC Educational Resources Information Center
Lenard, Christopher; McCarthy, Sally; Mills, Terence
2014-01-01
There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…
ERIC Educational Resources Information Center
Strasser, Nora
2007-01-01
Avoiding statistical mistakes is important for educators at all levels. Basic concepts will help you to avoid making mistakes using statistics and to look at data with a critical eye. Statistical data is used at educational institutions for many purposes. It can be used to support budget requests, changes in educational philosophy, changes to…
Statistical quality management
NASA Astrophysics Data System (ADS)
Vanderlaan, Paul
1992-10-01
Some aspects of statistical quality management are discussed. Quality has to be defined as a concrete, measurable quantity. The concepts of Total Quality Management (TQM), Statistical Process Control (SPC), and inspection are explained. In most cases SPC is better than inspection. It can be concluded that statistics has great possibilities in the field of TQM.
Statistical Conclusion Validity: Some Common Threats and Simple Remedies
García-Pérez, Miguel A.
2012-01-01
The ultimate goal of research is to produce dependable knowledge or to provide the evidence that may guide practical decisions. Statistical conclusion validity (SCV) holds when the conclusions of a research study are founded on an adequate analysis of the data, generally meaning that adequate statistical methods are used whose small-sample behavior is accurate, besides being logically capable of providing an answer to the research question. Compared to the three other traditional aspects of research validity (external validity, internal validity, and construct validity), interest in SCV has recently grown on evidence that inadequate data analyses are sometimes carried out which yield conclusions that a proper analysis of the data would not have supported. This paper discusses evidence of three common threats to SCV that arise from widespread recommendations or practices in data analysis, namely, the use of repeated testing and optional stopping without control of Type-I error rates, the recommendation to check the assumptions of statistical tests, and the use of regression whenever a bivariate relation or the equivalence between two variables is studied. For each of these threats, examples are presented and alternative practices that safeguard SCV are discussed. Educational and editorial changes that may improve the SCV of published research are also discussed. PMID:22952465
Statistical Analysis of Single-Trial Granger Causality Spectra
Brovelli, Andrea
2012-01-01
Granger causality analysis is becoming central for the analysis of interactions between neural populations and oscillatory networks. However, it is currently unclear whether single-trial estimates of Granger causality spectra can be used reliably to assess directional influence. We addressed this issue by combining single-trial Granger causality spectra with statistical inference based on general linear models. The approach was assessed on synthetic and neurophysiological data. Synthetic bivariate data was generated using two autoregressive processes with unidirectional coupling. We simulated two hypothetical experimental conditions: the first mimicked a constant and unidirectional coupling, whereas the second modelled a linear increase in coupling across trials. The statistical analysis of single-trial Granger causality spectra, based on t-tests and linear regression, successfully recovered the underlying pattern of directional influence. In addition, we characterised the minimum number of trials and coupling strengths required for significant detection of directionality. Finally, we demonstrated the relevance for neurophysiology by analysing two local field potentials (LFPs) simultaneously recorded from the prefrontal and premotor cortices of a macaque monkey performing a conditional visuomotor task. Our results suggest that the combination of single-trial Granger causality spectra and statistical inference provides a valuable tool for the analysis of large-scale cortical networks and brain connectivity. PMID:22649482
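The synthetic setup the abstract describes, two autoregressive processes with unidirectional coupling assessed by regression, can be sketched as follows (a minimal NumPy illustration of a time-domain Granger measure, not the authors' spectral estimator; the AR coefficient and coupling strength are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_unidirectional(n=4000, a=0.5, c=0.4):
    """Two AR(1) processes in which x drives y at lag 1; y never drives x."""
    x = np.zeros(n)
    y = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + rng.standard_normal()
        y[t] = a * y[t - 1] + c * x[t - 1] + rng.standard_normal()
    return x, y

def residual_variance(target, *lagged):
    """Residual variance of an OLS regression of target on lagged predictors."""
    X = np.column_stack([np.ones(len(target))] + list(lagged))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

x, y = simulate_unidirectional()
# Granger measure: log ratio of restricted to full residual variance.
# Positive values mean the other series' past improves prediction.
gc_x_to_y = np.log(residual_variance(y[1:], y[:-1]) /
                   residual_variance(y[1:], y[:-1], x[:-1]))
gc_y_to_x = np.log(residual_variance(x[1:], x[:-1]) /
                   residual_variance(x[1:], x[:-1], y[:-1]))
```

With unidirectional coupling, gc_x_to_y is clearly positive while gc_y_to_x stays near zero; repeating this over trials yields the single-trial estimates to which t-tests or linear regression can then be applied.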
Explorations in statistics: statistical facets of reproducibility.
Curran-Everett, Douglas
2016-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eleventh installment of Explorations in Statistics explores statistical facets of reproducibility. If we obtain an experimental result that is scientifically meaningful and statistically unusual, we would like to know that our result reflects a general biological phenomenon that another researcher could reproduce if (s)he repeated our experiment. But more often than not, we may learn this researcher cannot replicate our result. The National Institutes of Health and the Federation of American Societies for Experimental Biology have created training modules and outlined strategies to help improve the reproducibility of research. These particular approaches are necessary, but they are not sufficient. The principles of hypothesis testing and estimation are inherent to the notion of reproducibility in science. If we want to improve the reproducibility of our research, then we need to rethink how we apply fundamental concepts of statistics to our science. PMID:27231259
Predicting radiotherapy outcomes using statistical learning techniques*
El Naqa, Issam; Bradley, Jeffrey D; Lindsay, Patricia E; Hope, Andrew J; Deasy, Joseph O
2013-01-01
Radiotherapy outcomes are determined by complex interactions between treatment, anatomical and patient-related variables. A common obstacle to building maximally predictive outcome models for clinical practice is the failure to capture the potential complexity of heterogeneous variable interactions and applicability beyond institutional data. We describe a statistical learning methodology that can automatically screen for nonlinear relations among prognostic variables and generalize to data not seen before. In this work, we evaluate several types of linear and nonlinear kernels for generating interaction terms and approximating the treatment-response function. Examples of institutional datasets of esophagitis, pneumonitis and xerostomia endpoints were used. Furthermore, an independent RTOG dataset was used for ‘generalizability’ validation. We formulated the discrimination between risk groups as a supervised learning problem. The distribution of patient groups was initially analyzed using principal component analysis (PCA) to uncover potential nonlinear behavior. The performance of the different methods was evaluated using bivariate correlations and actuarial analysis. Over-fitting was controlled via cross-validation resampling. Our results suggest that a modified support vector machine (SVM) kernel method provided superior performance on leave-one-out testing compared to logistic regression and neural networks in cases where the data exhibited nonlinear behavior on PCA. For instance, in prediction of esophagitis and pneumonitis endpoints, which exhibited nonlinear behavior on PCA, the method provided 21% and 60% improvements, respectively. Furthermore, evaluation on the independent pneumonitis RTOG dataset demonstrated good generalizability beyond institutional data in contrast with other models. This indicates that the prediction of treatment response can be improved by utilizing nonlinear kernel methods for discovering important nonlinear interactions among model
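The core comparison in this methodology, a nonlinear-kernel SVM against logistic regression under cross-validation, can be sketched with scikit-learn on synthetic data (a generic illustration with an assumed two-moons dataset standing in for the institutional dose-volume/clinical features; the authors use a modified kernel and leave-one-out testing, whereas 10-fold scoring is shown here for brevity):

```python
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for outcome data with nonlinear class structure.
X, y = make_moons(n_samples=300, noise=0.25, random_state=0)

# Linear baseline versus RBF-kernel SVM, both scored by cross-validation
# so that the comparison controls for over-fitting.
logit_acc = cross_val_score(LogisticRegression(), X, y, cv=10).mean()
svm_acc = cross_val_score(SVC(kernel="rbf", C=1.0, gamma="scale"), X, y, cv=10).mean()
```

On data that are nonlinear in this way, the kernel method outperforms the linear baseline, mirroring the paper's finding for endpoints whose PCA projections exhibited nonlinear behavior.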
Predicting radiotherapy outcomes using statistical learning techniques
NASA Astrophysics Data System (ADS)
El Naqa, Issam; Bradley, Jeffrey D.; Lindsay, Patricia E.; Hope, Andrew J.; Deasy, Joseph O.
2009-09-01
Radiotherapy outcomes are determined by complex interactions between treatment, anatomical and patient-related variables. A common obstacle to building maximally predictive outcome models for clinical practice is the failure to capture the potential complexity of heterogeneous variable interactions and applicability beyond institutional data. We describe a statistical learning methodology that can automatically screen for nonlinear relations among prognostic variables and generalize to data not seen before. In this work, we evaluate several types of linear and nonlinear kernels for generating interaction terms and approximating the treatment-response function. Examples of institutional datasets of esophagitis, pneumonitis and xerostomia endpoints were used. Furthermore, an independent RTOG dataset was used for 'generalizability' validation. We formulated the discrimination between risk groups as a supervised learning problem. The distribution of patient groups was initially analyzed using principal component analysis (PCA) to uncover potential nonlinear behavior. The performance of the different methods was evaluated using bivariate correlations and actuarial analysis. Over-fitting was controlled via cross-validation resampling. Our results suggest that a modified support vector machine (SVM) kernel method provided superior performance on leave-one-out testing compared to logistic regression and neural networks in cases where the data exhibited nonlinear behavior on PCA. For instance, in prediction of esophagitis and pneumonitis endpoints, which exhibited nonlinear behavior on PCA, the method provided 21% and 60% improvements, respectively. Furthermore, evaluation on the independent pneumonitis RTOG dataset demonstrated good generalizability beyond institutional data in contrast with other models. This indicates that the prediction of treatment response can be improved by utilizing nonlinear kernel methods for discovering important nonlinear interactions among model
NASA Astrophysics Data System (ADS)
Grasso, M.; Anguiano, M.
2015-11-01
Neutron 2p and 1f spin-orbit splittings were recently measured in the isotones 37S and 35Si by (d,p) transfer reactions. Values were reported by using the major fragments of the states. An important reduction of the p splitting was observed, from 37S to 35Si, associated with a strong modification of the spin-orbit potential in the central region of the nucleus 35Si. We analyze 2p and 1f neutron spin-orbit splittings in the N = 20 isotones 40Ca, 36S, and 34Si. We employ several Skyrme and Gogny interactions to reliably isolate pure spin-orbit and tensor-induced contributions, within the mean-field approximation. We use interactions (i) without the tensor force, (ii) with the tensor force and with tensor parameters adjusted on top of existing parametrizations, and (iii) with the tensor force and with tensor and spin-orbit parameters adjusted simultaneously on top of existing parametrizations. We predict in cases (ii) and (iii) a non-negligible reduction of both p and f splittings, associated with neutron-proton tensor effects, from 40Ca to 36S. The two splittings are further decreased for the three types of interactions, going from 36S to 34Si. This reduction is produced by the spin-orbit force and is not affected by tensor-induced contributions. For both reductions, from 40Ca to 36S and from 36S to 34Si, we predict in all cases that the modification is more pronounced for p than for f splittings. The measurement of the centroids for neutron 2p and 1f states in the nuclei 36S and 34Si would be interesting to validate this prediction experimentally. We show the importance of using interactions of type (iii), because they provide p and f splittings in the nucleus 40Ca which are in agreement with the corresponding experimental values.
NASA Astrophysics Data System (ADS)
Schieve, William C.; Horwitz, Lawrence P.
2009-04-01
1. Foundations of quantum statistical mechanics; 2. Elementary examples; 3. Quantum statistical master equation; 4. Quantum kinetic equations; 5. Quantum irreversibility; 6. Entropy and dissipation: the microscopic theory; 7. Global equilibrium: thermostatics and the microcanonical ensemble; 8. Bose-Einstein ideal gas condensation; 9. Scaling, renormalization and the Ising model; 10. Relativistic covariant statistical mechanics of many particles; 11. Quantum optics and damping; 12. Entanglements; 13. Quantum measurement and irreversibility; 14. Quantum Langevin equation: quantum Brownian motion; 15. Linear response: fluctuation and dissipation theorems; 16. Time dependent quantum Green's functions; 17. Decay scattering; 18. Quantum statistical mechanics, extended; 19. Quantum transport with tunneling and reservoir ballistic transport; 20. Black hole thermodynamics; Appendix; Index.
Statistical distribution sampling
NASA Technical Reports Server (NTRS)
Johnson, E. S.
1975-01-01
Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.
Multivariate statistical approach to a data set of dioxin and furan contaminations in human milk
Lindstrom, G.U.M.; Sjostrom, M.; Swanson, S.E.; Furst, P.; Kruger, C.; Meemken, H.A.; Groebel, W.
1988-05-01
The levels of chlorinated dibenzodioxins (PCDDs) and dibenzofurans (PCDFs) in human milk have been of great concern since the discovery of the toxic 2,3,7,8-substituted isomers in milk of European origin. As knowledge of environmental contamination of human breast milk increases, questions will continue to be asked about possible risks from breast feeding. Before any recommendations can be made, there must be knowledge of contaminant levels in mothers' breast milk. Researchers have measured PCB and 17 different dioxins and furans in human breast milk samples. To date, the data have only been analyzed by univariate and bivariate statistical methods; however, to extract as much information as possible from this data set, multivariate statistical methods must be used. Here the authors present a multivariate analysis in which the relationships between the polychlorinated compounds and the personalia of the mothers have been studied. For the data analysis, partial least squares (PLS) modelling has been used.
Explorations in Statistics: Regression
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2011-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…
Multidimensional Visual Statistical Learning
ERIC Educational Resources Information Center
Turk-Browne, Nicholas B.; Isola, Phillip J.; Scholl, Brian J.; Treat, Teresa A.
2008-01-01
Recent studies of visual statistical learning (VSL) have demonstrated that statistical regularities in sequences of visual stimuli can be automatically extracted, even without intent or awareness. Despite much work on this topic, however, several fundamental questions remain about the nature of VSL. In particular, previous experiments have not…
ERIC Educational Resources Information Center
Huberty, Carl J.
An approach to statistical testing, which combines Neyman-Pearson hypothesis testing and Fisher significance testing, is recommended. The use of P-values in this approach is discussed in some detail. The author also discusses some problems which are often found in introductory statistics textbooks. The problems involve the definitions of…
Deconstructing Statistical Analysis
ERIC Educational Resources Information Center
Snell, Joel
2014-01-01
Using a very complex statistical analysis and research method for the sake of enhancing the prestige of an article or making a new product or service legitimate needs to be monitored and questioned for accuracy. 1) The more complicated the statistical analysis, and research the fewer the number of learned readers can understand it. This adds a…
Explorations in Statistics: Power
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of "Explorations in Statistics" revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four things affect…
ERIC Educational Resources Information Center
Huizingh, Eelko K. R. E.
2007-01-01
Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…
Vijayaraj, Veeraraghavan; Cheriyadat, Anil M; Bhaduri, Budhendra L; Vatsavai, Raju; Bright, Eddie A
2008-01-01
Statistical properties of high-resolution overhead images representing different land use categories are analyzed using various local and global statistical image properties based on the shape of the power spectrum, image gradient distributions, edge co-occurrence, and inter-scale wavelet coefficient distributions. The analysis was performed on a database of high-resolution (1 meter) overhead images representing a multitude of different downtown, suburban, commercial, agricultural and wooded exemplars. Various statistical properties relating to these image categories and their relationship are discussed. The categorical variations in power spectrum contour shapes, the unique gradient distribution characteristics of wooded categories, the similarity in edge co-occurrence statistics for overhead and natural images, and the unique edge co-occurrence statistics of downtown categories are presented in this work. Though previous work on natural image statistics has shown some of the unique characteristics for different categories, the relationships for overhead images are not well understood. The statistical properties of natural images were used in previous studies to develop prior image models, to predict and index objects in a scene and to improve computer vision models. The results from our research findings can be used to augment and adapt computer vision algorithms that rely on prior image statistics to process overhead images, calibrate the performance of overhead image analysis algorithms, and derive features for better discrimination of overhead image categories.
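The first of the statistics listed, the shape of the power spectrum, can be sketched as a radially averaged spectrum in NumPy (a minimal illustration on a synthetic 1/f field standing in for an image patch; natural and overhead scenes are well known to show roughly power-law spectral falloff, which is what the radial average exposes):

```python
import numpy as np

def radial_power_spectrum(img):
    """Radially averaged power spectrum of a 2D array (DC bin removed)."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    return (sums / counts)[1: min(h, w) // 2]   # drop DC, keep valid radii

# Synthetic "scene-like" field with a 1/f amplitude spectrum, standing in
# for a grayscale image patch.
rng = np.random.default_rng(0)
freqs = np.fft.fftfreq(128)
fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
amp = 1.0 / np.maximum(np.hypot(fy, fx), 1.0 / 128)
field = np.real(np.fft.ifft2(amp * np.fft.fft2(rng.standard_normal((128, 128)))))

spectrum = radial_power_spectrum(field)
radii = np.arange(1, len(spectrum) + 1)
# Log-log slope of the radial spectrum; ~ -2 for a 1/f amplitude spectrum.
slope = np.polyfit(np.log(radii), np.log(spectrum), 1)[0]
```

Comparing such slopes and contour shapes across patches is one way the categorical differences described in the abstract (e.g. downtown versus wooded) become measurable.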
Understanding Undergraduate Statistical Anxiety
ERIC Educational Resources Information Center
McKim, Courtney
2014-01-01
The purpose of this study was to understand undergraduate students' views of statistics. Results reveal that students with less anxiety have a higher interest in statistics and also believe in their ability to perform well in the course. Also students who have a more positive attitude about the class tend to have a higher belief in their…
Croarkin, M. Carroll
2001-01-01
For more than 50 years, the Statistical Engineering Division (SED) has been instrumental in the success of a broad spectrum of metrology projects at NBS/NIST. This paper highlights fundamental contributions of NBS/NIST statisticians to statistics and to measurement science and technology. Published methods developed by SED staff, especially during the early years, endure as cornerstones of statistics not only in metrology and standards applications, but as data-analytic resources used across all disciplines. The history of statistics at NBS/NIST began with the formation of what is now the SED. Examples from the first five decades of the SED illustrate the critical role of the division in the successful resolution of a few of the highly visible, and sometimes controversial, statistical studies of national importance. A review of the history of major early publications of the division on statistical methods, design of experiments, and error analysis and uncertainty is followed by a survey of several thematic areas. The accompanying examples illustrate the importance of SED in the history of statistics, measurements and standards: calibration and measurement assurance, interlaboratory tests, development of measurement methods, Standard Reference Materials, statistical computing, and dissemination of measurement technology. A brief look forward sketches the expanding opportunity and demand for SED statisticians created by current trends in research and development at NIST.
ERIC Educational Resources Information Center
Hodgson, Ted; Andersen, Lyle; Robison-Cox, Jim; Jones, Clain
2004-01-01
Water quality experiments, especially the use of macroinvertebrates as indicators of water quality, offer an ideal context for connecting statistics and science. In the STAR program for secondary students and teachers, water quality experiments were also used as a context for teaching statistics. In this article, we trace one activity that uses…
ERIC Educational Resources Information Center
Council of Ontario Universities, Toronto.
Summary statistics on application and registration patterns of applicants wishing to pursue full-time study in first-year places in Ontario universities (for the fall of 1987) are given. Data on registrations were received indirectly from the universities as part of their annual submission of USIS/UAR enrollment data to Statistics Canada and MCU.…
Introduction to Statistical Physics
NASA Astrophysics Data System (ADS)
Casquilho, João Paulo; Ivo Cortez Teixeira, Paulo
2014-12-01
Preface; 1. Random walks; 2. Review of thermodynamics; 3. The postulates of statistical physics. Thermodynamic equilibrium; 4. Statistical thermodynamics – developments and applications; 5. The classical ideal gas; 6. The quantum ideal gas; 7. Magnetism; 8. The Ising model; 9. Liquid crystals; 10. Phase transitions and critical phenomena; 11. Irreversible processes; Appendixes; Index.
Reform in Statistical Education
ERIC Educational Resources Information Center
Huck, Schuyler W.
2007-01-01
Two questions are considered in this article: (a) What should professionals in school psychology do in an effort to stay current with developments in applied statistics? (b) What should they do with their existing knowledge to move from surface understanding of statistics to deep understanding? Written for school psychologists who have completed…
Statistical Mapping by Computer.
ERIC Educational Resources Information Center
Utano, Jack J.
The function of a statistical map is to provide readers with a visual impression of the data so that they may be able to identify any geographic characteristics of the displayed phenomena. The increasingly important role played by the computer in the production of statistical maps is manifested by the varied examples of computer maps in recent…
The purpose of the Disability Statistics Center is to produce and disseminate statistical information on disability and the status of people with disabilities in American society and to establish and monitor indicators of how conditions are changing over time to meet their health...
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. PMID:26466186
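The diagnostic-accuracy quantities reviewed here (sensitivity, specificity, accuracy, likelihood ratios) follow directly from a 2x2 table; a small sketch with hypothetical counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from a 2x2 table of
    true/false positives and negatives."""
    sens = tp / (tp + fn)          # P(test+ | disease)
    spec = tn / (tn + fp)          # P(test- | no disease)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "lr_positive": sens / (1 - spec),   # how much a positive test raises the odds
        "lr_negative": (1 - sens) / spec,   # how much a negative test lowers the odds
    }

# Hypothetical screening study: 100 diseased (90 detected), 200 healthy
# (30 false alarms).
m = diagnostic_metrics(tp=90, fp=30, fn=10, tn=170)
```

With these counts the test has sensitivity 0.90, specificity 0.85, and a positive likelihood ratio of 6.0, i.e. a positive result multiplies the pre-test odds of disease sixfold.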
Januszyk, Michael; Gurtner, Geoffrey C
2011-01-01
The scope of biomedical research has expanded rapidly during the past several decades, and statistical analysis has become increasingly necessary to understand the meaning of large and diverse quantities of raw data. As such, a familiarity with this lexicon is essential for critical appraisal of medical literature. This article attempts to provide a practical overview of medical statistics, with an emphasis on the selection, application, and interpretation of specific tests. This includes a brief review of statistical theory and its nomenclature, particularly with regard to the classification of variables. A discussion of descriptive methods for data presentation is then provided, followed by an overview of statistical inference and significance analysis, and detailed treatment of specific statistical tests and guidelines for their interpretation. PMID:21200241
Selden, J.R.; Dolbeare, F.; Clair, J.H.; DeLuca, J.G. (Lawrence Livermore National Lab., CA)
1993-01-01
An in vitro FCM DNA repair assay has been developed. Cultures of rat hepatocytes were exposed to a battery of chemicals for 18-20 hours. Compounds were selected based upon both their genotoxic and carcinogenic characteristics. Evidence of DNA repair was noted by detecting BrdUrd uptake. Low-intensity BrdUrd-FITC fluorescent signals from repairing cells were visualized by use of linear uni- or bivariate histograms. This assay's sensitivity was directly compared to that of autoradiography. Results revealed the following: (1) a high correlation exists between genotoxicity and DNA repair; (2) the results of these assays were generally in agreement; and (3) the sensitivity of this FCM DNA repair assay compares favorably to that of autoradiography. Thus, this assay provides a sensitive and reliable means of identifying agents which induce DNA repair in mammalian cells.
The statistical analysis of multivariate serological frequency data.
Reyment, Richard A
2005-11-01
Data occurring in the form of frequencies are common in genetics, for example in serology. Examples are provided by the ABO group, the Rhesus group, and also DNA data. The statistical analysis of tables of frequencies is carried out using the available methods of multivariate analysis, usually with three principal aims. One of these is to seek meaningful relationships between the components of a data set; the second is to examine relationships between populations from which the data have been obtained; the third is to bring about a reduction in dimensionality. This latter aim is usually realized by means of bivariate scatter diagrams using scores computed from a multivariate analysis. The multivariate statistical analysis of tables of frequencies cannot safely be carried out by standard multivariate procedures because the data represent compositions and are therefore embedded in simplex space, a subspace of full space. Appropriate procedures for simplex space are compared and contrasted with simple standard methods of multivariate analysis ("raw" principal component analysis). The study shows that the differences between a log-ratio model and a simple logarithmic transformation of proportions may not be very great, particularly as regards graphical ordinations, but important discrepancies do occur. The divergences between logarithmically based analyses and raw data are, however, great. Published data on Rhesus alleles observed for Italian populations are used to exemplify the subject. PMID:16024067
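One standard log-ratio treatment of such compositional tables is the centered log-ratio (clr) transform, which maps simplex-constrained rows into unconstrained real space before ordinary multivariate analysis is applied (a minimal sketch; the allele-frequency numbers are hypothetical):

```python
import numpy as np

def clr(proportions):
    """Centered log-ratio transform: each row (a composition summing to 1)
    is log-transformed and centered, mapping the simplex into real space."""
    logp = np.log(proportions)
    return logp - logp.mean(axis=1, keepdims=True)

# Hypothetical frequency table: three populations, three allele classes
# (rows sum to 1, so the raw data live in the simplex).
freqs = np.array([
    [0.42, 0.37, 0.21],
    [0.48, 0.33, 0.19],
    [0.40, 0.41, 0.19],
])
z = clr(freqs)   # rows of z sum to zero; PCA can now be applied to z
```

The clr rows sum to zero by construction, which is exactly the constraint that replaces the unit-sum constraint of the raw proportions; running "raw" PCA directly on freqs instead would confound the compositional closure with genuine population differences.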
Bain, Robert E.S.; Cronk, Ryan; Wright, Jim A.; Bartram, Jamie
2015-01-01
Background Access to safe drinking water is essential for health. Monitoring access to drinking water focuses on water supply type at the source, but there is limited evidence on whether quality differences at the source persist in water stored in the household. Objectives We assessed the extent of fecal contamination at the source and in household stored water (HSW) and explored the relationship between contamination at each sampling point and water supply type. Methods We performed a bivariate random-effects meta-analysis of 45 studies, identified through a systematic review, that reported either the proportion of samples free of fecal indicator bacteria and/or individual sample bacteria counts for source and HSW, disaggregated by supply type. Results Water quality deteriorated substantially between source and stored water. The mean percentage of contaminated samples (noncompliance) at the source was 46% (95% CI: 33, 60%), whereas mean noncompliance in HSW was 75% (95% CI: 64, 84%). Water supply type was significantly associated with noncompliance at the source (p < 0.001) and in HSW (p = 0.03). Source water (OR = 0.2; 95% CI: 0.1, 0.5) and HSW (OR = 0.3; 95% CI: 0.2, 0.8) from piped supplies had significantly lower odds of contamination compared with non-piped water, potentially due to residual chlorine. Conclusions Piped water is less likely to be contaminated compared with other water supply types at both the source and in HSW. A focus on upgrading water services to piped supplies may help improve safety, including for those drinking stored water. Citation Shields KF, Bain RE, Cronk R, Wright JA, Bartram J. 2015. Association of supply type with fecal contamination of source water and household stored drinking water in developing countries: a bivariate meta-analysis. Environ Health Perspect 123:1222–1231; http://dx.doi.org/10.1289/ehp.1409002 PMID:25956006
Ector, Hugo
2010-12-01
I still remember my first book on statistics: "Elementary Statistics with Applications in Medicine and the Biological Sciences" by Frederick E. Croxton. For me, it was the start of a pursuit of understanding statistics in daily life and in medical practice. It was the first volume in a long row of books. In his introduction, Croxton claims that "nearly everyone involved in any aspect of medicine needs to have some knowledge of statistics". The reality is that for many clinicians, statistics are limited to "P < 0.05 = ok". I do not blame my colleagues who omit the paragraph on statistical methods. They have never had the opportunity to learn concise and clear descriptions of the key features. I have experienced how some authors can describe difficult methods in a well understandable language. Others fail completely. As a teacher, I tell my students that life is impossible without a basic knowledge of statistics. This feeling has resulted in an annual seminar of 90 minutes. This tutorial is the summary of this seminar. It is a summary and a transcription of the best pages I have detected. PMID:21302664
Winters, Ryan; Winters, Andrew; Amedee, Ronald G.
2010-01-01
The Accreditation Council for Graduate Medical Education sets forth a number of required educational topics that must be addressed in residency and fellowship programs. We sought to provide a primer on some of the important basic statistical concepts to consider when examining the medical literature. It is not essential to understand the exact workings and methodology of every statistical test encountered, but it is necessary to understand selected concepts such as parametric and nonparametric tests, correlation, and numerical versus categorical data. This working knowledge will allow you to spot obvious irregularities in statistical analyses that you encounter. PMID:21603381
Statistics of football dynamics
NASA Astrophysics Data System (ADS)
Mendes, R. S.; Malacarne, L. C.; Anteneodo, C.
2007-06-01
We investigate the dynamics of football matches. Our goal is to characterize statistically the temporal sequence of ball movements in this collective sport game, searching for traits of complex behavior. Data were collected over a variety of matches in South American, European and World championships throughout 2005 and 2006. We show that the statistics of ball touches present power-law tails and can be described by q-gamma distributions. To explain such behavior we propose a model that provides information on the characteristics of football dynamics. Furthermore, we discuss the statistics of the duration of out-of-play intervals, which is not directly related to the previous scenario.
Playing at Statistical Mechanics
ERIC Educational Resources Information Center
Clark, Paul M.; And Others
1974-01-01
Discussed are the applications of counting techniques of a sorting game to distributions and concepts in statistical mechanics. Included are the following distributions: Fermi-Dirac, Bose-Einstein, and most probable. (RH)
Cooperative Learning in Statistics.
ERIC Educational Resources Information Center
Keeler, Carolyn M.; And Others
1994-01-01
Formal use of cooperative learning techniques proved effective in improving student performance and retention in a freshman level statistics course. Lectures interspersed with group activities proved effective in increasing conceptual understanding and overall class performance. (11 references) (Author)
Understanding Solar Flare Statistics
NASA Astrophysics Data System (ADS)
Wheatland, M. S.
2005-12-01
A review is presented of work aimed at understanding solar flare statistics, with emphasis on the well known flare power-law size distribution. Although avalanche models are perhaps the favoured model to describe flare statistics, their physical basis is unclear, and they are divorced from developing ideas in large-scale reconnection theory. An alternative model, aimed at reconciling large-scale reconnection models with solar flare statistics, is revisited. The solar flare waiting-time distribution has also attracted recent attention. Observed waiting-time distributions are described, together with what they might tell us about the flare phenomenon. Finally, a practical application of flare statistics to flare prediction is described in detail, including the results of a year of automated (web-based) predictions from the method.
Titanic: A Statistical Exploration.
ERIC Educational Resources Information Center
Takis, Sandra L.
1999-01-01
Uses the available data about the Titanic's passengers to interest students in exploring categorical data and the chi-square distribution. Describes activities incorporated into a statistics class and gives additional resources for collecting information about the Titanic. (ASK)
... and Statistics. Plague in the United States: Plague was first introduced ... per year in the United States: 1900-2012. Plague Worldwide: Plague epidemics have occurred in Africa, Asia, ...
NASA Astrophysics Data System (ADS)
Grégoire, G.
2016-05-01
This chapter is devoted to two objectives. The first one is to answer the request expressed by attendees of the first Astrostatistics School (Annecy, October 2013) to be provided with an elementary vademecum of statistics that would facilitate understanding of the given courses. In this spirit we recall very basic notions, that is definitions and properties that we think sufficient to benefit from courses given in the Astrostatistical School. Thus we give briefly definitions and elementary properties on random variables and vectors, distributions, estimation and tests, maximum likelihood methodology. We intend to present basic ideas in a hopefully comprehensible way. We do not try to give a rigorous presentation, and due to the place devoted to this chapter, can cover only a rather limited field of statistics. The second aim is to focus on some statistical tools that are useful in classification: basic introduction to Bayesian statistics, maximum likelihood methodology, Gaussian vectors and Gaussian mixture models.
Tuberculosis Data and Statistics
... United States publication. PDF [6 MB]. Interactive TB Data Tool: Online Tuberculosis Information System (OTIS). OTIS is ...
NASA Astrophysics Data System (ADS)
Richfield, Jon; bookfeller
2016-07-01
In reply to Ralph Kenna and Pádraig Mac Carron's feature article “Maths meets myths” in which they describe how they are using techniques from statistical physics to characterize the societies depicted in ancient Icelandic sagas.
... facts and statistics here include brain and central nervous system tumors (including spinal cord, pituitary and pineal gland ... U.S. living with a primary brain and central nervous system tumor. This year, nearly 17,000 people will ...
Purposeful Statistical Investigations
ERIC Educational Resources Information Center
Day, Lorraine
2014-01-01
Lorraine Day provides us with a great range of statistical investigations using various resources such as maths300 and TinkerPlots. Each of the investigations link mathematics to students' lives and provide engaging and meaningful contexts for mathematical inquiry.
Oakland, J.S.
1986-01-01
Addressing the increasing importance for firms to have a thorough knowledge of statistically based quality control procedures, this book presents the fundamentals of statistical process control (SPC) in a non-mathematical, practical way. It provides real-life examples and data drawn from a wide variety of industries. The foundations of good quality management and process control, and control of conformance and consistency during production are given. Offers clear guidance to those who wish to understand and implement modern SPC techniques.
Statistical Physics of Particles
NASA Astrophysics Data System (ADS)
Kardar, Mehran
2006-06-01
Statistical physics has its origins in attempts to describe the thermal properties of matter in terms of its constituent particles, and has played a fundamental role in the development of quantum mechanics. Based on lectures for a course in statistical mechanics taught by Professor Kardar at Massachusetts Institute of Technology, this textbook introduces the central concepts and tools of statistical physics. It contains a chapter on probability and related issues such as the central limit theorem and information theory, and covers interacting particles, with an extensive description of the van der Waals equation and its derivation by mean field approximation. It also contains an integrated set of problems, with solutions to selected problems at the end of the book. It will be invaluable for graduate and advanced undergraduate courses in statistical physics. A complete set of solutions is available to lecturers on a password protected website at www.cambridge.org/9780521873420. Based on lecture notes from a course on Statistical Mechanics taught by the author at MIT. Contains 89 exercises, with solutions to selected problems. Contains chapters on probability and interacting particles. Ideal for graduate courses in Statistical Mechanics.
NASA Astrophysics Data System (ADS)
Kardar, Mehran
2006-06-01
While many scientists are familiar with fractals, fewer are familiar with the concepts of scale-invariance and universality which underlie the ubiquity of their shapes. These properties may emerge from the collective behaviour of simple fundamental constituents, and are studied using statistical field theories. Based on lectures for a course in statistical mechanics taught by Professor Kardar at Massachusetts Institute of Technology, this textbook demonstrates how such theories are formulated and studied. Perturbation theory, exact solutions, renormalization groups, and other tools are employed to demonstrate the emergence of scale invariance and universality, and the non-equilibrium dynamics of interfaces and directed paths in random media are discussed. Ideal for advanced graduate courses in statistical physics, it contains an integrated set of problems, with solutions to selected problems at the end of the book. A complete set of solutions is available to lecturers on a password protected website at www.cambridge.org/9780521873413. Based on lecture notes from a course on Statistical Mechanics taught by the author at MIT. Contains 65 exercises, with solutions to selected problems. Features a thorough introduction to the methods of Statistical Field theory. Ideal for graduate courses in Statistical Physics.
NASA Astrophysics Data System (ADS)
Mika, Janos; Dobi-Wantuch, Ildiko; Tóth-Tarjányi, Zsuzsanna; Molnar, Zsofia; Kitti Csabai, Edina; Razsi, Andras
2014-05-01
Spatial interpolation and mapping of renewable energy resources is an important task in estimating the potential of atmospheric renewable energy sources. Its first steps concern global radiation measured at horizontal surfaces (not on optimally tilted ones) and near-surface wind speed measured at 10 m above the surface (not at the 60-120 m hub heights of contemporary wind turbines). Based on these standard meteorological observations, experts of the Hungarian Meteorological Service elaborated a series of digital maps with 0.1 x 0.1 deg resolution, compiled in the framework of the CarpatClim Project (www.carpatclim-eu.org). The grid-point values are based on data homogenised with the MASH theory and software (SZENTIMREY, 1999). The interpolation has been performed with the MISH theory and software (SZENTIMREY and BIHARI, 2006). The study tackles solar and wind energy in four aspects. Firstly, a trial validation of the gridded data is provided by comparison with a single station, Eger, for 2001-2010 (global radiation) and 1996-2010 (wind speed cube). The horizontal distance between the closest grid-point and the station is less than 1 km. Gridded global radiation data perform very well compared to the observations, based on various statistical parameters of the distribution, whereas for the wind speed cube, used as an indicator of available energy, there is a considerable bias between the two sets of data. Secondly, the annual cycles of the area-mean global radiation and wind speed are presented, based on the gridded data of a selected ca. 50x50 km2 (6x8 grid-point) region. Both the averages and standard deviations of the diurnal mean values are presented for the 1981-2010 reference period. Presenting maps of the distribution within this area is our third aspect, again considering both averages and standard deviations. Finally, the point-wise trends are drawn for both energy sources in the single grid-point used in the first aspect in 1981-2010, and also in the nearby located
Statistical Physics of Fracture
Alava, Mikko; Nukala, Phani K; Zapperi, Stefano
2006-05-01
Disorder and long-range interactions are two of the key components that make material failure an interesting playfield for the application of statistical mechanics. The cornerstone in this respect has been lattice models of fracture, in which a network of elastic beams, bonds, or electrical fuses with random failure thresholds is subject to an increasing external load. These models describe, on a qualitative level, the failure processes of real, brittle, or quasi-brittle materials. This has been particularly important in solving the classical engineering problems of material strength: the size dependence of maximum stress and its sample-to-sample statistical fluctuations. At the same time, lattice models pose many new fundamental questions in statistical physics, such as the relation between fracture and phase transitions. Experimental results point to the existence of an intriguing crackling noise in the acoustic emission and of self-affine fractals in the crack surface morphology. Recent advances in computer power have enabled considerable progress in the understanding of such models. Among these partly still controversial issues are the scaling and size effects in material strength and accumulated damage, the statistics of avalanches or bursts of microfailures, and the morphology of the crack surface. Here we present an overview of the results obtained with lattice models for fracture, highlighting the relations with statistical physics theories and more conventional fracture mechanics approaches.
Statistical Downscaling: Lessons Learned
NASA Astrophysics Data System (ADS)
Walton, D.; Hall, A. D.; Sun, F.
2013-12-01
In this study, we examine ways to improve statistical downscaling of general circulation model (GCM) output. Why do we downscale GCM output? GCMs have low resolution, so they cannot represent local dynamics and topographic effects that cause spatial heterogeneity in the regional climate change signal. Statistical downscaling recovers fine-scale information by utilizing relationships between the large-scale and fine-scale signals to bridge this gap. In theory, the downscaled climate change signal is more credible and accurate than its GCM counterpart, but in practice, there may be little improvement. Here, we tackle the practical problems that arise in statistical downscaling, using temperature change over the Los Angeles region as a test case. This region is an ideal place to apply downscaling since its complex topography and shoreline are poorly simulated by GCMs. By comparing two popular statistical downscaling methods and one dynamical downscaling method, we identify issues with statistically downscaled climate change signals and develop ways to fix them. We focus on scale mismatch, domain of influence, and other problems - many of which users may be unaware of - and discuss practical solutions.
Suite versus composite statistics
Balsillie, J.H.; Tanner, W.F.
1999-01-01
Suite and composite methodologies, two statistically valid approaches for producing statistical descriptive measures, are investigated for sample groups representing a probability distribution where, in addition, each sample is itself a probability distribution. Suite and composite means (first moment measures) are always equivalent. Composite standard deviations (second moment measures) are always larger than suite standard deviations. Suite and composite values for higher moment measures have more complex relationships. Very seldom, however, are they equivalent, and they normally yield statistically significant but different results. Multiple samples are preferable to single samples (including composites) because they permit the investigator to examine sample-to-sample variability. These and other relationships for suite and composite probability distribution analyses are investigated and reported using granulometric data.
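The moment relationships reported here (equal first moments, larger composite second moments) are easy to verify numerically. A minimal sketch with synthetic samples of equal size, standing in for granulometric data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Five synthetic samples, each drawn from its own distribution (illustrative)
samples = [rng.normal(loc=m, scale=1.0, size=200) for m in (0.0, 0.5, 1.0, 1.5, 2.0)]

# Suite statistics: compute the measure per sample, then average across samples
suite_mean = np.mean([s.mean() for s in samples])
suite_std = np.mean([s.std() for s in samples])

# Composite statistics: pool all observations, then compute the measure once
pooled = np.concatenate(samples)
composite_mean = pooled.mean()
composite_std = pooled.std()

print(abs(suite_mean - composite_mean) < 1e-9)  # means coincide (equal sample sizes)
print(composite_std > suite_std)                # pooling adds between-sample spread
```

The second inequality follows from the law of total variance: the pooled variance equals the average within-sample variance plus the variance of the sample means.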
Candidate Assembly Statistical Evaluation
Energy Science and Technology Software Center (ESTSC)
1998-07-15
The Savannah River Site (SRS) receives aluminum clad spent Material Test Reactor (MTR) fuel from all over the world for storage and eventual reprocessing. There are hundreds of different kinds of MTR fuels and these fuels will continue to be received at SRS for approximately ten more years. SRS's current criticality evaluation methodology requires the modeling of all MTR fuels utilizing Monte Carlo codes, which is extremely time consuming and resource intensive. Now that a significant number of MTR calculations have been conducted, it is feasible to consider building statistical models that will provide reasonable estimations of MTR behavior. These statistical models can be incorporated into a standardized model homogenization spreadsheet package to provide analysts with a means of performing routine MTR fuel analyses with a minimal commitment of time and resources. This became the purpose for development of the Candidate Assembly Statistical Evaluation (CASE) program at SRS.
Perception in statistical graphics
NASA Astrophysics Data System (ADS)
VanderPlas, Susan Ruth
There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.
Analogies for Understanding Statistics
ERIC Educational Resources Information Center
Hocquette, Jean-Francois
2004-01-01
This article describes a simple way to explain the limitations of statistics to scientists and students to avoid the publication of misleading conclusions. Biologists examine their results extremely critically and carefully choose the appropriate analytic methods depending on their scientific objectives. However, no such close attention is usually…
Statistical methods in microbiology.
Ilstrup, D M
1990-01-01
Statistical methodology is viewed by the average laboratory scientist, or physician, sometimes with fear and trepidation, occasionally with loathing, and seldom with fondness. Statistics may never be loved by the medical community, but it does not have to be hated by them. It is true that statistical science is sometimes highly mathematical, always philosophical, and occasionally obtuse, but for the majority of medical studies it can be made palatable. The goal of this article has been to outline a finite set of methods of analysis that investigators should choose based on the nature of the variable being studied and the design of the experiment. The reader is encouraged to seek the advice of a professional statistician when there is any doubt about the appropriate method of analysis. A statistician can also help the investigator with problems that have nothing to do with statistical tests, such as quality control, choice of response variable and comparison groups, randomization, and blinding of assessment of response variables. PMID:2200604
Statistical Energy Analysis Program
NASA Technical Reports Server (NTRS)
Ferebee, R. C.; Trudell, R. W.; Yano, L. I.; Nygaard, S. I.
1985-01-01
Statistical Energy Analysis (SEA) is a powerful tool for estimating high-frequency vibration spectra of complex structural systems and has been incorporated into a computer program. The basic SEA analysis procedure is divided into three steps: idealization, parameter generation, and problem solution. The SEA computer program is written in FORTRAN V for batch execution.
Statistical Significance Testing.
ERIC Educational Resources Information Center
McLean, James E., Ed.; Kaufman, Alan S., Ed.
1998-01-01
The controversy about the use or misuse of statistical significance testing has become the major methodological issue in educational research. This special issue contains three articles that explore the controversy, three commentaries on these articles, an overall response, and three rejoinders by the first three authors. They are: (1)…
Education Statistics Quarterly, 2003.
ERIC Educational Resources Information Center
Marenus, Barbara; Burns, Shelley; Fowler, William; Greene, Wilma; Knepper, Paula; Kolstad, Andrew; McMillen Seastrom, Marilyn; Scott, Leslie
2003-01-01
This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…
Spitball Scatterplots in Statistics
ERIC Educational Resources Information Center
Wagaman, John C.
2012-01-01
This paper describes an active learning idea that I have used in my applied statistics class as a first lesson in correlation and regression. Students propel spitballs from various standing distances from the target and use the recorded data to determine if the spitball accuracy is associated with standing distance and review the algebra of lines…
Lack of Statistical Significance
ERIC Educational Resources Information Center
Kehle, Thomas J.; Bray, Melissa A.; Chafouleas, Sandra M.; Kawano, Takuji
2007-01-01
Criticism has been leveled against the use of statistical significance testing (SST) in many disciplines. However, the field of school psychology has been largely devoid of critiques of SST. Inspection of the primary journals in school psychology indicated numerous examples of SST with nonrandom samples and/or samples of convenience. In this…
Juvenile Court Statistics - 1972.
ERIC Educational Resources Information Center
Office of Youth Development (DHEW), Washington, DC.
This report is a statistical study of juvenile court cases in 1972. The data demonstrates how the court is frequently utilized in dealing with juvenile delinquency by the police as well as by other community agencies and parents. Excluded from this report are the ordinary traffic cases handled by juvenile court. The data indicate that: (1) in…
Library Research and Statistics.
ERIC Educational Resources Information Center
Lynch, Mary Jo; St. Lifer, Evan; Halstead, Kent; Fox, Bette-Lee; Miller, Marilyn L.; Shontz, Marilyn L.
2001-01-01
These nine articles discuss research and statistics on libraries and librarianship, including libraries in the United States, Canada, and Mexico; acquisition expenditures in public, academic, special, and government libraries; price indexes; state rankings of public library data; library buildings; expenditures in school library media centers; and…
Foundations of Statistical Seismology
NASA Astrophysics Data System (ADS)
Vere-Jones, David
2010-06-01
A brief account is given of the principles of stochastic modelling in seismology, with special regard to the role and development of stochastic models for seismicity. Stochastic models are seen as arising in a hierarchy of roles in seismology, as in other scientific disciplines. At their simplest, they provide a convenient descriptive tool for summarizing data patterns; in engineering and other applications, they provide a practical way of bridging the gap between the detailed modelling of a complex system, and the need to fit models to limited data; at the most fundamental level they arise as a basic component in the modelling of earthquake phenomena, analogous to that of stochastic models in statistical mechanics or turbulence theory. As an emerging subdiscipline, statistical seismology includes elements of all of these. The scope for the development of stochastic models depends crucially on the quantity and quality of the available data. The availability of extensive, high-quality catalogues and other relevant data lies behind the recent explosion of interest in statistical seismology. At just such a stage, it seems important to review the underlying principles on which statistical modelling is based, and that is the main purpose of the present paper.
Graduate Statistics: Student Attitudes
ERIC Educational Resources Information Center
Kennedy, Robert L.; Broadston, Pamela M.
2004-01-01
This study investigated the attitudes toward statistics of graduate students who used a computer program as part of the instruction, which allowed for an individualized, self-paced, student-centered, activity-based course. The twelve sections involved in this study were offered in the spring and fall 2001, spring and fall 2002, spring and fall…
Geopositional Statistical Methods
NASA Technical Reports Server (NTRS)
Ross, Kenton
2006-01-01
RMSE-based methods distort circular error estimates (up to 50% overestimation). The empirical approach is the only statistically unbiased estimator offered. The Ager modification to the Shultz approach is nearly unbiased, but cumbersome. All methods hover around 20% uncertainty (at 95% confidence) for low geopositional bias error estimates. This requires careful consideration in the assessment of higher-accuracy products.
Statistical Reasoning over Lunch
ERIC Educational Resources Information Center
Selmer, Sarah J.; Bolyard, Johnna J.; Rye, James A.
2011-01-01
Students in the 21st century are exposed daily to a staggering amount of numerically infused media. In this era of abundant numeric data, students must be able to engage in sound statistical reasoning when making life decisions after exposure to varied information. The context of nutrition can be used to engage upper elementary and middle school…
Fractional statistics and confinement
NASA Astrophysics Data System (ADS)
Gaete, P.; Wotzasek, C.
2005-02-01
It is shown that a pointlike composite having charge and magnetic moment displays a confining potential for the static interaction while simultaneously obeying fractional statistics in a pure gauge theory in three dimensions, without a Chern-Simons term. This result is distinct from the Maxwell-Chern-Simons theory that shows a screening nature for the potential.
Statistics for Learning Genetics
ERIC Educational Resources Information Center
Charles, Abigail Sheena
2012-01-01
This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in,…
ERIC Educational Resources Information Center
Akram, Muhammad; Siddiqui, Asim Jamal; Yasmeen, Farah
2004-01-01
In order to learn the concept of statistical techniques one needs to run real experiments that generate reliable data. In practice, the data from some well-defined process or system is very costly and time consuming. It is difficult to run real experiments during the teaching period in the university. To overcome these difficulties, statisticians…
Statistics for Learning Genetics
NASA Astrophysics Data System (ADS)
Charles, Abigail Sheena
This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in, doing statistically-based genetics problems. This issue is at the emerging edge of modern college-level genetics instruction, and this study attempts to identify key theoretical components for creating a specialized biological statistics curriculum. The goal of this curriculum will be to prepare biology students with the skills for assimilating quantitatively-based genetic processes, increasingly at the forefront of modern genetics. To fulfill this, two college-level classes at two universities were surveyed. One university was located in the northeastern US and the other in the West Indies. There was a sample size of 42 students, and a supplementary interview was administered to a select 9 students. Interviews were also administered to professors in the field in order to gain insight into the teaching of statistics in genetics. Key findings indicated that students had very little to no background in statistics (55%). Although students did perform well on exams, with 60% of the population receiving an A or B grade, 77% of them did not offer good explanations on a probability question associated with the normal distribution provided in the survey. The scope and presentation of the applicable statistics/mathematics in some of the most-used textbooks in genetics teaching, as well as the genetics syllabi used by instructors, do not help the issue. It was found that the textbooks oftentimes either did not give effective explanations for students or completely left out certain topics. The omission of certain statistical/mathematical topics was seen to be also true of the genetics syllabi reviewed for this study. Nonetheless
NASA Technical Reports Server (NTRS)
Fionda, E.; Falls, M. J.; Westwater, E. R.
1991-01-01
Two seasons (December 1987 to February 1988 and July 1988 to September 1988) of thermal emission measurements, taken by a multichannel ground-based microwave radiometer, are used to derive single-station zenith attenuation statistics at 20.6 and 31.65 GHz. For the summer period, statistics are also derived at 52.85 GHz. In addition, data from two radiometers located 50 km apart are used to derive two-station attenuation diversity statistics at 20.6 and 31.65 GHz. The multichannel radiometer was operated at Denver, Colorado, U.S. and the dual-channel device was operated at Platteville, Colorado. The diversity statistics are presented by cumulative distributions and by bivariate frequency distributions. The frequency distributions are analyzed when either one or both stations have liquid clouds.
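Single-station exceedance statistics and two-station diversity statistics of the kind described here can be tabulated from paired attenuation series. The sketch below uses hypothetical correlated lognormal attenuation values, not the Denver/Platteville measurements:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Hypothetical correlated zenith attenuation series (dB) at two sites;
# lognormal marginals are a common modeling assumption, used here for illustration.
rho = 0.4
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
att_a = np.exp(-1.0 + 0.7 * z[:, 0])
att_b = np.exp(-1.0 + 0.7 * z[:, 1])

threshold = 1.0  # dB
p_single = np.mean(att_a > threshold)                          # single-station exceedance
p_joint = np.mean((att_a > threshold) & (att_b > threshold))   # both stations exceed

# Site diversity: joint exceedance (both links degraded at once) is rarer than
# single-site exceedance whenever the two sites are not perfectly correlated.
print(p_joint < p_single)

# A coarse bivariate frequency distribution (2-D histogram) of the two series:
hist, _, _ = np.histogram2d(att_a, att_b, bins=8)
print(hist.sum() == n)
```

Sweeping the threshold over a grid of attenuation values yields the cumulative distributions the abstract refers to; the 2-D histogram is the bivariate frequency distribution.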
Yeh, C-Y; Chen, L-J; Ku, P-W; Chen, C-M
2015-01-01
The increasing prevalence of obesity in children and adolescents has become one of the most important public health issues around the world. Lack of physical activity is a risk factor for obesity, while being obese could reduce the likelihood of participating in physical activity. Failing to account for the endogeneity between obesity and physical activity would result in biased estimation. This study investigates the relationship between overweight and physical activity by taking endogeneity into consideration. It develops an endogenous bivariate probit model estimated by the maximum likelihood method. The data included 4008 boys and 4197 girls in the 5th-9th grades in Taiwan in 2007-2008. The relationship between overweight and physical activity is significantly negative in the endogenous model, but insignificant in the comparative exogenous model. This endogenous relationship presents a vicious circle in which lower levels of physical activity lead to overweight, while those who are already overweight engage in less physical activity. The results not only reveal the importance of endogenous treatment, but also demonstrate the robust negative relationship between these two factors. An emphasis should be put on overweight and obese children and adolescents in order to break the vicious circle. Promotion of physical activity by appropriate counselling programmes and peer support could be effective in reducing the prevalence of obesity in children and adolescents. PMID:24423649
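The endogeneity argument in this abstract can be illustrated with a small simulation. The data-generating process and coefficients below are hypothetical, not estimates from the Taiwanese sample; the point is only that correlated latent errors bias a naive contrast:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical DGP: the latent errors of "being physically active" and
# "being overweight" are negatively correlated (the vicious circle).
rho = -0.5
e = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

x = rng.normal(size=n)                               # an exogenous covariate
active = 0.5 * x + e[:, 0] > 0                       # latent index > 0 -> active
overweight = -0.4 * active - 0.2 * x + e[:, 1] > 0   # true activity effect: -0.4

# True average effect of activity, via the two potential outcomes:
y1 = -0.4 - 0.2 * x + e[:, 1] > 0   # overweight status if forced active
y0 = -0.2 * x + e[:, 1] > 0         # overweight status if forced inactive
true_ate = y1.mean() - y0.mean()

# Naive observed contrast, contaminated by the correlated errors:
naive = overweight[active].mean() - overweight[~active].mean()

print(true_ate < 0)                  # activity genuinely reduces overweight
print(abs(naive - true_ate) > 0.02)  # ...but the naive contrast is biased
```

An endogenous bivariate probit, as used in the study, recovers the treatment effect and the error correlation jointly rather than relying on the biased naive contrast.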
The Statistical Drake Equation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2010-12-01
We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density
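The lognormality claim rests on applying the CLT to the logarithm of the product: log N is a sum of independent terms, hence approximately normal, so N is approximately lognormal. A minimal Monte Carlo sketch illustrates this; the uniform factors below are arbitrary stand-ins for the seven Drake factors, not the distributions used in the paper.

```python
import random
import math
import statistics

random.seed(42)

def simulate_product(n_factors=7, n_trials=20000):
    """Draw each of n_factors positive random variables uniformly on
    (0.1, 1.0) -- an arbitrary stand-in for the Drake factors -- and
    return the log of the product for every trial."""
    logs = []
    for _ in range(n_trials):
        product = 1.0
        for _ in range(n_factors):
            product *= random.uniform(0.1, 1.0)
        logs.append(math.log(product))
    return logs

logs = simulate_product()
# If the product is approximately lognormal, log(product) is
# approximately normal: its sample skewness should be small.
mean = statistics.fmean(logs)
sd = statistics.stdev(logs)
skew = statistics.fmean([((x - mean) / sd) ** 3 for x in logs])
print(round(skew, 2))
```

With only seven factors the Gaussian approximation is not exact (a residual skewness remains), which is consistent with the CLT being an asymptotic statement.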
NASA Astrophysics Data System (ADS)
Maccone, C.
This paper provides the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original, and by now too simplistic, Dole equation is provided by replacing a product of ten positive numbers with the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (which both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy is inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in
Information geometry of Bayesian statistics
NASA Astrophysics Data System (ADS)
Matsuzoe, Hiroshi
2015-01-01
A survey of the geometry of Bayesian statistics is given. From the viewpoint of differential geometry, a prior distribution in Bayesian statistics is regarded as a volume element on a statistical model. In this paper, properties of Bayesian estimators are studied by applying equiaffine structures of statistical manifolds. In addition, the geometry of anomalous statistics is also studied. Deformed expectations and deformed independences are important in anomalous statistics. After summarizing the geometry of such deformed structures, a generalization of the maximum likelihood method is given. A suitable weight on a parameter space is important in Bayesian statistics, whereas a suitable weight on a sample space is important in anomalous statistics.
Winds aloft statistical analysis in support of day of launch Shuttle systems evaluation
NASA Technical Reports Server (NTRS)
Adelfang, S. I.; Smith, O. E.; Batts, G. W.; Hill, C. K.
1988-01-01
In connection with the development of the Meteorological Interactive Data Display System (MIDDS) for utilization by the Launch Systems Evaluation Advisory Team (LSEAT), requirements have been established to expand the pre-launch analysis of winds aloft for the Space Shuttle. Statistical analyses developed for the system include: comparison of pre-launch wind component profiles to wind component extremes at each altitude calculated from launch site historical data; conditional probability ellipses for wind vectors at a future time given the wind vector at an initial time; comparison of observed extreme wind shear and associated wind speed with launch site historical data utilizing the bivariate extreme value (Gumbel) distribution; estimation of extremes of wind speed or wind shear at a future time given the extremes of either variable at an initial time, utilizing the conditional extreme value distribution; power spectrum analysis for tracking wind perturbation energy in sequential pre-launch Jimsphere wind profiles.
Warren, W.G.; Boehm, M.; Link, D.
1992-01-01
A statistical methodology for exploring the relationships between elevation and precipitation chemistry is outlined and illustrated. The methodology utilizes maximum likelihood estimates and likelihood ratio tests with contour ellipses of assumed bivariate lognormal distributions used to assist in interpretation. The approach was illustrated using 12 NADP/NTN sites located in six study areas in the Wyoming and Colorado Rockies. These sites are part of the Rocky Mountain Deposition Monitoring Project (RMDMP), which was initiated in 1986 to investigate the relationships between elevation and the chemistry of precipitation. The results indicate differences in sulfate concentrations between airsheds, between snow and rain, and between higher and lower elevations. In general, sulfate concentrations in snow are greater at lower elevations and this difference is independent of concentration. A similar relationship for rain was not well established. In addition there is evidence that, overall, the sulfate concentrations differed between the six study areas, although pairwise differences were not always significant.
Statistical methods for astronomical data with upper limits. II - Correlation and regression
NASA Technical Reports Server (NTRS)
Isobe, T.; Feigelson, E. D.; Nelson, P. I.
1986-01-01
Statistical methods for calculating correlations and regressions in bivariate censored data, where the dependent variable can have upper or lower limits, are presented. Cox's regression and the generalization of Kendall's rank correlation coefficient provide significance levels of correlations, while the EM algorithm, under the assumption of normally distributed errors, and its nonparametric analog using the Kaplan-Meier estimator, give estimates for the slope of a regression line. Monte Carlo simulations demonstrate that survival analysis is reliable in determining correlations between luminosities at different bands. Survival analysis is applied to CO emission in infrared galaxies, X-ray emission in radio galaxies, H-alpha emission in cooling cluster cores, and radio emission in Seyfert galaxies.
Statistical Inference at Work: Statistical Process Control as an Example
ERIC Educational Resources Information Center
Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia
2008-01-01
To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
Fermions from classical statistics
Wetterich, C.
2010-12-15
We describe fermions in terms of a classical statistical ensemble. The states τ of this ensemble are characterized by a sequence of values one or zero or a corresponding set of two-level observables. Every classical probability distribution can be associated to a quantum state for fermions. If the time evolution of the classical probabilities p_τ amounts to a rotation of the wave function q_τ(t) = ±√(p_τ(t)), we infer the unitary time evolution of a quantum system of fermions according to a Schrödinger equation. We establish how such classical statistical ensembles can be mapped to Grassmann functional integrals. Quantum field theories for fermions arise for a suitable time evolution of classical probabilities for generalized Ising models.
Waller, Lance A.
2008-01-01
The three papers included in this special issue represent a set of presentations in an invited session on disease ecology at the 2005 Spring Meeting of the Eastern North American Region of the International Biometric Society. The papers each address statistical estimation and inference for particular components of different disease processes and, taken together, illustrate the breadth of statistical issues arising in the study of the ecology and public health impact of disease. As an introduction, we provide a very brief overview of the area of “disease ecology”, a variety of synonyms addressing different aspects of disease ecology, and present a schematic structure illustrating general components of the underlying disease process, data collection issues, and different disciplinary perspectives ranging from microbiology to public health surveillance. PMID:19081740
Statistical evaluation of forecasts
NASA Astrophysics Data System (ADS)
Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn
2014-08-01
Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
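The paper's analytic framework validates sensitivity and specificity independently; a much simpler version of the underlying comparison can still be sketched. The code below checks a forecaster's hit count against a random predictor that raises alarms independently at a fixed rate, using the binomial upper tail; all numbers are hypothetical, and this is not the paper's actual test.

```python
from math import comb

def p_value_vs_random(n_events, n_hits, alarm_rate):
    """Probability that a random predictor, raising alarms independently
    with probability alarm_rate, hits at least n_hits of n_events
    (binomial upper-tail probability)."""
    return sum(comb(n_events, k) * alarm_rate**k * (1 - alarm_rate)**(n_events - k)
               for k in range(n_hits, n_events + 1))

# Hypothetical example: a forecaster flags 12 of 20 events while
# issuing alarms 30% of the time.
p = p_value_vs_random(20, 12, 0.30)
print(round(p, 4))  # → 0.0051
```

A small p-value means the forecaster's sensitivity is unlikely to arise from an event-independent alarm process at the same alarm rate.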
1979 DOE statistical symposium
Gardiner, D.A.; Truett, T.
1980-09-01
The 1979 DOE Statistical Symposium was the fifth in the series of annual symposia designed to bring together statisticians and other interested parties who are actively engaged in helping to solve the nation's energy problems. The program included presentations of technical papers centered around exploration and disposal of nuclear fuel, general energy-related topics, and health-related issues, and workshops on model evaluation, risk analysis, analysis of large data sets, and resource estimation.
Understanding British addiction statistics.
Johnson, B D
1975-01-01
The statistical data issued by the Home Office and Department of Health and Social Security are quite detailed and generally valid measures of hard-core addiction in Great Britain (Judson, 1973). Since 1968, the main basis of these high-quality British statistics has been the routine reports filed by Drug Treatment Centres. The well-trained, experienced staff of these clinics make knowledgeable decisions about a client's addiction, efficiently regulate dosage, and otherwise exert some degree of control over addicts (Judson, 1973; Johnson, 1974). The co-operation of police, courts, prison physicians, and general practitioners is also valuable in collecting data on drug addiction and convictions. Information presented in the tables above indicates that a rising problem of heroin addiction between 1962 and 1967 was arrested by the introduction of the treatment clinics in 1968. Further, legally maintained heroin addiction has been reduced by almost one-third since 1968, since many heroin addicts have been transferred to injectable methadone. The decline in heroin prescribing and the relatively steady number of narcotics addicts have apparently occurred in the face of a continuing, and perhaps increasing, demand for heroin and other opiates. With few exceptions of a minor nature, analysis of the various tables suggests that the official statistics are internally consistent. There are apparently few "hidden" addicts, since few unknown addicts die of overdoses or are arrested by police (Lewis, 1973), although Blumberg (1974) indicates that some unknown users may exist. In addition, many opiate users not officially notified are known by clinic doctors as friends of addicts receiving prescriptions (Judson, 1973; Home Office, 1974). In brief, official British drug statistics seem to be generally valid and demonstrate that heroin and perhaps methadone addiction has been well contained by the treatment clinics. PMID:1039283
Statistical Methods in Cosmology
NASA Astrophysics Data System (ADS)
Verde, L.
2010-03-01
The advent of large data sets in cosmology has meant that in the past 10 or 20 years our knowledge and understanding of the Universe has changed not only quantitatively but also, and most importantly, qualitatively. Cosmologists rely on data in which a host of useful information is enclosed, but encoded in a non-trivial way. The challenges in extracting this information must be overcome to make the most of a large experimental effort. Even after having converged to a standard cosmological model (the LCDM model), we should keep in mind that this model is described by 10 or more physical parameters, and if we want to study deviations from it, the number of parameters is even larger. Dealing with such a high-dimensional parameter space and finding parameter constraints is a challenge in itself. Cosmologists want to be able to compare and combine different data sets, both to test for possible disagreements (which could indicate new physics) and to improve parameter determinations. Finally, cosmologists in many cases want to find out, before actually doing the experiment, how much one would be able to learn from it. For all these reasons, sophisticated statistical techniques are being employed in cosmology, and it has become crucial to know some statistical background to understand recent literature in the field. I will introduce some statistical tools that any cosmologist should know about in order to be able to understand recently published results from the analysis of cosmological data sets. I will not present a complete and rigorous introduction to statistics, as there are several good books which are reported in the references. The reader should refer to those.
Guta, Madalin; Butucea, Cristina
2010-10-15
The notion of a U-statistic for an n-tuple of identical quantum systems is introduced in analogy to the classical (commutative) case: given a self-adjoint 'kernel' K acting on (C^d)^{⊗r} with r
Statistics in fusion experiments
NASA Astrophysics Data System (ADS)
McNeill, D. H.
1997-11-01
Since the reasons for the variability in data from plasma experiments are often unknown or uncontrollable, statistical methods must be applied. Reliable interpretation and public accountability require full data sets. Two examples of data misrepresentation at PPPL are analyzed: (1) Te > 100 eV on the S-1 spheromak (M. Yamada, Nucl. Fusion 25, 1327 (1985); reports to DoE; etc.). The reported high values (statistical artifacts of Thomson scattering measurements) were selected from a mass of data with an average of 40 eV or less; "correlated" spectroscopic data were meaningless. (2) Extrapolation to Q >= 0.5 for DT in TFTR (D. Meade et al., IAEA Baltimore (1990), Vol. 1, p. 9; H. P. Furth, statements to U.S. Congress (1989)). The DD yield used there was the highest through 1990 (>= 50% above average), and the DT-to-DD power ratio used was about twice any published value. Average DD yields and published yield ratios scale to Q < 0.15 for DT, in accord with the observed performance over the last 3 1/2 years. Press reports of outlier data from TFTR have obscured the fact that the DT behavior follows from trivial scaling of the DD data. Good practice in future fusion research would have confidence intervals and other descriptive statistics accompanying reported numerical values (cf. JAMA).
Bradley, Robert K; Roberts, Adam; Smoot, Michael; Juvekar, Sudeep; Do, Jaeyoung; Dewey, Colin; Holmes, Ian; Pachter, Lior
2009-05-01
We describe a new program for the alignment of multiple biological sequences that is both statistically motivated and fast enough for problem sizes that arise in practice. Our Fast Statistical Alignment program is based on pair hidden Markov models which approximate an insertion/deletion process on a tree and uses a sequence annealing algorithm to combine the posterior probabilities estimated from these models into a multiple alignment. FSA uses its explicit statistical model to produce multiple alignments which are accompanied by estimates of the alignment accuracy and uncertainty for every column and character of the alignment--previously available only with alignment programs which use computationally-expensive Markov Chain Monte Carlo approaches--yet can align thousands of long sequences. Moreover, FSA utilizes an unsupervised query-specific learning procedure for parameter estimation which leads to improved accuracy on benchmark reference alignments in comparison to existing programs. The centroid alignment approach taken by FSA, in combination with its learning procedure, drastically reduces the amount of false-positive alignment on biological data in comparison to that given by other methods. The FSA program and a companion visualization tool for exploring uncertainty in alignments can be used via a web interface at http://orangutan.math.berkeley.edu/fsa/, and the source code is available at http://fsa.sourceforge.net/. PMID:19478997
NASA Astrophysics Data System (ADS)
Sharma, Arpita; Saikia, Ananya; Khare, Puja; Baruah, B. P.
2014-10-01
In the present investigation, 37 high-sulphur Tertiary coal samples from Meghalaya, India, have been studied on the basis of proximate and ash analysis. Various statistical tools, namely bivariate analysis, Principal Component Analysis (PCA) and Hierarchical Clustering Analysis (HCA), as well as geochemical indicators, were applied to determine the dominant detrital or authigenic affinity of the ash-forming elements in these coals. The genetic interpretation of the coal as well as the coal ash has been carried out based on the chemical composition of the high-temperature ash (HTA) by using the Detrital/Authigenic Index. X-Ray Diffraction (XRD) analysis was also carried out to study the mineralogy of the studied coal ashes. Both the statistical tools and the geochemical indicators confirm the detrital nature of these coals as well as of the ash-forming elements.
Experimental Mathematics and Computational Statistics
Bailey, David H.; Borwein, Jonathan M.
2009-04-30
The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.
Truth, Damn Truth, and Statistics
ERIC Educational Resources Information Center
Velleman, Paul F.
2008-01-01
Statisticians and Statistics teachers often have to push back against the popular impression that Statistics teaches how to lie with data. Those who believe incorrectly that Statistics is solely a branch of Mathematics (and thus algorithmic), often see the use of judgment in Statistics as evidence that we do indeed manipulate our results. In the…
NASA Technical Reports Server (NTRS)
1994-01-01
Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
Who Needs Statistics? | Poster
You may know the feeling. You have collected a lot of new data on an important experiment. Now you are faced with multiple groups of data, a sea of numbers, and a deadline for submitting your paper to a peer-reviewed journal. And you are not sure which data are relevant, or even the best way to present them. The statisticians at Data Management Services (DMS) know how to help. This small group of experts provides a wide array of statistical and mathematical consulting services to the scientific community at NCI at Frederick and NCI-Bethesda.
NASA Technical Reports Server (NTRS)
1995-01-01
NASA Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
NASA Technical Reports Server (NTRS)
1996-01-01
This booklet of pocket statistics includes the 1996 NASA Major Launch Record, NASA Procurement, Financial, and Workforce data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
International petroleum statistics report
1995-10-01
The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report presents data on international oil production, demand, imports, exports and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). Section 2 presents an oil supply/demand balance for the world, in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries.
Scott, M; Flaherty, D; Currall, J
2013-03-01
This short addition to our series on clinical statistics concerns relationships, and answering questions such as "are blood pressure and weight related?" In a later article, we will answer the more interesting question about how they might be related. This article follows on logically from the previous one dealing with categorical data, the major difference being here that we will consider two continuous variables, which naturally leads to the use of a Pearson correlation or occasionally to a Spearman rank correlation coefficient. PMID:23458641
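The two coefficients the article mentions can be computed without any statistics package. A minimal sketch, using hypothetical weight and blood-pressure values (not data from the article): Pearson measures the linear association of the raw values, while Spearman is simply Pearson applied to the ranks (the rank helper below assumes no ties).

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks
    (assumes no tied values)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    return pearson(ranks(xs), ranks(ys))

# Hypothetical weight (kg) and systolic blood pressure (mmHg) pairs.
weight = [62, 70, 75, 81, 90, 98]
bp = [110, 118, 121, 130, 135, 150]
print(round(pearson(weight, bp), 3))   # close to 1: strong linear trend
print(round(spearman(weight, bp), 3))  # 1.0: the rankings agree exactly
```

Because the hypothetical series are both strictly increasing, Spearman is exactly 1 even though the relationship is not perfectly linear.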
Statistics of atmospheric correlations.
Santhanam, M S; Patra, P K
2001-07-01
For a large class of quantum systems, the statistical properties of their spectrum show remarkable agreement with random matrix predictions. Recent advances show that the scope of random matrix theory is much wider. In this work, we show that the random matrix approach can be beneficially applied to a completely different classical domain, namely, to the empirical correlation matrices obtained from the analysis of the basic atmospheric parameters that characterize the state of atmosphere. We show that the spectrum of atmospheric correlation matrices satisfy the random matrix prescription. In particular, the eigenmodes of the atmospheric empirical correlation matrices that have physical significance are marked by deviations from the eigenvector distribution. PMID:11461326
NASA Technical Reports Server (NTRS)
1993-01-01
Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
Fan, Li-Chao; Lu, Hai-Wen; Cheng, Ke-Bin; Li, Hui-Ping; Xu, Jin-Fu
2013-01-01
Background As a promising tool, PCR in bronchoalveolar lavage fluid (BALF) has not been accepted as a diagnostic criterion for PJP. Objective We undertook a systematic review of published studies to evaluate the diagnostic accuracy of PCR assays in BALF for PJP. Methods Eligible studies from PubMed, Embase and Web of Science reporting PCR assays in BALF for diagnosing PJP were identified. A bivariate meta-analysis was used to pool the method's sensitivity, specificity, and positive and negative likelihood ratios with 95% confidence intervals (CI). The post-test probability was calculated to evaluate clinical usefulness. A summary receiver operating characteristic (SROC) curve was used to evaluate overall performance. Subgroup analyses were carried out to explore potential heterogeneity. Results Sixteen studies published between 1994 and 2012 were included. The summary sensitivity and specificity values (95% CI) of PCR in BALF for diagnosis of PJP were 98.3% (91.3%–99.7%) and 91.0% (82.7%–95.5%), respectively. The positive and negative likelihood ratios were 10.894 (5.569–21.309) and 0.018 (0.003–0.099), respectively. In a setting of 20% prevalence of PJP, the probability of PJP would be over 3-fold higher if the BALF-PCR test was positive, and less than 0.5% if it was negative. The area under the SROC curve was 0.98 (0.97–0.99). Conclusions The method of PCR in BALF shows high sensitivity and good specificity for the diagnosis of PJP. However, clinical practice for the diagnosis of PJP should consider the consistent respiratory symptoms, radiographic changes and laboratory findings of the suspected patients. PMID:24023814
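The post-test probabilities quoted for the 20% prevalence setting follow directly from Bayes' rule in odds form, using the summary likelihood ratios reported in the abstract. A short sketch of that calculation:

```python
def post_test_probability(prevalence, likelihood_ratio):
    """Convert a pre-test probability to a post-test probability via
    Bayes' rule in odds form: post-odds = pre-odds * LR."""
    pre_odds = prevalence / (1 - prevalence)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Summary likelihood ratios reported in the abstract.
lr_pos, lr_neg = 10.894, 0.018

p_pos = post_test_probability(0.20, lr_pos)  # probability of PJP after a positive PCR
p_neg = post_test_probability(0.20, lr_neg)  # probability of PJP after a negative PCR
print(round(p_pos, 3), round(p_neg, 4))  # → 0.731 0.0045
```

This reproduces the abstract's figures: a positive test raises the probability from 20% to about 73% (over 3-fold), while a negative test drops it below 0.5%.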
Li, Wei-Jie; Guo, Ya-Ling; Liu, Tang-Juan
2015-01-01
Background The (1-3)-β-D-Glucan (BG) assay has been approved for making a diagnosis of invasive fungal disease. However, the role of the serum-BG assay for the diagnosis of pneumocystis pneumonia (PCP) is controversial, especially between patients with and without human immunodeficiency virus (HIV). We conducted a meta-analysis to determine the difference in the overall accuracy of the serum-BG assay for the diagnosis of PCP in immunocompromised patients with and without HIV. Methods After a systematic review of English-language studies and manual searching, sensitivity (Se), specificity (Sp), and other measures of accuracy of serum-BG for the diagnosis of PCP were pooled using random-effects models for bivariate meta-analysis. A summary receiver operating characteristic (SROC) curve was used to summarize overall test performance. Subgroup analyses were performed to explore the heterogeneity in Se and Sp. Results Thirteen studies met our inclusion criteria. The summary estimates for the serum-BG assay for definite PCP were as follows: Se, 0.91 [95% confidence interval (CI), 0.88–0.93]; Sp, 0.75 (95% CI, 0.68–0.81). For the patients with and without HIV, the Se and Sp were 0.92 and 0.78, and 0.85 and 0.73, respectively. Significant heterogeneity in Se was present (P=0.04). Conclusions Contrary to the results of the previous meta-analysis, a negative result of serum-BG determination is sufficient for ruling out PCP only in HIV cases. For non-HIV patients, the results should be interpreted in parallel with clinical and radiological findings. In addition, further prospective studies with larger sample sizes are needed to confirm the diagnostic strategy of BG detection. PMID:26793343
ERIC Educational Resources Information Center
Perepiczka, Michelle; Chandler, Nichelle; Becerra, Michael
2011-01-01
Statistics plays an integral role in graduate programs. However, numerous intra- and interpersonal factors may lead to successful completion of needed coursework in this area. The authors examined the extent of the relationship between self-efficacy to learn statistics and statistics anxiety, attitude towards statistics, and social support of 166…
[Comment on] Statistical discrimination
NASA Astrophysics Data System (ADS)
Chinn, Douglas
In the December 8, 1981, issue of Eos, a news item reported the conclusion of a National Research Council study that sexual discrimination against women with Ph.D.'s exists in the field of geophysics. Basically, the item reported that even when allowances are made for motherhood the percentage of female Ph.D.'s holding high university and corporate positions is significantly lower than the percentage of male Ph.D.'s holding the same types of positions. The sexual discrimination conclusion, based only on these statistics, assumes that there are no basic psychological differences between men and women that might cause different populations in the employment group studied. Therefore, the reasoning goes, after taking into account possible effects from differences related to anatomy, such as women stopping their careers in order to bear and raise children, the statistical distributions of positions held by male and female Ph.D.'s ought to be very similar to one another. Any significant differences between the distributions must be caused primarily by sexual discrimination.
International petroleum statistics report
1997-05-01
The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report is published for the use of Members of Congress, Federal agencies, State agencies, industry, and the general public. Publication of this report is in keeping with responsibilities given the Energy Information Administration in Public Law 95-91. The International Petroleum Statistics Report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1985 through 1995.
Statistical clumped isotope signatures
Röckmann, T.; Popa, M. E.; Krol, M. C.; Hofmann, M. E. G.
2016-01-01
High precision measurements of molecules containing more than one heavy isotope may provide novel constraints on element cycles in nature. These so-called clumped isotope signatures are reported relative to the random (stochastic) distribution of heavy isotopes over all available isotopocules of a molecule, which is the conventional reference. When multiple indistinguishable atoms of the same element are present in a molecule, this reference is calculated from the bulk (≈average) isotopic composition of the involved atoms. We show here that this referencing convention leads to apparent negative clumped isotope anomalies (anti-clumping) when the indistinguishable atoms originate from isotopically different populations. Such statistical clumped isotope anomalies must occur in any system where two or more indistinguishable atoms of the same element, but with different isotopic composition, combine in a molecule. The size of the anti-clumping signal is closely related to the difference of the initial isotope ratios of the indistinguishable atoms that have combined. Therefore, a measured statistical clumped isotope anomaly, relative to an expected (e.g. thermodynamical) clumped isotope composition, may allow assessment of the heterogeneity of the isotopic pools of atoms that are the substrate for formation of molecules. PMID:27535168
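The referencing effect described above can be checked with a few lines of arithmetic. The sketch below is a hypothetical two-pool diatomic example, not a calculation from the paper: pairing atoms drawn from two isotopically distinct pools always yields a negative apparent anomaly relative to the stochastic reference computed from the bulk (average) composition.

```python
# Hypothetical illustration: a diatomic molecule formed by drawing one atom
# from pool A and one from pool B, where p_a and p_b are the heavy-isotope
# fractions of the two pools.
def apparent_clumping(p_a, p_b):
    """Relative deviation of the doubly-substituted abundance from the
    stochastic reference computed from the bulk (average) composition."""
    p_bulk = 0.5 * (p_a + p_b)          # bulk heavy-isotope fraction
    stochastic = p_bulk ** 2            # reference: random pairing at bulk ratio
    actual = p_a * p_b                  # true doubly-substituted probability
    return actual / stochastic - 1.0    # negative whenever p_a != p_b

# Identical pools: no anomaly.
print(apparent_clumping(0.010, 0.010))  # 0.0
# Isotopically distinct pools: apparent anti-clumping (a negative value).
print(apparent_clumping(0.010, 0.012))
```

The sign follows from the arithmetic-geometric mean inequality: p_a * p_b is never larger than ((p_a + p_b)/2)^2, with equality only when the pools match, which is the paper's point that such statistical anomalies must occur whenever the combining pools differ.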
Fragile entanglement statistics
NASA Astrophysics Data System (ADS)
Brody, Dorje C.; Hughston, Lane P.; Meier, David M.
2015-10-01
If X and Y are independent, Y and Z are independent, and so are X and Z, one might be tempted to conclude that X, Y, and Z are independent. But it has long been known in classical probability theory that, intuitive as it may seem, this is not true in general. In quantum mechanics one can ask whether analogous statistics can emerge for configurations of particles in certain types of entangled states. The explicit construction of such states, along with the specification of suitable sets of observables that have the purported statistical properties, is not entirely straightforward. We show that an example of such a configuration arises in the case of an N-particle GHZ state, and we are able to identify a family of observables with the property that the associated measurement outcomes are independent for any choice of 2, 3, …, N−1 of the particles, even though the measurement outcomes for all N particles are not independent. Although such states are highly entangled, the entanglement turns out to be ‘fragile’, i.e. the associated density matrix has the property that if one traces out the freedom associated with even a single particle, the resulting reduced density matrix is separable.
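The classical fact cited in the opening sentences has a standard textbook construction, sketched below. This is a classical illustration only, not the paper's quantum GHZ example: two independent fair bits X and Y together with Z = X XOR Y are pairwise independent but not mutually independent.

```python
from itertools import product

# Classical counterpart of the statistics described above: X and Y are
# independent fair bits and Z = X XOR Y.  Each pair of variables is
# independent, yet the three are not jointly independent.
outcomes = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]  # equiprobable

def prob(predicate):
    """Probability of an event under the uniform distribution on outcomes."""
    return sum(1 for o in outcomes if predicate(o)) / len(outcomes)

# Pairwise independence: P(a and b) == P(a) * P(b) for each pair of variables.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    p_joint = prob(lambda o: o[i] == 1 and o[j] == 1)
    assert p_joint == prob(lambda o: o[i] == 1) * prob(lambda o: o[j] == 1)

# But not mutually independent: X = Y = 1 forces Z = 0.
print(prob(lambda o: o == (1, 1, 1)))   # 0.0 observed
print(prob(lambda o: o[0] == 1) ** 3)   # 0.125 expected under independence
```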
Bivariate measure of redundant information.
Harder, Malte; Salge, Christoph; Polani, Daniel
2013-01-01
We define a measure of redundant information based on projections in the space of probability distributions. Redundant information between random variables is information that is shared between those variables. But, in contrast to mutual information, redundant information denotes information that is shared about the outcome of a third variable. Formalizing this concept, and being able to measure it, is required for the non-negative decomposition of mutual information into redundant and synergistic information. Previous attempts to formalize redundant or synergistic information struggle to capture some desired properties. We introduce a new formalism for redundant information and prove that it satisfies all the necessary properties outlined in earlier work, as well as an additional criterion that we propose to be necessary to capture redundancy. We also demonstrate the behavior of this new measure for several examples, compare it to previous measures, and apply it to the decomposition of transfer entropy. PMID:23410306
Statistical Literacy: Developing a Youth and Adult Education Statistical Project
ERIC Educational Resources Information Center
Conti, Keli Cristina; Lucchesi de Carvalho, Dione
2014-01-01
This article focuses on the notion of literacy--general and statistical--in the analysis of data from a fieldwork research project carried out as part of a master's degree that investigated the teaching and learning of statistics in adult education mathematics classes. We describe the statistical context of the project that involved the…
Understanding Statistics and Statistics Education: A Chinese Perspective
ERIC Educational Resources Information Center
Shi, Ning-Zhong; He, Xuming; Tao, Jian
2009-01-01
In recent years, statistics education in China has made great strides. However, there still exists a fairly large gap with the advanced levels of statistics education in more developed countries. In this paper, we identify some existing problems in statistics education in Chinese schools and make some proposals as to how they may be overcome. We…
Statistics Anxiety and Business Statistics: The International Student
ERIC Educational Resources Information Center
Bell, James A.
2008-01-01
Does the international student suffer from statistics anxiety? To investigate this, the Statistics Anxiety Rating Scale (STARS) was administered to sixty-six beginning statistics students, including twelve international students and fifty-four domestic students. Due to the small number of international students, nonparametric methods were used to…
Wide Wide World of Statistics: International Statistics on the Internet.
ERIC Educational Resources Information Center
Foudy, Geraldine
2000-01-01
Explains how to find statistics on the Internet, especially international statistics. Discusses advantages over print sources, including convenience, currency of information, cost effectiveness, and value-added formatting; sources of international statistics; United Nations agencies; search engines and power searching; and evaluating sources. (LRW)
Improving extreme value statistics.
Shekhawat, Ashivni
2014-11-01
The rate of convergence in extreme value statistics is nonuniversal and can be arbitrarily slow. Further, the relative error can be unbounded in the tail of the approximation, leading to difficulty in extrapolating the extreme value fit beyond the available data. We introduce the T method, and show that by using simple nonlinear transformations the extreme value approximation can be rendered rapidly convergent in the bulk, and asymptotic in the tail, thus fixing both issues. The transformations are often parametrized by just one parameter, which can be estimated numerically. The classical extreme value method is shown to be a special case of the proposed method. We demonstrate that vastly improved results can be obtained with almost no extra cost. PMID:25493780
NASA Astrophysics Data System (ADS)
de Gouvêa, André; Murayama, Hitoshi
2003-10-01
“Anarchy” is the hypothesis that there is no fundamental distinction among the three flavors of neutrinos. It describes the mixing angles as random variables, drawn from well-defined probability distributions dictated by the group Haar measure. We perform a Kolmogorov-Smirnov (KS) statistical test to verify whether anarchy is consistent with all neutrino data, including the new result presented by KamLAND. We find a KS probability for Nature's choice of mixing angles equal to 64%, quite consistent with the anarchical hypothesis. In turn, assuming that anarchy is indeed correct, we compute lower bounds on |Ue3|2, the remaining unknown “angle” of the leptonic mixing matrix.
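For readers unfamiliar with the test, a one-sample Kolmogorov-Smirnov statistic can be computed in a few lines. The sketch below compares samples to a Uniform(0, 1) reference as a stand-in; the Haar-measure distributions actually used in the paper differ.

```python
import random

# Minimal one-sample Kolmogorov-Smirnov statistic against Uniform(0, 1).
# Illustrative only; the paper tests mixing angles against Haar-measure
# predictions, not against the uniform distribution used here.
def ks_statistic(sample):
    """D = sup |F_empirical - F_reference| for data assumed to lie in [0, 1]."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # The empirical CDF jumps from i/n to (i+1)/n at x; check both sides.
        d = max(d, abs((i + 1) / n - x), abs(x - i / n))
    return d

random.seed(0)
uniform_sample = [random.random() for _ in range(1000)]
print(ks_statistic(uniform_sample))                    # small D: consistent
print(ks_statistic([x ** 4 for x in uniform_sample]))  # large D: inconsistent
```

Converting D into a probability, as the paper does, requires the Kolmogorov distribution of D under the null hypothesis, which is omitted here.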
Statistical physics "Beyond equilibrium"
Ecke, Robert E
2009-01-01
The scientific challenges of the 21st century will increasingly involve competing interactions, geometric frustration, spatial and temporal intrinsic inhomogeneity, nanoscale structures, and interactions spanning many scales. We will focus on a broad class of emerging problems that will require new tools in non-equilibrium statistical physics and that will find application in new material functionality, in predicting complex spatial dynamics, and in understanding novel states of matter. Our work will encompass materials under extreme conditions involving elastic/plastic deformation, competing interactions, intrinsic inhomogeneity, frustration in condensed matter systems, scaling phenomena in disordered materials from glasses to granular matter, quantum chemistry applied to nano-scale materials, soft-matter materials, and spatio-temporal properties of both ordinary and complex fluids.
NASA Astrophysics Data System (ADS)
Baranger, Michel
2002-03-01
It is a remarkable fact that the traditional teaching of thermodynamics, as reflected in the textbooks and including the long developments about ensembles and thermodynamic functions, is almost entirely about systems in equilibrium. The time variable does not enter. There is one exception, however. The single most important item, the flagship of the thermodynamic navy, the second law, is about the irreversibility of the time evolution of systems out of equilibrium. This is a bizarre situation, to say the least; a glaring case of the drunk man looking for his key under the lamp-post, when he knows that he lost it in the dark part of the street. The moment has come for us to go looking in the dark part, the behavior of systems as a function of time. We have been given a powerful new flashlight, chaos theory. We should use it. There, on the formerly dark pavement, we can find Tsallis statistics.
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Nadler, Walter; Grassberger, Peter
2005-07-01
The scaling behavior of randomly branched polymers in a good solvent is studied in two to nine dimensions, modeled by lattice animals on simple hypercubic lattices. For the simulations, we use a biased sequential sampling algorithm with re-sampling, similar to the pruned-enriched Rosenbluth method (PERM) used extensively for linear polymers. We obtain high statistics of animals with up to several thousand sites in all dimensions 2⩽d⩽9. The partition sum (number of different animals) and gyration radii are estimated. In all dimensions we verify the Parisi-Sourlas prediction, and we verify all exactly known critical exponents in dimensions 2, 3, 4, and ⩾8. In addition, we present the hitherto most precise estimates for growth constants in d⩾3. For clusters with one site attached to an attractive surface, we verify the superuniversality of the cross-over exponent at the adsorption transition predicted by Janssen and Lyssy.
Fast approximate motif statistics.
Nicodème, P
2001-01-01
We present in this article a fast approximate method for computing the statistics of a number of non-self-overlapping matches of motifs in a random text in the nonuniform Bernoulli model. This method is well suited for protein motifs where the probability of self-overlap of motifs is small. For 96% of the PROSITE motifs, the expectations of occurrences of the motifs in a 7-million-amino-acids random database are computed by the approximate method with less than 1% error when compared with the exact method. Processing of the whole PROSITE takes about 30 seconds with the approximate method. We apply this new method to a comparison of the C. elegans and S. cerevisiae proteomes. PMID:11535175
Statistical design controversy
Evans, L.S.; Hendrey, G.R.; Thompson, K.H.
1985-02-01
This article responds to criticisms that an earlier article by Evans, Hendrey, and Thompson was biased because of omissions and misrepresentations. The authors contend that their conclusion that experimental designs having only one plot per treatment "were, from the outset, not capable of differentiating between treatment effects and field-position effects" remains valid and is supported by decades of agronomic research. Irving, Troiano, and McCune read the article as a review of all studies of acidic rain effects on soybeans. It was not. The article was written out of concern about comparisons being made among studies that purport to evaluate effects of acid deposition on field-grown crops, comparisons that implicitly assume all of the studies are of equal scientific value. They are not. Only experimental approaches that are well focused and designed with appropriate agronomic and statistical procedures should be used for credible regional and national assessments of crop inventories. 12 references.
Statistical Thermodynamics of Biomembranes
Devireddy, Ram V.
2010-01-01
An overview of the major issues involved in the statistical thermodynamic treatment of phospholipid membranes at the atomistic level is summarized: thermodynamic ensembles, initial configuration (or the physical system being modeled), force field representation as well as the representation of long-range interactions. This is followed by a description of the various ways that the simulated ensembles can be analyzed: area of the lipid, mass density profiles, radial distribution functions (RDFs), water orientation profile, Deuterium order parameter, free energy profiles and void (pore) formation; with particular focus on the results obtained from our recent molecular dynamic (MD) simulations of phospholipids interacting with dimethylsulfoxide (Me2SO), a commonly used cryoprotective agent (CPA). PMID:19460363
Dienes, J.K.
1983-01-01
An alternative to the use of plasticity theory to characterize the inelastic behavior of solids is to represent the flaws by statistical methods. We have taken such an approach to study fragmentation because it offers a number of advantages. Foremost among these is that, by considering the effects of flaws, it becomes possible to address the underlying physics directly. For example, we have been able to explain why rocks exhibit large strain-rate effects (a consequence of the finite growth rate of cracks), why a spherical explosive imbedded in oil shale produces a cavity with a nearly square section (opening of bedding cracks) and why propellants may detonate following low-speed impact (a consequence of frictional hot spots).
HPV-Associated Cancers Statistics
Key Statistics for Thyroid Cancer
Muscular Dystrophy: Data and Statistics
Heart Disease and Stroke Statistics
Statistics Anxiety and Instructor Immediacy
ERIC Educational Resources Information Center
Williams, Amanda S.
2010-01-01
The purpose of this study was to investigate the relationship between instructor immediacy and statistics anxiety. It was predicted that students receiving immediacy would report lower levels of statistics anxiety. Using a pretest-posttest-control group design, immediacy was measured using the Instructor Immediacy scale. Statistics anxiety was…
Statistics: It's in the Numbers!
ERIC Educational Resources Information Center
Deal, Mary M.; Deal, Walter F., III
2007-01-01
Mathematics and statistics play important roles in people's lives today. A day hardly passes that they are not bombarded with many different kinds of statistics. As consumers they see statistical information as they surf the web, watch television, listen to their satellite radios, or even read the nutrition facts panel on a cereal box in the…
Statistical log analysis made practical
Mitchell, W.K.; Nelson, R.J. )
1991-06-01
This paper discusses the advantages of a statistical approach to log analysis. Statistical techniques use inverse methods to calculate formation parameters. The use of statistical techniques has been limited, however, by the complexity of the mathematics and lengthy computer time required to minimize traditionally used nonlinear equations.
Invention Activities Support Statistical Reasoning
ERIC Educational Resources Information Center
Smith, Carmen Petrick; Kenlan, Kris
2016-01-01
Students' experiences with statistics and data analysis in middle school are often limited to little more than making and interpreting graphs. Although students may develop fluency in statistical procedures and vocabulary, they frequently lack the skills necessary to apply statistical reasoning in situations other than clear-cut textbook examples.…
Teaching Statistics Online Using "Excel"
ERIC Educational Resources Information Center
Jerome, Lawrence
2011-01-01
As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…
Explorations in Statistics: the Bootstrap
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fourth installment of Explorations in Statistics explores the bootstrap. The bootstrap gives us an empirical approach to estimate the theoretical variability among possible values of a sample statistic such as the…
Representative Ensembles in Statistical Mechanics
NASA Astrophysics Data System (ADS)
Yukalov, V. I.
The notion of representative statistical ensembles, correctly representing statistical systems, is strictly formulated. This notion allows for a proper description of statistical systems, avoiding inconsistencies in theory. As an illustration, a Bose-condensed system is considered. It is shown that a self-consistent treatment of the latter, using a representative ensemble, always yields a conserving and gapless theory.
Use of Statistics by Librarians.
ERIC Educational Resources Information Center
Christensen, John O.
1988-01-01
Describes common errors found in the statistical methodologies of research carried out by librarians, focusing on sampling and generalizability. The discussion covers the need either to adapt library research to the statistical abilities of librarians or to educate librarians in the proper use of statistics. (15 references) (CLB)
Mclaren, Christine E.; Gordeuk, Victor R.; Chen, Wen-Pin; Barton, James C.; Acton, Ronald T.; Speechley, Mark; Castro, Oswaldo; Adams, Paul C.; Snively, Beverly M.; Harris, Emily L.; Reboussin, David M.; Mclachlan, Geoffrey J.; Bean, Richard
2013-01-01
Bivariate mixture modeling was used to analyze joint population distributions of transferrin saturation (TS) and serum ferritin concentration (SF) measured in the Hemochromatosis and Iron Overload Screening (HEIRS) Study. Four components (C1, C2, C3, and C4) with successively age-adjusted increasing means for TS and SF were identified in data from 26,832 African Americans, 12,620 Asians, 12,264 Hispanics, and 43,254 whites. The largest component, C2, had normal mean TS (21% to 26% for women, 29% to 30% for men) and SF (43–82 μg/L for women, 165–242 μg/L for men), which consisted of component proportions greater than 0.59 for women and greater than 0.68 for men. C3 and C4 had progressively greater mean values for TS and SF with progressively lesser component proportions. C1 had mean TS values less than 16% for women (<20% for men) and SF values less than 28 μg/L for women (<47 μg/L for men). Compared with C2, adjusted odds of iron deficiency were significantly greater in C1 (14.9–47.5 for women, 60.6–3530 for men), adjusted odds of liver disease were significantly greater in C3 and C4 for African-American women and all men, and adjusted odds of any HFE mutation were increased in C3 (1.4–1.8 for women, 1.2–1.9 for men) and in C4 for Hispanic and white women (1.5 and 5.2, respectively) and men (2.8 and 4.7, respectively). Joint mixture modeling identifies a component with lesser SF and TS at risk for iron deficiency and 2 components with greater SF and TS at risk for liver disease or HFE mutations. This approach can identify populations in which hereditary or acquired factors influence metabolism measurement. PMID:18201677
He, Wen-Jie; Li, Wen-Hui; Jiang, Bo; Wang, Yu-Feng; Xia, Yao-Xiong; Wang, Li
2015-01-01
Accumulating studies suggested that microRNAs (miRNAs) can have high diagnostic value as a non-invasive and cost-effective procedure with high sensitivity and specificity in the detection of early-stage lung cancer. However, there is inconsistency observed in the results of relevant studies. Therefore, we performed this meta-analysis to evaluate the diagnostic value of miRNAs based on all related studies. A total of 38 studies from 13 included articles were used for the analysis, consisting of 510 patients and 465 healthy controls. All analyses were performed on the R 3.2.0 software. The bivariate random-effects meta-analysis model was applied to obtain the following pooled parameters: sensitivity, 0.797 (95% CI: 0.756-0.832); false positive rate, 0.296 (95% CI: 0.250-0.346); and AUC, 0.818. In addition, subgroup analyses were conducted, showing not only that a combination of multiple miRNAs as biomarkers (sensitivity, false positive rate, and AUC of 83%, 25.2%, and 0.858, respectively) had higher diagnostic accuracy for early-stage lung cancer than a single miRNA (78.3%, 31.6%, and 0.799, respectively), but also that specimens from the circulating system (82.5%, 30.5%, and 0.836, respectively) provide better biomarkers than specimens from the non-circulating system (73.8%, 26.5%, and 0.796, respectively). In summary, the current meta-analysis suggests that miRNAs as biomarkers, particularly a combination of multiple tumor-specific miRNAs from the circulating system, have moderately high clinical diagnostic value in the detection of early-stage lung cancer. However, the clinical diagnostic utilization and additional improvements of miRNAs as biomarkers for early-stage lung cancer detection still remain to be further validated by more future studies. PMID:26550141
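As a much simplified illustration of how per-study results are combined, the sketch below performs fixed-effect inverse-variance pooling of sensitivities on the logit scale, with hypothetical study counts. The paper itself fits a bivariate random-effects model, which this sketch does not reproduce.

```python
import math

# Fixed-effect pooling of per-study sensitivities on the logit scale.
# The (tp, fn) counts below are hypothetical, for illustration only.
def pooled_sensitivity(studies):
    """studies: list of (true_positives, false_negatives) pairs."""
    num = den = 0.0
    for tp, fn in studies:
        p = tp / (tp + fn)                 # per-study sensitivity
        logit = math.log(p / (1 - p))
        weight = tp * fn / (tp + fn)       # inverse variance of the logit
        num += weight * logit
        den += weight
    pooled_logit = num / den               # weighted mean on the logit scale
    return 1 / (1 + math.exp(-pooled_logit))  # back-transform to a proportion

print(pooled_sensitivity([(40, 10), (75, 25), (33, 12)]))
```

Working on the logit scale keeps the pooled estimate inside (0, 1); the bivariate random-effects model used in the paper additionally models between-study heterogeneity and the correlation between sensitivity and false positive rate.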
Topics in statistical mechanics
Elser, V.
1984-05-01
This thesis deals with four independent topics in statistical mechanics: (1) the dimer problem is solved exactly for a hexagonal lattice with general boundary using a known generating function from the theory of partitions. It is shown that the leading term in the entropy depends on the shape of the boundary; (2) continuum models of percolation and self-avoiding walks are introduced with the property that their series expansions are sums over linear graphs with intrinsic combinatorial weights and explicit dimension dependence; (3) a constrained SOS model is used to describe the edge of a simple cubic crystal. Low and high temperature results are derived as well as the detailed behavior near the crystal facet; (4) the microscopic model of the lambda-transition involving atomic permutation cycles is reexamined. In particular, a new derivation of the two-component field theory model of the critical behavior is presented. Results for a lattice model originally proposed by Kikuchi are extended with a high temperature series expansion and Monte Carlo simulation. 30 references.
International petroleum statistics report
1996-05-01
The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1984 through 1994.
Statistical properties of exoplanets
NASA Astrophysics Data System (ADS)
Udry, Stéphane
Since the detection a decade ago of the planetary companion of 51 Peg, more than 165 extrasolar planets have been unveiled by radial-velocity measurements. They present a wide variety of characteristics such as large masses with small orbital separations, high eccentricities, period resonances in multi-planet systems, etc. Meaningful features of the statistical distributions of the orbital parameters or parent stellar properties have emerged. We discuss them in the context of the constraints they provide for planet-formation models and in comparison to Neptune-mass planets in short-period orbits recently detected by radial-velocity surveys, thanks to new instrumental developments and adequate observing strategy. We expect continued improvement in velocity precision and anticipate the detection of Neptune-mass planets in longer-period orbits and even lower-mass planets in short-period orbits, giving us new information on the mass distribution function of exoplanets. Finally, the role of radial-velocity follow-up measurements of transit candidates is emphasized.
International petroleum statistics report
1995-07-27
The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1994; OECD stocks from 1973 through 1994; and OECD trade from 1984 through 1994.
International petroleum statistics report
1997-07-01
The International Petroleum Statistics Report is a monthly publication that provides current international data. The report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent 12 months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1996; OECD stocks from 1973 through 1996; and OECD trade from 1986 through 1996.
International petroleum statistics report
1995-11-01
The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1994; OECD stocks from 1973 through 1994; and OECD trade from 1984 through 1994.
International petroleum statistics report
1996-10-01
The International Petroleum Statistics Report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1985 through 1995.
Statistical Mechanics of Zooplankton
Hinow, Peter; Nihongi, Ai; Strickler, J. Rudi
2015-01-01
Statistical mechanics provides the link between microscopic properties of many-particle systems and macroscopic properties such as pressure and temperature. Observations of similar “microscopic” quantities exist for the motion of zooplankton, as well as many species of other social animals. Herein, we propose to take average squared velocities as the definition of the “ecological temperature” of a population under different conditions on nutrients, light, oxygen and others. We test the usefulness of this definition on observations of the crustacean zooplankton Daphnia pulicaria. In one set of experiments, D. pulicaria is infested with the pathogen Vibrio cholerae, the causative agent of cholera. We find that infested D. pulicaria under light exposure have a significantly greater ecological temperature, which puts them at a greater risk of detection by visual predators. In a second set of experiments, we observe D. pulicaria in cold and warm water, and in darkness and under light exposure. Overall, our ecological temperature is a good discriminator of the crustacean’s swimming behavior. PMID:26270537
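The "ecological temperature" proposed in this abstract is concrete enough to sketch: the average squared velocity (squared speed) over a tracked population. A minimal illustration in Python; the function name and the synthetic velocity data are illustrative assumptions, not the Daphnia measurements from the study:

```python
import numpy as np

def ecological_temperature(velocities):
    """'Ecological temperature' as proposed in the abstract: the average
    squared velocity (squared speed) over a tracked population.
    `velocities` is an (n_individuals, n_dims) array of velocity vectors."""
    velocities = np.asarray(velocities, dtype=float)
    return float(np.mean(np.sum(velocities**2, axis=1)))

print(ecological_temperature([[3.0, 4.0]]))  # 25.0 (one individual moving at speed 5)

# Synthetic populations (illustrative only): agitated swimmers have a
# higher ecological temperature than calm ones.
rng = np.random.default_rng(0)
calm = rng.normal(0.0, 1.0, size=(500, 3))
agitated = rng.normal(0.0, 2.0, size=(500, 3))
print(ecological_temperature(calm) < ecological_temperature(agitated))
```

A higher value under light exposure would correspond, in the study's terms, to greater conspicuousness to visual predators.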
Statistical mechanics of nucleosomes
NASA Astrophysics Data System (ADS)
Chereji, Razvan V.
Eukaryotic cells contain long DNA molecules (about two meters for a human cell) which are tightly packed inside the micrometric nuclei. Nucleosomes are the basic packaging unit of the DNA which allows this millionfold compactification. A longstanding puzzle is to understand the principles which allow cells both to organize their genomes into chromatin fibers in the crowded space of their nuclei and to keep the DNA accessible to many factors and enzymes. With the nucleosomes covering about three quarters of the DNA, their positions are essential because they influence which genes can be regulated by the transcription factors and which cannot. We study physical models which predict the genome-wide organization of the nucleosomes and also the relevant energies which dictate this organization. In the last five years, the study of chromatin has seen many important advances. In particular, in the field of nucleosome positioning, new techniques for identifying nucleosomes and the competing DNA-binding factors have appeared, such as chemical mapping with hydroxyl radicals and ChIP-exo; the resolution of nucleosome maps has increased through paired-end sequencing; and the price of sequencing an entire genome has decreased. We present a rigorous statistical mechanics model which is able to explain the recent experimental results by taking into account nucleosome unwrapping, competition between different DNA-binding proteins, and both the interaction between histones and DNA and that between neighboring histones. We show a series of predictions of our new model, all in agreement with the experimental observations.
Isotonic And Isokinetic Exercise During Bed Rest
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Wade, C. E.; Bernauer, E. M.; Trowbridge, T. S.; Ertl, A. C.
1993-01-01
Brief, intense activity prevents deterioration of peak oxygen uptake, a measure of work capacity. Study intended to explore effectiveness of exercise in maintaining fitness during long missions in microgravity so crewmembers able to keep up arduous work of building and expanding Space Station. Showed that intermittent, intense exercise of short duration more effective than similar exercise at lower intensity for longer times measured in previous studies. Intense short-term exercise seems to maintain volumes of plasma and red blood cells at normal levels.
Basic statistics in cell biology.
Vaux, David L
2014-01-01
The physicist Ernest Rutherford said, "If your experiment needs statistics, you ought to have done a better experiment." Although this aphorism remains true for much of today's research in cell biology, a basic understanding of statistics can be useful to cell biologists to help in monitoring the conduct of their experiments, in interpreting the results, in presenting them in publications, and when critically evaluating research by others. However, training in statistics is often focused on the sophisticated needs of clinical researchers, psychologists, and epidemiologists, whose conclusions depend wholly on statistics, rather than the practical needs of cell biologists, whose experiments often provide evidence that is not statistical in nature. This review describes some of the basic statistical principles that may be of use to experimental biologists, but it does not cover the sophisticated statistics needed for papers that contain evidence of no other kind. PMID:25000992
NASA Astrophysics Data System (ADS)
Holmes, Jon L.
2000-06-01
IP-number access. Current subscriptions can be upgraded to IP-number access at little additional cost. We are pleased to be able to offer to institutions and libraries this convenient mode of access to subscriber only resources at JCE Online. JCE Online Usage Statistics We are continually amazed by the activity at JCE Online. So far, the year 2000 has shown a marked increase. Given the phenomenal overall growth of the Internet, perhaps our surprise is not warranted. However, during the months of January and February 2000, over 38,000 visitors requested over 275,000 pages. This is a monthly increase of over 33% from the October-December 1999 levels. It is good to know that people are visiting, but we would very much like to know what you would most like to see at JCE Online. Please send your suggestions to JCEOnline@chem.wisc.edu. For those who are interested, JCE Online year-to-date statistics are available. Biographical Snapshots of Famous Chemists: Mission Statement Feature Editor: Barbara Burke Chemistry Department, California State Polytechnic University-Pomona, Pomona, CA 91768 phone: 909/869-3664 fax: 909/869-4616 email: baburke@csupomona.edu The primary goal of this JCE Internet column is to provide information about chemists who have made important contributions to chemistry. For each chemist, there is a short biographical "snapshot" that provides basic information about the person's chemical work, gender, ethnicity, and cultural background. Each snapshot includes links to related websites and to a biobibliographic database. The database provides references for the individual and can be searched through key words listed at the end of each snapshot. All students, not just science majors, need to understand science as it really is: an exciting, challenging, human, and creative way of learning about our natural world. Investigating the life experiences of chemists can provide a means for students to gain a more realistic view of chemistry. In addition students
Petroleum statistics in France
De Saint Germain, H.; Lamiraux, C.
1995-08-01
33 oil companies, including Elf, Exxon, Agip, Conoco as well as Coparex, Enron, Hadson, Midland, Hunt, Canyon and Union Texas, are present in oil and gas exploration and production in France. The production of oil and gas in France amounts to some 60,000 bopd of oil and 350 MMcfpd of marketed natural gas each year, which still accounts for 3.5% and 10% of French domestic needs, respectively. To date, 166 fields have been discovered, representing a total reserve of 3 billion bbl of crude oil and 13 trillion cf of raw gas. These fields are concentrated in two major onshore sedimentary basins of Mesozoic age, the Aquitaine basin and the Paris basin. The Aquitaine basin can be subdivided into two distinct domains: the Parentis basin, where the largest field, Parentis, was discovered in 1954 and still produces about 3,700 bopd of oil, and where the Les Arbouslers field, discovered at the end of 1991, is currently producing about 10,000 bopd of oil; and the northern Pyrenees and their foreland, where the Lacq field, discovered in 1951, has produced about 7.7 tcf of gas since 1957 and is still producing 138 MMcfpd. In the Paris basin, the two large oil fields are Villeperclue, discovered in 1982 by Triton and Total, and Chaunoy, discovered in 1983 by Essorep, which are still producing about 10,000 and 15,000 bopd, respectively. The last significantly sized discovery occurred in 1990 with Itteville, by Elf Aquitaine, which is currently producing 4,200 bopd. The poster shows statistical data related to the past 20 years of oil and gas exploration and production in France.
Thermodynamics of cellular statistical inference
NASA Astrophysics Data System (ADS)
Lang, Alex; Fisher, Charles; Mehta, Pankaj
2014-03-01
Successful organisms must be capable of accurately sensing the surrounding environment in order to locate nutrients and evade toxins or predators. However, single cell organisms face a multitude of limitations on their accuracy of sensing. Berg and Purcell first examined the canonical example of statistical limitations to cellular learning of a diffusing chemical and established a fundamental limit to statistical accuracy. Recent work has shown that the Berg and Purcell learning limit can be exceeded using Maximum Likelihood Estimation. Here, we recast the cellular sensing problem as a statistical inference problem and discuss the relationship between the efficiency of an estimator and its thermodynamic properties. We explicitly model a single non-equilibrium receptor and examine the constraints on statistical inference imposed by noisy biochemical networks. Our work shows that cells must balance sample number, specificity, and energy consumption when performing statistical inference. These tradeoffs place significant constraints on the practical implementation of statistical estimators in a cell.
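The Berg and Purcell limit referenced above has a commonly quoted scaling form: the relative error of a concentration estimate made by a receptor of linear size a, for a ligand of diffusivity D at mean concentration c, integrated over time T, scales as 1/sqrt(D·a·c·T), up to an O(1) prefactor. A hedged sketch of that scaling; the numerical values are illustrative assumptions, not from the abstract:

```python
import math

def berg_purcell_relative_error(D, a, c, T):
    """Scaling form of the Berg-Purcell sensing limit (up to an O(1)
    prefactor): relative error ~ 1 / sqrt(D * a * c * T) for a receptor
    of size a measuring a ligand of diffusivity D at concentration c
    over an integration time T."""
    return 1.0 / math.sqrt(D * a * c * T)

# Illustrative (assumed) numbers: D = 100 um^2/s, a = 1 um,
# c = 10 molecules/um^3, T = 1 s.
print(berg_purcell_relative_error(100.0, 1.0, 10.0, 1.0))  # ~0.0316, i.e. about 3% error
```

The 1/sqrt(T) dependence is the tradeoff the abstract alludes to: better accuracy costs integration time (and, for a non-equilibrium receptor, energy).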
SOCR: Statistics Online Computational Resource
Dinov, Ivo D.
2011-01-01
The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741
Statistics without Tears: Complex Statistics with Simple Arithmetic
ERIC Educational Resources Information Center
Smith, Brian
2011-01-01
One of the often overlooked aspects of modern statistics is the analysis of time series data. Modern introductory statistics courses tend to rush to probabilistic applications involving risk and confidence. Rarely does the first level course linger on such useful and fascinating topics as time series decomposition, with its practical applications…
GIS application on spatial landslide analysis using statistical based models
NASA Astrophysics Data System (ADS)
Pradhan, Biswajeet; Lee, Saro; Buchroithner, Manfred F.
2009-09-01
This paper presents the assessment results of three spatially based probabilistic models using Geoinformation Techniques (GIT) for landslide susceptibility analysis at Penang Island in Malaysia. Landslide locations within the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. Maps of the topography, soil type, lineaments and land cover were constructed from the spatial data sets. Ten landslide-related factors were extracted from the spatial database, and the frequency ratio, fuzzy logic, and bivariate logistic regression coefficients of each factor were computed. Finally, landslide susceptibility maps were drawn for the study area using the frequency ratio, fuzzy logic and bivariate logistic regression models. For verification, the results of the analyses were compared with actual landslide locations in the study area. The verification results show that the bivariate logistic regression model provides slightly higher prediction accuracy than the frequency ratio and fuzzy logic models.
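Of the three models compared, the frequency ratio is the simplest to sketch: for each class of a conditioning factor, it is the share of landslide pixels falling in that class divided by the share of all pixels in that class, with values above 1 indicating landslide-prone classes. A toy illustration; the arrays are invented, not the Penang data:

```python
import numpy as np

def frequency_ratio(factor_class, landslide):
    """Frequency ratio per class of one landslide-conditioning factor.

    factor_class: per-pixel class labels (e.g., slope bins)
    landslide:    per-pixel booleans, True where a landslide was mapped
    FR(class) = (share of landslide pixels in the class)
              / (share of all pixels in the class)
    """
    factor_class = np.asarray(factor_class)
    landslide = np.asarray(landslide, dtype=bool)
    n_slides = landslide.sum()
    n_total = factor_class.size
    fr = {}
    for cls in np.unique(factor_class):
        in_cls = factor_class == cls
        slide_share = (landslide & in_cls).sum() / n_slides
        area_share = in_cls.sum() / n_total
        fr[int(cls)] = float(slide_share / area_share)
    return fr

# Toy example: class 2 (say, steep slopes) hosts most mapped landslides.
cls = [1, 1, 1, 1, 2, 2, 2, 2]
slides = [False, False, False, True, True, True, True, False]
print(frequency_ratio(cls, slides))  # {1: 0.5, 2: 1.5}
```

Summing FR values over all factors per pixel gives a susceptibility index that can be binned into a susceptibility map, which is the kind of product the paper verifies against mapped landslides.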
Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi; Park, Seong Ho
2015-01-01
Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
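The inverse correlation and threshold effect described above can be made concrete: plotting per-study logit-sensitivity against logit-specificity typically shows a negative correlation, which is what the bivariate model captures through correlated random effects. A toy sketch with invented study values (not a fitted bivariate model, just the diagnostic pattern it addresses):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

# Invented per-study (sensitivity, specificity) pairs exhibiting a
# threshold effect: laxer positivity thresholds trade specificity
# for sensitivity.
studies = [(0.95, 0.70), (0.90, 0.78), (0.85, 0.84), (0.80, 0.90), (0.72, 0.94)]
pairs = [(logit(se), logit(sp)) for se, sp in studies]

# Pearson correlation between logit-sensitivity and logit-specificity.
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
cov = sum((x - mx) * (y - my) for x, y in pairs) / n
sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs) / n)
sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs) / n)
r = cov / (sx * sy)
print(r < 0)  # True: the inverse correlation the bivariate model accounts for
```

A univariate pooling of sensitivity and specificity separately would ignore this correlation, which is the motivation for the hierarchical approaches the review covers.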
Digest of Education Statistics, 2000.
ERIC Educational Resources Information Center
Snyder, Thomas D.; Hoffman, Charlene M.
This edition of the "Digest of Education Statistics" is the 36th in a series that provides a compilation of statistical information covering the broad field of U.S. education from kindergarten through graduate school. The Digest includes data from many sources, both government and private, and draws heavily on work done by the National Center for…
Education Statistics Quarterly, Fall 2000.
ERIC Educational Resources Information Center
Dillow, Sally, Ed.
2000-01-01
The "Education Statistics Quarterly" gives a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released during a 3-month period. Each issue also contains a message from…
Explorations in Statistics: Permutation Methods
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2012-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eighth installment of "Explorations in Statistics" explores permutation methods, empiric procedures we can use to assess an experimental result--to test a null hypothesis--when we are reluctant to trust statistical…
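The permutation method this installment explores can be sketched in a few lines: pool the two samples, repeatedly reshuffle the group labels, and ask how often the reshuffled mean difference is at least as extreme as the observed one. A minimal two-sided version; the data are invented for illustration:

```python
import random

def permutation_test(x, y, n_perm=10000, seed=0):
    """Two-sample permutation test for a difference in means.

    Pools the observations, reshuffles the group labels n_perm times,
    and returns the fraction of shuffles whose absolute mean difference
    is at least as extreme as the observed one (a two-sided p-value)."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            hits += 1
    return hits / n_perm

# Invented data: two clearly separated groups give a small p-value.
p = permutation_test([10.1, 9.8, 10.3, 10.0], [8.1, 7.9, 8.4, 8.0])
print(p < 0.05)
```

Because the null distribution is built from the data themselves, no distributional assumption is needed, which is what makes the method attractive when one is reluctant to trust a parametric test.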
Education Statistics Quarterly, Spring 2001.
ERIC Educational Resources Information Center
Education Statistics Quarterly, 2001
2001-01-01
The "Education Statistics Quarterly" gives a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products and funding opportunities developed over a 3-month period. Each issue also…
SOCR: Statistics Online Computational Resource
ERIC Educational Resources Information Center
Dinov, Ivo D.
2006-01-01
The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an…
Representational Versatility in Learning Statistics
ERIC Educational Resources Information Center
Graham, Alan T.; Thomas, Michael O. J.
2005-01-01
Statistical data can be represented in a number of qualitatively different ways, the choice depending on the following three conditions: the concepts to be investigated; the nature of the data; and the purpose for which they were collected. This paper begins by setting out frameworks that describe the nature of statistical thinking in schools, and…
Statistics Anxiety among Postgraduate Students
ERIC Educational Resources Information Center
Koh, Denise; Zawi, Mohd Khairi
2014-01-01
Most postgraduate programmes, that have research components, require students to take at least one course of research statistics. Not all postgraduate programmes are science based, there are a significant number of postgraduate students who are from the social sciences that will be taking statistics courses, as they try to complete their…
Students' Attitudes toward Statistics (STATS).
ERIC Educational Resources Information Center
Sutarso, Toto
The purposes of this study were to develop an instrument to measure students' attitude toward statistics (STATS), and to define the underlying dimensions that comprise the STATS. The instrument consists of 24 items. The sample included 79 male and 97 female students from the statistics classes at the College of Education and the College of…
Motivating Play Using Statistical Reasoning
ERIC Educational Resources Information Center
Cross Francis, Dionne I.; Hudson, Rick A.; Lee, Mi Yeon; Rapacki, Lauren; Vesperman, Crystal Marie
2014-01-01
Statistical literacy is essential in everyone's personal lives as consumers, citizens, and professionals. To make informed life and professional decisions, students are required to read, understand, and interpret vast amounts of information, much of which is quantitative. To develop statistical literacy so students are able to make sense of…
Explorations in Statistics: Confidence Intervals
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…
Statistical Factors in Complexation Reactions.
ERIC Educational Resources Information Center
Chung, Chung-Sun
1985-01-01
Four cases which illustrate statistical factors in complexation reactions (where two of the reactants are monodentate ligands) are presented. Included are tables showing statistical factors for the reactions of: (1) square-planar complexes; (2) tetrahedral complexes; and (3) octahedral complexes. (JN)
Statistical Methods in Psychology Journals.
ERIC Educational Resources Information Center
Wilkinson, Leland
1999-01-01
Proposes guidelines for revising the American Psychological Association (APA) publication manual or other APA materials to clarify the application of statistics in research reports. The guidelines are intended to induce authors and editors to recognize the thoughtless application of statistical methods. Contains 54 references. (SLD)
Students' attitudes towards learning statistics
NASA Astrophysics Data System (ADS)
Ghulami, Hassan Rahnaward; Hamid, Mohd Rashid Ab; Zakaria, Roslinazairimah
2015-05-01
A positive attitude towards learning is vital in order to master the core content of the subject matter under study. This is especially true of statistics courses at the university level. Therefore, this study investigates students' attitudes towards learning statistics. Six variables or constructs were identified: affect, cognitive competence, value, difficulty, interest, and effort. The instrument used for the study was a questionnaire adopted and adapted from the reliable Survey of Attitudes towards Statistics (SATS©). The study was conducted on engineering undergraduate students at a university on the East Coast of Malaysia. The respondents were students from different faculties taking the applied statistics course. The results are presented as a descriptive analysis and contribute to a descriptive understanding of students' attitudes towards the teaching and learning of statistics.
Probability, Information and Statistical Physics
NASA Astrophysics Data System (ADS)
Kuzemsky, A. L.
2016-03-01
In this short survey review we discuss foundational issues of the probabilistic approach to information theory and statistical mechanics from a unified standpoint. Emphasis is on the inter-relations between the theories. The basic aim is tutorial, i.e., to provide a basic introduction to the analysis and applications of probabilistic concepts in the description of various aspects of complexity and stochasticity. We consider probability as a foundational concept in statistical mechanics and review selected advances in the theoretical understanding of the interrelation of probability, information and statistical description with regard to basic notions of statistical mechanics of complex systems. The survey also includes a synthesis of past and present research and an overview of methodology. The purpose of this terse overview is to discuss and partially describe those probabilistic methods and approaches that are used in statistical mechanics, with the aim of making these ideas easier to understand and to apply.
Optimizing the prediction process: from statistical concepts to the case study of soccer.
Heuer, Andreas; Rubner, Oliver
2014-01-01
We present a systematic approach for prediction purposes based on panel data, involving information about different interacting subjects and different times (here: two). The corresponding bivariate regression problem can be solved analytically for the final statistical estimation error. Furthermore, this expression is simplified for the special case that the subjects do not change their properties between the last measurement and the prediction period. This statistical framework is applied to the prediction of soccer matches, based on information from the previous and the present season. It is determined how well the outcome of soccer matches can be predicted theoretically. This optimum limit is compared with the actual quality of the prediction, taking the German premier league as an example. As a key step for the actual prediction process one has to identify appropriate observables which reflect the strength of the individual teams as close as possible. A criterion to distinguish different observables is presented. Surprisingly, chances for goals turn out to be much better suited than the goals themselves to characterize the strength of a team. Routes towards further improvement of the prediction are indicated. Finally, two specific applications are discussed. PMID:25198501
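The panel-data idea of pooling information from the previous and the present season can be illustrated with inverse-variance weighting, a standard device for combining two noisy measurements of the same underlying team strength. The paper's actual estimator is more elaborate, so treat this only as a sketch with invented numbers:

```python
def combine_estimates(x_prev, var_prev, x_curr, var_curr):
    """Combine two noisy measurements of the same underlying quantity
    (e.g., a team-strength observable from the previous and the present
    season) by inverse-variance weighting. Returns the combined estimate
    and its variance, which is below either input variance."""
    w_prev, w_curr = 1.0 / var_prev, 1.0 / var_curr
    estimate = (w_prev * x_prev + w_curr * x_curr) / (w_prev + w_curr)
    variance = 1.0 / (w_prev + w_curr)
    return estimate, variance

# Hypothetical: a full previous season (precise) plus a few matches of the
# present season (noisy); the combination leans toward the precise input.
est, var = combine_estimates(1.8, 0.04, 1.5, 0.09)
print(round(est, 3), round(var, 4))  # 1.708 0.0277
```

The abstract's point about observables fits here: the less noisy the chosen observable (chances for goals rather than goals), the smaller its variance and the closer the combined estimate gets to the theoretical optimum.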