Boysen, Angela K; Heal, Katherine R; Carlson, Laura T; Ingalls, Anitra E
2018-01-16
The goal of metabolomics is to measure the entire range of small organic molecules in biological samples. In liquid chromatography-mass spectrometry-based metabolomics, formidable analytical challenges remain in removing the nonbiological factors that affect chromatographic peak areas. These factors include sample matrix-induced ion suppression, chromatographic quality, and analytical drift. The combination of these factors is referred to as obscuring variation. Some metabolomics samples can exhibit intense obscuring variation due to matrix-induced ion suppression, rendering large amounts of data unreliable and difficult to interpret. Existing normalization techniques have limited applicability to these sample types. Here we present a data normalization method to minimize the effects of obscuring variation. We normalize peak areas using a batch-specific normalization process, which matches measured metabolites with isotope-labeled internal standards that behave similarly during the analysis. This method, called best-matched internal standard (B-MIS) normalization, can be applied to targeted or untargeted metabolomics data sets and yields relative concentrations. We evaluate and demonstrate the utility of B-MIS normalization using marine environmental samples and laboratory grown cultures of phytoplankton. In untargeted analyses, B-MIS normalization allowed for inclusion of mass features in downstream analyses that would have been considered unreliable without normalization due to obscuring variation. B-MIS normalization for targeted or untargeted metabolomics is freely available at https://github.com/IngallsLabUW/B-MIS-normalization .
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
Walsh, Marianne C; Brennan, Lorraine; Malthouse, J Paul G; Roche, Helen M; Gibney, Michael J
2006-09-01
Metabolomics in human nutrition research is faced with the challenge that changes in metabolic profiles resulting from diet may be difficult to differentiate from normal physiologic variation. We assessed the extent of intra- and interindividual variation in normal human metabolic profiles and investigated the effect of standardizing diet on reducing variation. Urine, plasma, and saliva were collected from 30 healthy volunteers (23 females, 7 males) on 4 separate mornings. For visits 1 and 2, free food choice was permitted on the day before biofluid collection. Food choice on the day before visit 3 was intended to mimic that for visit 2, and all foods were standardized on the day before visit 4. Samples were analyzed by using 1H nuclear magnetic resonance spectroscopy followed by multivariate data analysis. Intra- and interindividual variations were considerable for each biofluid. Visual inspection of the principal components analysis scores plots indicated a reduction in interindividual variation in urine, but not in plasma or saliva, after the standard diet. Partial least-squares discriminant analysis indicated time-dependent changes in urinary and salivary samples, mainly resulting from creatinine in urine and acetate in saliva. The predictive power of each model to classify the samples as either night or morning was 85% for urine and 75% for saliva. Urine represented a sensitive metabolic profile that reflected acute dietary intake, whereas plasma and saliva did not. Future metabolomics studies should consider recent dietary intake and time of sample collection as a means of reducing normal physiologic variation.
16 CFR 1207.4 - Recommended standards for materials of manufacture.
Code of Federal Regulations, 2011 CFR
2011-01-01
... exposure to rain, snow, ice, sunlight, local, normal temperature extremes, local normal wind variations... be toxic to man or harmful to the environment under intended use and reasonably foreseeable abuse or...
16 CFR 1207.4 - Recommended standards for materials of manufacture.
Code of Federal Regulations, 2012 CFR
2012-01-01
... exposure to rain, snow, ice, sunlight, local, normal temperature extremes, local normal wind variations... be toxic to man or harmful to the environment under intended use and reasonably foreseeable abuse or...
16 CFR 1207.4 - Recommended standards for materials of manufacture.
Code of Federal Regulations, 2014 CFR
2014-01-01
... exposure to rain, snow, ice, sunlight, local, normal temperature extremes, local normal wind variations... be toxic to man or harmful to the environment under intended use and reasonably foreseeable abuse or...
16 CFR 1207.4 - Recommended standards for materials of manufacture.
Code of Federal Regulations, 2010 CFR
2010-01-01
... exposure to rain, snow, ice, sunlight, local, normal temperature extremes, local normal wind variations... be toxic to man or harmful to the environment under intended use and reasonably foreseeable abuse or...
16 CFR § 1207.4 - Recommended standards for materials of manufacture.
Code of Federal Regulations, 2013 CFR
2013-01-01
... exposure to rain, snow, ice, sunlight, local, normal temperature extremes, local normal wind variations... be toxic to man or harmful to the environment under intended use and reasonably foreseeable abuse or...
Kristensen, Anne F; Kristensen, Søren R; Falkmer, Ursula; Münster, Anna-Marie B; Pedersen, Shona
2018-05-01
The Calibrated Automated Thrombography (CAT) is an in vitro thrombin generation (TG) assay that holds promise as a valuable tool within clinical diagnostics. However, the technique has a considerable analytical variation, and we therefore, investigated the analytical and between-subject variation of CAT systematically. Moreover, we assess the application of an internal standard for normalization to diminish variation. 20 healthy volunteers donated one blood sample which was subsequently centrifuged, aliquoted and stored at -80 °C prior to analysis. The analytical variation was determined on eight runs, where plasma from the same seven volunteers was processed in triplicates, and for the between-subject variation, TG analysis was performed on plasma from all 20 volunteers. The trigger reagents used for the TG assays included both PPP reagent containing 5 pM tissue factor (TF) and PPPlow with 1 pM TF. Plasma, drawn from a single donor, was applied to all plates as an internal standard for each TG analysis, which subsequently was used for normalization. The total analytical variation for TG analysis performed with PPPlow reagent is 3-14% and 9-13% for PPP reagent. This variation can be minimally reduced by using an internal standard but mainly for ETP (endogenous thrombin potential). The between-subject variation is higher when using PPPlow than PPP and this variation is considerable higher than the analytical variation. TG has a rather high inherent analytical variation but considerable lower than the between-subject variation when using PPPlow as reagent.
Limpert, Eckhard; Stahel, Werner A.
2011-01-01
Background The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by ± SD, or with the standard error of the mean, ± SEM. This, together with corresponding bars in graphical displays has become the standard to characterize variation. Methodology/Principal Findings Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the “95% range check”, their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both, a new sign, x/, times-divide, and notation. Analogous to ± SD, it connects the multiplicative (or geometric) mean * and the multiplicative standard deviation s* in the form * x/s*, that is advantageous and recommended. Conclusions/Significance The corresponding shift from the symmetric to the asymmetric view will substantially increase both, recognition of data distributions, and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life. PMID:21779325
Limpert, Eckhard; Stahel, Werner A
2011-01-01
The gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays has become the standard to characterize variation. Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both, a new sign, x/, times-divide, and notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean mean * and the multiplicative standard deviation s* in the form mean * x/s*, that is advantageous and recommended. The corresponding shift from the symmetric to the asymmetric view will substantially increase both, recognition of data distributions, and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
Correction of Bowtie-Filter Normalization and Crescent Artifacts for a Clinical CBCT System.
Zhang, Hong; Kong, Vic; Huang, Ke; Jin, Jian-Yue
2017-02-01
To present our experiences in understanding and minimizing bowtie-filter crescent artifacts and bowtie-filter normalization artifacts in a clinical cone beam computed tomography system. Bowtie-filter position and profile variations during gantry rotation were studied. Two previously proposed strategies (A and B) were applied to the clinical cone beam computed tomography system to correct bowtie-filter crescent artifacts. Physical calibration and analytical approaches were used to minimize the norm phantom misalignment and to correct for bowtie-filter normalization artifacts. A combined procedure to reduce bowtie-filter crescent artifacts and bowtie-filter normalization artifacts was proposed and tested on a norm phantom, CatPhan, and a patient and evaluated using standard deviation of Hounsfield unit along a sampling line. The bowtie-filter exhibited not only a translational shift but also an amplitude variation in its projection profile during gantry rotation. Strategy B was better than strategy A slightly in minimizing bowtie-filter crescent artifacts, possibly because it corrected the amplitude variation, suggesting that the amplitude variation plays a role in bowtie-filter crescent artifacts. The physical calibration largely reduced the misalignment-induced bowtie-filter normalization artifacts, and the analytical approach further reduced bowtie-filter normalization artifacts. The combined procedure minimized both bowtie-filter crescent artifacts and bowtie-filter normalization artifacts, with Hounsfield unit standard deviation being 63.2, 45.0, 35.0, and 18.8 Hounsfield unit for the best correction approaches of none, bowtie-filter crescent artifacts, bowtie-filter normalization artifacts, and bowtie-filter normalization artifacts + bowtie-filter crescent artifacts, respectively. The combined procedure also demonstrated reduction of bowtie-filter crescent artifacts and bowtie-filter normalization artifacts in a CatPhan and a patient. We have developed a step-by-step procedure that can be directly used in clinical cone beam computed tomography systems to minimize both bowtie-filter crescent artifacts and bowtie-filter normalization artifacts.
Geochemical fingerprinting and source discrimination in soils at the continental scale
NASA Astrophysics Data System (ADS)
Negrel, Philippe; Sadeghi, Martiya; Ladenberger, Anna; Birke, Manfred; Reimann, Clemens
2014-05-01
Agricultural soil (Ap-horizon, 0-20 cm) samples were collected from a large part of Europe (33 countries, 5.6 million km2) at an average density of 1 sample site per 2500 km2. The resulting 2108 soil samples were air dried, sieved to <2 mm, milled and analysed for their major and trace element concentrations by wavelength dispersive X-ray fluorescence spectrometry (WD-XRF). The main goal of this study is to provide a global view of element mobility and source rocks at the continent scale, either by reference to crustal evolution or normalized patterns of element mobility during weathering processes. The survey area includes several sedimentary basins with different geological history, developed in different climate zones and landscapes and with different land use. In order to normalize the chemical composition of soils, mean values and standard deviation of the selected elements have been checked against values for the upper continental crust (UCC). Some elements turned out to be enriched relative to the UCC (Al, P, Zr, Pb) whereas others, like Mg, Na, Sr and Pb were depleted with regards to the variation represented by the standard deviation. The concept of UCC extended normalization patterns have been further used for the selected elements. The mean value of Rb, K, Y, Ti, Al, Si, Zr, Ce and Fe are very close to the UCC model even if standard deviation suggests slight enrichment or depletion, and Zr shows the best fit with the UCC model using both mean value and standard deviation. Lead and Cr are enriched in European soils when compared to UCC but their standard deviation values show very large variations, particularly towards very low values, which can be interpreted as a lithological effect. Element variability has been explored by looking at the variations using indicator elements. Soil data have been converted into Al-normalized enrichment factors and Na was applied as normalizing element for studying provenance source taking into account the main lithologies of the UCC. This latter normalization highlighted variations related to the soluble and insoluble behavior of some elements (K, Rb versus Ti, Al, Si, V, Y, Zr, Ba, and La, respectively), their reactivity (Fe, Mn, Zn), association with carbonates (Ca and Sr) and with phosphates (P and Ce). The maps of normalized composition revealed some problems with use of classical element ratios due to genetical differences in composition of parent material reflected, for example, in large differences in titanium content in bedrock and soil throughout the Europe.
A CLIMATOLOGY OF TEMPERATURE AND PRECIPITATION VARIABILITY IN THE UNITED STATES
This paper examines the seasonal and variance and standardized range for temperature and the seasonal end annual coefficient of variation and normalized standardized range for precipitation, on a climatic division level for the contiguous United States for the period 1895 to 1985...
Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T
2016-05-15
Normalization of feature vector values is a common practice in machine learning. Generally, each feature value is standardized to the unit hypercube or by normalizing to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
Mapping of quantitative trait loci using the skew-normal distribution.
Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos
2007-11-01
In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model that includes the usual symmetric normal distribution as a special case is important, allowing continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A
1980-12-01
1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficient were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation aations and variation coefficient were obtained as the averages of values separately analysed for 48 consecurive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
Optimizing fish sampling for fish - mercury bioaccumulation factors
Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste A.; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.
2015-01-01
Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.
A history of normal plates, tables and stages in vertebrate embryology
HOPWOOD, NICK
2006-01-01
Developmental biology is today unimaginable without the normal stages that define standard divisions of development. This history of normal stages, and the related normal plates and normal tables, shows how these standards have shaped and been shaped by disciplinary change in vertebrate embryology. The article highlights the Normal Plates of the Development of the Vertebrates edited by the German anatomist Franz Keibel (16 volumes, 1897–1938). These were a major response to problems in the relations between ontogeny and phylogeny that amounted in practical terms to a crisis in staging embryos, not just between, but (for some) also within species. Keibel’s design adapted a plate by Wilhelm His and tables by Albert Oppel in order to go beyond the already controversial comparative plates of the Darwinist propagandist Ernst Haeckel. The project responded to local pressures, including intense concern with individual variation, but recruited internationally and mapped an embryological empire. Though theoretically inconclusive, the plates became standard laboratory tools and forged a network within which the Institut International d’Embryologie (today the International Society of Developmental Biologists) was founded in 1911. After World War I, experimentalists, led by Ross Harrison and Viktor Hamburger, and human embryologists, especially George Streeter at the Carnegie Department of Embryology, transformed Keibel’s complex, bulky tomes to suit their own contrasting demands. In developmental biology after World War II, normal stages—reduced to a few journal pages—helped domesticate model organisms. Staging systems had emerged from discussions that questioned the very possibility of assigning an embryo to a stage. The historical issues resonate today as developmental biologists work to improve and extend stage series, to make results from different laboratories easier to compare and to take individual variation into account. PMID:17183461
A history of normal plates, tables and stages in vertebrate embryology.
Hopwood, Nick
2007-01-01
Developmental biology is today unimaginable without the normal stages that define standard divisions of development. This history of normal stages, and the related normal plates and normal tables, shows how these standards have shaped and been shaped by disciplinary change in vertebrate embryology. The article highlights the Normal Plates of the Development of the Vertebrates edited by the German anatomist Franz Keibel (16 volumes, 1897-1938). These were a major response to problems in the relations between ontogeny and phylogeny that amounted in practical terms to a crisis in staging embryos, not just between, but (for some) also within species. Keibel's design adapted a plate by Wilhelm His and tables by Albert Oppel in order to go beyond the already controversial comparative plates of the Darwinist propagandist Ernst Haeckel. The project responded to local pressures, including intense concern with individual variation, but recruited internationally and mapped an embryological empire. Though theoretically inconclusive, the plates became standard laboratory tools and forged a network within which the Institut International d'Embryologie (today the International Society of Developmental Biologists) was founded in 1911. After World War I, experimentalists, led by Ross Harrison and Viktor Hamburger, and human embryologists, especially George Streeter at the Carnegie Department of Embryology, transformed Keibel's complex, bulky tomes to suit their own contrasting demands. In developmental biology after World War II, normal stages-reduced to a few journal pages-helped domesticate model organisms. Staging systems had emerged from discussions that questioned the very possibility of assigning an embryo to a stage. The historical issues resonate today as developmental biologists work to improve and extend stage series, to make results from different laboratories easier to compare and to take individual variation into account.
Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per
2011-01-01
Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher sensitivity and lower bias than can be attained using standard and invariant normalization methods. PMID:22132175
Normalizing and scaling of data to derive human response corridors from impact tests.
Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A
2014-06-03
It is well known that variability is inherent in any biological experiment. Human cadavers (Post-Mortem Human Subjects, PMHS) are routinely used to determine responses to impact loading for crashworthiness applications including civilian (motor vehicle) and military environments. It is important to transform measured variables from PMHS tests (accelerations, forces and deflections) to a standard or reference population, termed normalization. The transformation process should account for inter-specimen variations with some underlying assumptions used during normalization. Scaling is a process by which normalized responses are converted from one standard to another (example, mid-size adult male to large-male and small-size female adults, and to pediatric populations). These responses are used to derive corridors to assess the biofidelity of anthropomorphic test devices (crash dummies) used to predict injury in impact environments and design injury mitigating devices. This survey examines the pros and cons of different approaches for obtaining normalized and scaled responses and corridors used in biomechanical studies for over four decades. Specifically, the equal-stress equal-velocity and impulse-momentum methods along with their variations are discussed in this review. Methods ranging from subjective to quasi-static loading to different approaches are discussed for deriving temporal mean and plus minus one standard deviation human corridors of time-varying fundamental responses and cross variables (e.g., force-deflection). The survey offers some insights into the potential efficacy of these approaches with examples from recent impact tests and concludes with recommendations for future studies. The importance of considering various parameters during the experimental design of human impact tests is stressed. Published by Elsevier Ltd.
Shapes of strong shock fronts in an inhomogeneous solar wind
NASA Technical Reports Server (NTRS)
Heinemann, M. A.; Siscoe, G. L.
1974-01-01
The shapes expected for solar-flare-produced strong shock fronts in the solar wind have been calculated, large-scale variations in the ambient medium being taken into account. It has been shown that for reasonable ambient solar wind conditions the mean and the standard deviation of the east-west shock normal angle are in agreement with experimental observations including shocks of all strengths. The results further suggest that near a high-speed stream it is difficult to distinguish between corotating shocks and flare-associated shocks on the basis of the shock normal alone. Although the calculated shapes are outside the range of validity of the linear approximation, these results indicate that the variations in the ambient solar wind may account for large deviations of shock normals from the radial direction.
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of the neutron monitor hardware as random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and variance of one is sufficient to obtain a stable standard normal random variate. Distributions under consideration pass all available normality tests. Inverse transform sampling is suggested to use as a source of the uniform random numbers. Variational Monte Carlo method for quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one minute resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into the faster algorithmic random number generator or create a buffer.
Dees, Elise W; Baraas, Rigmor C
2014-04-01
Carriers of red-green color-vision deficiencies are generally thought to behave like normal trichromats, although it is known that they may make errors on Ishihara plates. The aim here was to compare the performance of carriers with that of normal females on seven standard color-vision tests, including Ishihara plates. One hundred and twenty-six normal females, 14 protan carriers, and 29 deutan carriers aged 9-66 years were included in the study. Generally, deutan carriers performed worse than protan carriers and normal females on six out of the seven tests. The difference in performance between carriers and normal females was independent of age, but the proportion of carriers that made errors on pseudo-isochromatic tests increased with age. It was the youngest carriers, however, who made the most errors. There was considerable variation in performance among individuals in each group of females. The results are discussed in relation to variability in the number of different L-cone pigments.
Wang, Jian; Huang, Ying; Zhang, Xue-Li; Huang, Xia; Xu, Xiao-Wen; Liang, Fan-Mei
2016-04-01
To study the skin prick test (SPT) reactivity to house dust mite allergens in overweight and normal weight children with allergic asthma before and after standard subcutaneous specific immunotherapy. Two hundred and fifteen children with allergic asthma who had positive SPT responses to Dermatophagoides pteronyssinus (DP) and Dermatophagoides farinae (DF) were enrolled. According to the weight index, they were classified into overweight (n=63) and normal weight groups (n=152). Skin indices (SI) to DP and DF were compared between the two groups at 6 months and 1 year after standard subcutaneous specific immunotherapy. The overweight group had a significantly larger histamine wheal diameter than the normal weight group after controlling the variation in testing time (P<0.05). After controlling the variation in weights, there were significant differences in the SIs to DP and DF before specific immunotherapy and at 6 months and 1 year after specific immunotherapy. At 6 months and 1 year after specific immunotherapy, the SIs to DP and DF were significantly reduced in both groups (P<0.05), and the overweight group had greater decreases in the SIs to DP and DF than the normal weight group. The overweight children with allergic asthma have stronger responses to histamine than the normal weight patients. Specific immunotherapy can reduce the reactivity to dust mite allergens in children with allergic asthma. Within one year after specific immunotherapy, the overweight children with allergic asthma have a significantly greater decrease in the reactivity to dust mite allergens than the normal weight patients.
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Eriksen, Willy; Sundet, Jon M; Tambs, Kristian
2010-09-01
The authors aimed to determine the relation between birth-weight variations within the normal range and intelligence in young adulthood. A historical birth cohort study was conducted. Data from the Medical Birth Register of Norway were linked with register data from the Norwegian National Conscript Service. The sample comprised 52,408 sibships of full brothers who were born singletons at 37-41 completed weeks' gestation during 1967-1984 in Norway and were intelligence-tested at the time of mandatory military conscription. Generalized estimating equations were used to fit population-averaged panel data models. The analyses showed that in men with birth weights within the 10th-90th percentile range, a within-family difference of 1 standard deviation in birth weight standardized to gestational age was associated with a within-family difference of 0.07 standard deviation (99% confidence interval: 0.03, 0.09) in intelligence score, after adjustment for a range of background factors. There was no significant between-family association after adjustment for background factors. In Norwegian males, normal variations in intrauterine growth are associated with differences in intelligence in young adulthood. This association is probably not due to confounding by familial and parental characteristics.
Ku-band radar threshold analysis
NASA Technical Reports Server (NTRS)
Weber, C. L.; Polydoros, A.
1979-01-01
The statistics of the CFAR threshold for the Ku-band radar was determined. Exact analytical results were developed for both the mean and standard deviations in the designated search mode. The mean value is compared to the results of a previously reported simulation. The analytical results are more optimistic than the simulation results, for which no explanation is offered. The normalized standard deviation is shown to be very sensitive to signal-to-noise ratio and very insensitive to the noise correlation present in the range gates of the designated search mode. The substantial variation in the CFAR threshold is dominant at large values of SNR where the normalized standard deviation is greater than 0.3. Whether or not this significantly affects the resulting probability of detection is a matter which deserves additional attention.
NASA Astrophysics Data System (ADS)
Jiang, Z.; Yang, S.; He, J.; Li, J.; Liang, J.
2008-08-01
The interdecadal variation of northward propagation of the East Asian Summer Monsoon (EASM) and summer precipitation in East China have been investigated using daily surface rainfall from a dense rain gauge network in China for 1957 2001, National Center for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis, European Center for Medium-Range Weather Forecast (ECMWF) reanalysis, and Global Mean Sea Level Pressure Dataset (GMSLP2) from Climatic Research Unit (CRU). Results in general show a consistent agreement on the interdecadal variability of EASM northward propagations. However, it appears that the interdecadal variation is stronger in NCEP than in ECMWF and CRU datasets. A newly defined normalized precipitation index (NPI), a 5-day running mean rainfall normalized with its standard deviation, clearly depicts the characteristics of summer rainbelt activities in East China in terms of jumps and durations during its northward propagations. The EASM northward propagation shows a prominent interdecadal variation. EASM before late 1970s had a rapid northward advance and a northern edge beyond its normal position. As a result, more summer rainfall occurred for the North China rainy season, Huaihe-River Mei-Yu, and South China Mei-Yu. In contrast, EASM after late 1970s had a slow northward movement and a northern edge located south of its normal position. Less summer precipitation occurred in East China except in Yangtze River basin. The EASM northernmost position (ENP), northernmost intensity (ENI), and EASM have a complex and good relationship at interdecadal timescales. They have significant influences on interdecadal variation of the large-scale precipitation anomalies in East China.
The thoracic surface anatomy of adult black South Africans: A reappraisal from CT scans.
Keough, N; Mirjalili, S A; Suleman, F E; Lockhat, Z I; van Schoor, A
2016-11-01
Surface landmarks or planes taught in anatomy curricula derive from standard anatomical textbooks. Although many surface landmarks are valid, clear age, sex, and population differences exist. We reappraise the thoracic surface anatomy of black South Africans. We analyzed 76 (female = 42; male = 34) thoracoabdominal CT-scans. Patients were placed in a supine position with arms abducted. We analyzed the surface anatomy of the sternal angle, tracheal, and pulmonary trunk bifurcation, azygos vein termination, central veins, heart apex, diaphragm, xiphisternal joint, and subcostal plane using standardized definitions. Surface anatomy landmarks were mostly within the normal variation limits described in previous studies. Variation was observed where the esophagus (T9) and inferior vena cava (IVC) (T8/T9/T10) passed through the diaphragm. The bifurcations of the trachea and pulmonary trunk were inferior to the sternal angle. The subcostal plane level was positioned at L1/L2. The origin of inferior mesenteric artery was mostly inferior to the subcostal plane. Sex differences were noted for the plane of the xiphisternal joint (P = 0.0082), with males (36%) intersecting at T10 and females (36%) intersecting at T9. We provide further evidence for population variations in surface anatomy. The clinical relevance of surface anatomical landmarks depends on descriptions of normal variation. Accurate descriptions of population, sex, age, and body type differences are essential. Clin. Anat. 29:1018-1024, 2016. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
Normalization of mass cytometry data with bead standards
Finck, Rachel; Simonds, Erin F.; Jager, Astraea; Krishnaswamy, Smita; Sachs, Karen; Fantl, Wendy; Pe’er, Dana; Nolan, Garry P.; Bendall, Sean C.
2013-01-01
Mass cytometry uses atomic mass spectrometry combined with isotopically pure reporter elements to currently measure as many as 40 parameters per single cell. As with any quantitative technology, there is a fundamental need for quality assurance and normalization protocols. In the case of mass cytometry, the signal variation over time due to changes in instrument performance combined with intervals between scheduled maintenance must be accounted for and then normalized. Here, samples were mixed with polystyrene beads embedded with metal lanthanides, allowing monitoring of mass cytometry instrument performance over multiple days of data acquisition. The protocol described here includes simultaneous measurements of beads and cells on the mass cytometer, subsequent extraction of the bead-based signature, and the application of an algorithm enabling correction of both short- and long-term signal fluctuations. The variation in the intensity of the beads that remains after normalization may also be used to determine data quality. Application of the algorithm to a one-month longitudinal analysis of a human peripheral blood sample reduced the range of median signal fluctuation from 4.9-fold to 1.3-fold. PMID:23512433
Mishra, Alok; Swati, D
2015-09-01
Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise which mask the true behaviour of heart rate variability. In this paper, we have proposed a combination filter of recursive impulse rejection filter and recursive 20% filter, with recursive application and preference of replacement over removal of abnormal beats to improve the pre-processing of the inter-beat intervals. We have tested this novel recursive combinational method with median method replacement to estimate the standard deviation of normal to normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses the improvement in pre-processing over single use of impulse rejection filter and removal of abnormal beats for heart rate variability for the estimation of SDNN and Poncaré plot descriptors (SD1, SD2, and SD1/SD2) in detail. We have found the 22 ms value of SDNN and 36 ms value of SD2 descriptor of Poincaré plot as clinical indicators in discriminating the normal cases from CHF cases. The pre-processing is also useful in calculation of Lyapunov exponent which is a nonlinear index as Lyapunov exponents calculated after proposed pre-processing modified in a way that it start following the notion of less complex behaviour of diseased states.
Variation in form of mandibular, light, round, preformed NiTi archwires.
Saze, Naomi; Arai, Kazuhito
2016-09-01
To evaluate the variation in form of nickel-titanium (NiTi) archwires by comparing them with the dental arch form of normal Japanese subjects before and after placing them in the first molar tubes. The mandibular dental casts of 30 normal subjects were scanned, and the dental arch depths and widths from the canine to the first molar were measured. Standardized images of 34 types of 0.016-inch preformed NiTi archwires were also taken in a 37°C environment, and the widths were measured and then classified by cluster analysis. Images of these archwires placed in a custom jig with brackets attached at the mean locations of the normal mandibular central incisors and first molar were additionally taken. The widths of the pooled and classified archwires were then compared with the normal dental arch widths before and after placement in the jig and among the groups (P < .05). The archwires were classified into three groups: small, medium, and large. The archwire widths in the small and medium groups were narrower than those at all examined tooth widths, except in the case of the premolars of the medium group. After placement in the jig, the pooled archwire widths were found to be significantly narrower and wider at the canine and second premolar, respectively, than at the dental arch, but not in the individual comparisons between groups. The variation observed in the mandibular NiTi archwire forms significantly decreased following fitting into the normal positions of the first molars.
Niskanen, Eini; Julkunen, Petro; Säisänen, Laura; Vanninen, Ritva; Karjalainen, Pasi; Könönen, Mervi
2010-08-01
Navigated transcranial magnetic stimulation (TMS) can be used to stimulate functional cortical areas at precise anatomical location to induce measurable responses. The stimulation has commonly been focused on anatomically predefined motor areas: TMS of that area elicits a measurable muscle response, the motor evoked potential. In clinical pathologies, however, the well-known homunculus somatotopy theory may not be straightforward, and the representation area of the muscle is not fixed. Traditionally, the anatomical locations of TMS stimulations have not been reported at the group level in standard space. This study describes a methodology for group-level analysis by investigating the normal representation areas of thenar and anterior tibial muscle in the primary motor cortex. The optimal representation area for these muscles was mapped in 59 healthy right-handed subjects using navigated TMS. The coordinates of the optimal stimulation sites were then normalized into standard space to determine the representation areas of these muscles at the group-level in healthy subjects. Furthermore, 95% confidence interval ellipsoids were fitted into the optimal stimulation site clusters to define the variation between subjects in optimal stimulation sites. The variation was found to be highest in the anteroposterior direction along the superior margin of the precentral gyrus. These results provide important normative information for clinical studies assessing changes in the functional cortical areas because of plasticity of the brain. Furthermore, it is proposed that the presented methodology to study TMS locations at the group level on standard space will be a suitable tool for research purposes in population studies. 2010 Wiley-Liss, Inc.
Cui, Xiquan; Ren, Jian; Tearney, Guillermo J.; Yang, Changhuei
2010-01-01
We report the implementation of an image sensor chip, termed wavefront image sensor chip (WIS), that can measure both intensity/amplitude and phase front variations of a light wave separately and quantitatively. By monitoring the tightly confined transmitted light spots through a circular aperture grid in a high Fresnel number regime, we can measure both intensity and phase front variations with a high sampling density (11 µm) and high sensitivity (the sensitivity of normalized phase gradient measurement is 0.1 mrad under the typical working condition). By using WIS in a standard microscope, we can collect both bright-field (transmitted light intensity) and normalized phase gradient images. Our experiments further demonstrate that the normalized phase gradient images of polystyrene microspheres, unstained and stained starfish embryos, and strongly birefringent potato starch granules are improved versions of their corresponding differential interference contrast (DIC) microscope images in that they are artifact-free and quantitative. Besides phase microscopy, WIS can benefit machine recognition, object ranging, and texture assessment for a variety of applications. PMID:20721059
Developmental Origins of Low Mathematics Performance and Normal Variation in Twins from 7 to 9 Years
Haworth, Claire M. A.; Kovas, Yulia; Petrill, Stephen A.; Plomin, Robert
2009-01-01
A previous publication reported the etiology of mathematics performance in 7-year-old twins (Oliver et al., 2004). As part of the same longitudinal study we investigated low mathematics performance and normal variation in a representative United Kingdom sample of 1713 same-sex 9-year-old twins based on teacher-assessed National Curriculum standards. Univariate individual differences and DeFries-Fulker extremes analyses were performed. Similar to our results at 7 years, all mathematics scores at 9 years showed high heritability (.62–.75) and low shared environmental estimates (.00–.11) for both the low performance group and the full sample. Longitudinal analyses were performed from 7 to 9 years. These longitudinal analyses indicated strong genetic continuity from 7 to 9 years for both low performance and mathematics in the normal range. We conclude that, despite the considerable differences in mathematics curricula from 7 to 9 years, the same genetic effects largely operate at the two ages. PMID:17539370
Haworth, Claire M A; Kovas, Yulia; Petrill, Stephen A; Plomin, Robert
2007-02-01
A previous publication reported the etiology of mathematics performance in 7-year-old twins (Oliver et al., 2004). As part of the same longitudinal study we investigated low mathematics performance and normal variation in a representative United Kingdom sample of 1713 same-sex 9-year-old twins based on teacher-assessed National Curriculum standards. Univariate individual differences and DeFries-Fulker extremes analyses were performed. Similar to our results at 7 years, all mathematics scores at 9 years showed high heritability (.62-.75) and low shared environmental estimates (.00-.11) for both the low performance group and the full sample. Longitudinal analyses were performed from 7 to 9 years. These longitudinal analyses indicated strong genetic continuity from 7 to 9 years for both low performance and mathematics in the normal range. We conclude that, despite the considerable differences in mathematics curricula from 7 to 9 years, the same genetic effects largely operate at the two ages.
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2002-01-01
A recently developed variationally stable quasi-relativistic method, which is based on the low-order approximation to the method of normalized elimination of the small component, was incorporated into density functional theory (DFT). The new method was tested for diatomic molecules involving Ag, Cd, Au, and Hg by calculating equilibrium bond lengths, vibrational frequencies, and dissociation energies. The method is easy to implement into standard quantum chemical programs and leads to accurate results for the benchmark systems studied.
1983-06-16
has been advocated by Gnanadesikan and ilk (1969), and others in the literature. This suggests that, if we use the formal signficance test type...American Statistical Asso., 62, 1159-1178. Gnanadesikan , R., and Wilk, M..B. (1969). Data Analytic Methods in Multi- variate Statistical Analysis. In
Thickness related textural properties of retinal nerve fiber layer in color fundus images.
Odstrcilik, Jan; Kolar, Radim; Tornow, Ralf-Peter; Jan, Jiri; Budai, Attila; Mayer, Markus; Vodakova, Martina; Laemmer, Robert; Lamos, Martin; Kuna, Zdenek; Gazarek, Jiri; Kubena, Tomas; Cernosek, Pavel; Ronzhina, Marina
2014-09-01
Images of ocular fundus are routinely utilized in ophthalmology. Since an examination using fundus camera is relatively fast and cheap procedure, it can be used as a proper diagnostic tool for screening of retinal diseases such as the glaucoma. One of the glaucoma symptoms is progressive atrophy of the retinal nerve fiber layer (RNFL) resulting in variations of the RNFL thickness. Here, we introduce a novel approach to capture these variations using computer-aided analysis of the RNFL textural appearance in standard and easily available color fundus images. The proposed method uses the features based on Gaussian Markov random fields and local binary patterns, together with various regression models for prediction of the RNFL thickness. The approach allows description of the changes in RNFL texture, directly reflecting variations in the RNFL thickness. Evaluation of the method is carried out on 16 normal ("healthy") and 8 glaucomatous eyes. We achieved significant correlation (normals: ρ=0.72±0.14; p≪0.05, glaucomatous: ρ=0.58±0.10; p≪0.05) between values of the model predicted output and the RNFL thickness measured by optical coherence tomography, which is currently regarded as a standard glaucoma assessment device. The evaluation thus revealed good applicability of the proposed approach to measure possible RNFL thinning. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
Parallel gene analysis with allele-specific padlock probes and tag microarrays
Banér, Johan; Isaksson, Anders; Waldenström, Erik; Jarvius, Jonas; Landegren, Ulf; Nilsson, Mats
2003-01-01
Parallel, highly specific analysis methods are required to take advantage of the extensive information about DNA sequence variation and of expressed sequences. We present a scalable laboratory technique suitable to analyze numerous target sequences in multiplexed assays. Sets of padlock probes were applied to analyze single nucleotide variation directly in total genomic DNA or cDNA for parallel genotyping or gene expression analysis. All reacted probes were then co-amplified and identified by hybridization to a standard tag oligonucleotide array. The technique was illustrated by analyzing normal and pathogenic variation within the Wilson disease-related ATP7B gene, both at the level of DNA and RNA, using allele-specific padlock probes. PMID:12930977
van den Besselaar, A M H P; Chantarangkul, V; Angeloni, F; Binder, N B; Byrne, M; Dauer, R; Gudmundsdottir, B R; Jespersen, J; Kitchen, S; Legnani, C; Lindahl, T L; Manning, R A; Martinuzzo, M; Panes, O; Pengo, V; Riddell, A; Subramanian, S; Szederjesi, A; Tantanate, C; Herbel, P; Tripodi, A
2018-01-01
Essentials Two candidate International Standards for thromboplastin (coded RBT/16 and rTF/16) are proposed. International Sensitivity Index (ISI) of proposed standards was assessed in a 20-centre study. The mean ISI for RBT/16 was 1.21 with a between-centre coefficient of variation of 4.6%. The mean ISI for rTF/16 was 1.11 with a between-centre coefficient of variation of 5.7%. Background The availability of International Standards for thromboplastin is essential for the calibration of routine reagents and hence the calculation of the International Normalized Ratio (INR). Stocks of the current Fourth International Standards are running low. Candidate replacement materials have been prepared. This article describes the calibration of the proposed Fifth International Standards for thromboplastin, rabbit, plain (coded RBT/16) and for thromboplastin, recombinant, human, plain (coded rTF/16). Methods An international collaborative study was carried out for the assignment of International Sensitivity Indexes (ISIs) to the candidate materials, according to the World Health Organization (WHO) guidelines for thromboplastins and plasma used to control oral anticoagulant therapy with vitamin K antagonists. Results Results were obtained from 20 laboratories. In several cases, deviations from the ISI calibration model were observed, but the average INR deviation attributabled to the model was not greater than 10%. Only valid ISI assessments were used to calculate the mean ISI for each candidate. The mean ISI for RBT/16 was 1.21 (between-laboratory coefficient of variation [CV]: 4.6%), and the mean ISI for rTF/16 was 1.11 (between-laboratory CV: 5.7%). Conclusions The between-laboratory variation of the ISI for candidate material RBT/16 was similar to that of the Fourth International Standard (RBT/05), and the between-laboratory variation of the ISI for candidate material rTF/16 was slightly higher than that of the Fourth International Standard (rTF/09). The candidate materials have been accepted by WHO as the Fifth International Standards for thromboplastin, rabbit plain, and thromboplastin, recombinant, human, plain. © 2017 International Society on Thrombosis and Haemostasis.
NASA Astrophysics Data System (ADS)
Carney, G. D.; Adler-Golden, S. M.; Lesseski, D. C.
1986-04-01
This paper reports (1) improved values for low-lying vibration intervals of H3(+), H2D(+), D2H(+), and D3(+) calculated using the variational method and Simons-Parr-Finlan (1973) representations of the Carney-Porter (1976) and Dykstra-Swope (1979) ab initio H3(+) potential energy surfaces, (2) quartic normal coordinate force fields for isotopic H3(+) molecules, (3) comparisons of variational and second-order perturbation theory, and (4) convergence properties of the Lai-Hagstrom internal coordinate vibrational Hamiltonian. Standard deviations between experimental and ab initio fundamental vibration intervals of H3(+), H2D(+), D2H(+), and D3(+) for these potential surfaces are 6.9 (Carney-Porter) and 1.2/cm (Dykstra-Swope). The standard deviations between perturbation theory and exact variational fundamentals are 5 and 10/cm for the respective surfaces. The internal coordinate Hamiltonian is found to be less efficient than the previously employed 't' coordinate Hamiltonian for these molecules, except in the case of H2D(+).
Canali, Inesângela; Petersen Schmidt Rosito, Letícia; Siliprandi, Bruno; Giugno, Cláudia; Selaimen da Costa, Sady
The diagnosis of Eustachian tube dysfunctions is essential for better understanding of the pathogenesis of chronic otitis media. A series of tests to assess tube function are described in the literature; however, they are methodologically heterogeneous, with differences ranging from application protocols to standardization of tests and their results. To evaluate the variation in middle ear pressure in patients with tympanic membrane retraction and in normal patients during tube function tests, as well as to evaluate intra-individual variation between these tests. An observational, contemporary, cross-sectional study was conducted, in which the factor under study was the variation in middle ear pressure during tube function tests (Valsalva maneuver, sniff test, Toynbee maneuver) in healthy patients and in patients with mild and moderate/severe tympanic retraction. A total of 38 patients (76 ears) were included in the study. Patients underwent tube function tests at two different time points to determine pressure measurements after each maneuver. Statistical analysis was performed using SPSS software, version 18.0, considering p-values <0.05 as statistically significant. Mean (standard deviation) age was 11 (2.72) years; 55.3% of patients were male and 44.7% female. The prevalence of type A tympanogram was higher among participants with healthy ears and those with mild retraction, whereas type C tympanograms were more frequent in the moderate/severe retraction group. An increase in middle ear pressure was observed during the Valsalva maneuver at the first time point evaluated in all three groups of ears (p=0.012). The variation in pressure was not significant either for the sniff test or for the Toynbee maneuver at the two time points evaluated (p≥0.05). Agreement between measurements obtained at the two different time points was weak to moderate for all tests in all three groups of ears, and the variations in discrepancy between measurements were higher in ears with moderate/severe tympanic retraction. In this study population, the mean pressure in the middle ear showed significant variation only during the Valsalva maneuver at the first time point evaluated in the three groups of ears. Normal ears and those with mild retraction behaved similarly in all tests. The tested maneuvers exhibited weak to moderate intra-individual variation, with the greatest variation occurring in ears with moderate/severe retraction. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Gun Testing Ballistics Issues for Insensitive Munitions Fragment Impact Testing
NASA Astrophysics Data System (ADS)
Baker, Ernest; Schultz, Emmanuel; NATO Munitions Safety Information Analysis Centre Team
2017-06-01
The STANAG 4496 Ed. 1 Fragment Impact, Munitions Test Procedure is normally conducted by gun launching a projectile for attack against a munition. The purpose of this test is to assess the reaction of a munition impacted by a fragment. The test specifies a standardized projectile (fragment) with a standard test velocity of 2530+/-90 m/s, or an alternate test velocity of 1830+/-60 m/s. The standard test velocity can be challenging to achieve and has several loosely defined and undefined characteristics that can affect the test item response. This publication documents the results of an international review of the STANAG 4496 related to the fragment impact test. To perform the review, MSIAC created a questionnaire in conjunction with the custodian of this STANAG and sent it to test centers. Fragment velocity variation, projectile tilt upon impact and aim point variation were identified as observed gun testing issues. Achieving 2530 m/s consistently and cost effectively can be challenging. The aim point of impact of the fragment is chosen with the objective of obtaining the most violent reaction. No tolerance for aim point is specified, although aim point variation can be a source for IM response variation. Fragment tilt on impact is also unspecified. The standard fragment fabricated from a variety of different steels which have a significant margin for mechanical properties. These, as well as other gun testing issues, have significant implications to resulting IM response.
The affect of tissue depth variation on craniofacial reconstructions.
Starbuck, John M; Ward, Richard E
2007-10-25
We examined the affect of tissue depth variation on the reconstruction of facial form, through the application of the American method, utilizing published tissue depth measurements for emaciated, normal, and obese faces. In this preliminary study, three reconstructions were created on reproductions of the same skull for each set of tissue depth measurements. The resulting morphological variation was measured quantitatively using the anthropometric craniofacial variability index (CVI). This method employs 16 standard craniofacial anthropometric measurements and the results reflect "pattern variation" or facial harmony. We report no appreciable variation in the quantitative measure of the pattern facial form obtained from the three different sets of tissue depths. Facial similarity was assessed qualitatively utilizing surveys of photographs of the three reconstructions. Surveys indicated that subjects frequently perceived the reconstructions as representing different individuals. This disagreement indicates that size of the face may blind observers to similarities in facial form. This research is significant because it illustrates the confounding effect that normal human variation contributes in the successful recognition of individuals from a representational three-dimensional facial reconstruction. Research results suggest that successful identification could be increased if multiple reconstructions were created which reflect a wide range of possible outcomes for facial form. The creation of multiple facial images, from a single skull, will be facilitated as computerized versions of facial reconstruction are further developed and refined.
ERIC Educational Resources Information Center
Sharma, Kshitij; Chavez-Demoulin, Valérie; Dillenbourg, Pierre
2017-01-01
The statistics used in education research are based on central trends such as the mean or standard deviation, discarding outliers. This paper adopts another viewpoint that has emerged in statistics, called extreme value theory (EVT). EVT claims that the bulk of normal distribution is comprised mainly of uninteresting variations while the most…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polsdofer, E; Crilly, R
Purpose: This study investigates the effect of eye size and eccentricity on doses to critical tissues by simulating doses in the Plaque Simulator (v. 6.3.1) software. Present OHSU plaque brachytherapy treatment focuses on delivering radiation to the tumor measured with ocular ultrasound plus a small margin and assumes the orbit has the dimensions of a “standard eye.” Accurately modeling the dimensions of the orbit requires a high resolution ocular CT. This study quantifies how standard differences in equatorial diameters and eccentricity affect calculated doses to critical structures in order to query the justification of the additional CT scan to themore » treatment planning process. Methods: Tumors of 10 mm × 10 mm × 5 mm were modeled at the 12:00:00 hour with a latitude of 45 degrees. Right eyes were modeled at a number of equatorial diameters from 17.5 to 28 mm for each of the standard non-notched COMS plaques with silastic inserts. The COMS plaques were fully loaded with uniform activity, centered on the tumor, and prescribed to a common tumor dose (85 Gy/100 hours). Variations in the calculated doses to normal structures were examined to see if the changes were significant. Results: The calculated dose to normal structures show a marked dependence on eye geometry. This is exemplified by fovea dose which more than doubled in the smaller eyes and nearly halved in the larger model. Additional significant dependence was found in plaque size on the calculated dose in spite of all plaques giving the same dose to the prescription point. Conclusion: The variation in dose with eye dimension fully justifies the addition of a high resolution ocular CT to the planning technique. Additional attention must be made to plaque size beyond simply covering the tumor when considering normal tissue dose.« less
Measuring lip force by oral screens. Part 1: Importance of screen size and individual variability.
Wertsén, Madeleine; Stenberg, Manne
2017-06-01
To reduce drooling and facilitate food transport in rehabilitation of patients with oral motor dysfunction, lip force can be trained using an oral screen. Longitudinal studies evaluating the effect of training require objective methods. The aim of this study was to evaluate a method for measuring lip strength, to investigate normal values and fluctuation of lip force in healthy adults on 1 occasion and over time, to study how the size of the screen affects the force, to evaluate the most appropriate measure of reliability, and to identify force performed in relation to gender. Three different sizes of oral screens were used to measure the lip force for 24 healthy adults on 3 different occasions, during a period of 6 months, using an apparatus based on strain gauge. The maximum lip force as evaluated with this method depends on the area of the screen size. By calculating the projected area of the screen, the lip force could be normalized to an oral screen pressure quantity expressed in kPa, which can be used for comparing measurements from screens with different sizes. Both the mean value and standard deviation were shown to vary between individuals. The study showed no differences regarding gender and only small variation with age. Normal variation over time (months) may be up to 3 times greater than the standard error of measurement at a certain occasion. The lip force increases in relation to the projected area of the screen. No general standard deviation can be assigned to the method and all measurements should be analyzed individually based on oral screen pressure to compensate for different screen sizes.
An enzyme-linked immunosorbent assay for the quantification of serum platelet-bindable IgG.
Howe, S E; Lynch, D M; Lynch, J M
1984-01-01
An enzyme-linked immunosorbent assay (ELISA) using F(ab')2 peroxidase-labeled antihuman immunoglobulin and o-phenylenediamine dihydrochloride (OPD) as a substrate was developed to measure serum platelet bindable IgG (S-PBIgG). The assay was made quantitative by standardizing the number of normal "target" platelets bound to microtiter plate wells, and by incorporating quantitated IgG standards with each microtiter plate tested to prepare a standard calibration curve. By this method, S-PBIgG for normal individuals was 3.4 +/- 1.6 fg per platelet (mean +/- 1 SD; n = 40). Increased S-PBIgG levels were detected in 36 of 40 patients with clinical autoimmune thrombocytopenia (ATP), ranging from 7.0 to 85 fg per platelet. Normal S-PBIgG levels were found in 34 of 40 patients with nonimmune thrombocytopenia. This method showed a sensitivity of 90 percent, specificity of 85 percent, and in the sample population studied, a positive predictive value of 0.86 and a negative predictive value of 0.90. This assay is highly reproducible (coefficient of variation was 6.8%) and appears useful in the evaluation of patients with suspected immune-mediated thrombocytopenia.
Normalization of RNA-seq data using factor analysis of control genes or samples
Risso, Davide; Ngai, John; Speed, Terence P.; Dudoit, Sandrine
2015-01-01
Normalization of RNA-seq data has proven essential to ensure accurate inference of expression levels. Here we show that usual normalization approaches mostly account for sequencing depth and fail to correct for library preparation and other more-complex unwanted effects. We evaluate the performance of the External RNA Control Consortium (ERCC) spike-in controls and investigate the possibility of using them directly for normalization. We show that the spike-ins are not reliable enough to be used in standard global-scaling or regression-based normalization procedures. We propose a normalization strategy, remove unwanted variation (RUV), that adjusts for nuisance technical effects by performing factor analysis on suitable sets of control genes (e.g., ERCC spike-ins) or samples (e.g., replicate libraries). Our approach leads to more-accurate estimates of expression fold-changes and tests of differential expression compared to state-of-the-art normalization methods. In particular, RUV promises to be valuable for large collaborative projects involving multiple labs, technicians, and/or platforms. PMID:25150836
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brzoska, B.; Depisch, F.; Fuchs, H.P.
To analyze the influence of prepressurization on fuel rod behavior, a parametric study has been performed that considers the effects of as-fabricated fuel rod internal prepressure on the normal operation and postulated loss-of-coolant accident (LOCA) rod behavior of a 1300-MW(electric) Kraftwerk Union (KWU) standard pressurized water reactor nuclear power plant. A variation of the prepressure in the range from 15 to 35 bars has only a slight influence on normal operation behavior. Considering the LOCA behavior, only a small temperature increase results from prepressure reduction, while the core-wide straining behavior is improved significantly. The KWU prepressurization takes both conditions intomore » account.« less
Hawking radiation and classical tunneling: A ray phase space approach
NASA Astrophysics Data System (ADS)
Tracy, E. R.; Zhigunov, D.
2016-01-01
Acoustic waves in fluids undergoing the transition from sub- to supersonic flow satisfy governing equations similar to those for light waves in the immediate vicinity of a black hole event horizon. This acoustic analogy has been used by Unruh and others as a conceptual model for "Hawking radiation." Here, we use variational methods, originally introduced by Brizard for the study of linearized MHD, and ray phase space methods, to analyze linearized acoustics in the presence of background flows. The variational formulation endows the evolution equations with natural Hermitian and symplectic structures that prove useful for later analysis. We derive a 2 × 2 normal form governing the wave evolution in the vicinity of the "event horizon." This shows that the acoustic model can be reduced locally (in ray phase space) to a standard (scalar) tunneling process weakly coupled to a unidirectional non-dispersive wave (the "incoming wave"). Given the normal form, the Hawking "thermal spectrum" can be derived by invoking standard tunneling theory, but only by ignoring the coupling to the incoming wave. Deriving the normal form requires a novel extension of the modular ray-based theory used previously to study tunneling and mode conversion in plasmas. We also discuss how ray phase space methods can be used to change representation, which brings the problem into a form where the wave functions are less singular than in the usual formulation, a fact that might prove useful in numerical studies.
Finding a Needle in a Climate Haystack
NASA Astrophysics Data System (ADS)
Verosub, K. L.; Medrano, R.; Valentine, M.
2014-12-01
We are studying the regional impact of volcanic eruptions that might have caused global cooling using high-quality annual-resolution proxy records of natural phenomena, such as tree-ring widths, and cultural events, such as the dates of the beginning of grape and rye harvests. To do this we need to determine if the year following an eruption was significantly colder and wetter than preceding or subsequent years as measured by any given proxy and if that year is consistently cold and wet across different proxies. The problem is complicated by the fact that normal inter-annual variations in any given proxy can be quite large and can obscure any volcanological impact and by the fact that inter-annual variations for different proxies will have different means and standard deviations. We address the first problem by assuming that on a regional scale, the inter-annual variations of different proxies are at best only weakly correlated and that, in the absence of a volcanological signal, these variations will average out on a regional scale. We address the second problem by renormalizing each record so that it has the same mean and standard deviation over a given time interval. We then sum the re-normalized records on a year-by-year basis and look for years with significantly higher total scores. The method can also be used to assess the statistical significance of an anomalous value. Our initial analysis of records primarily from the Northern Hemisphere shows that the years 1601 and 1816 were significantly colder and wetter than any others in the past 500 years. These years followed the eruptions of Huayanaputina in Chile and Tambora in Indonesia, respectively, by one year. The years 1698 and 1837 also show up as being climatologically severe although they have not (yet) been associated with specific volcanic eruptions.
Flethøj, Mette; Kanters, Jørgen K; Pedersen, Philip J; Haugaard, Maria M; Carstensen, Helena; Olsen, Lisbeth H; Buhl, Rikke
2016-11-28
Although premature beats are a matter of concern in horses, the interpretation of equine ECG recordings is complicated by a lack of standardized analysis criteria and a limited knowledge of the normal beat-to-beat variation of equine cardiac rhythm. The purpose of this study was to determine the appropriate threshold levels of maximum acceptable deviation of RR intervals in equine ECG analysis, and to evaluate a novel two-step timing algorithm by quantifying the frequency of arrhythmias in a cohort of healthy adult endurance horses. Beat-to-beat variation differed considerably with heart rate (HR), and an adaptable model consisting of three different HR ranges with separate threshold levels of maximum acceptable RR deviation was consequently defined. For resting HRs <60 beats/min (bpm) the threshold level of RR deviation was set at 20%, for HRs in the intermediate range between 60 and 100 bpm the threshold was 10%, and for exercising HRs >100 bpm, the threshold level was 4%. Supraventricular premature beats represented the most prevalent arrhythmia category with varying frequencies in seven horses at rest (median 7, range 2-86) and six horses during exercise (median 2, range 1-24). Beat-to-beat variation of equine cardiac rhythm varies according to HR, and threshold levels in equine ECG analysis should be adjusted accordingly. Standardization of the analysis criteria will enable comparisons of studies and follow-up examinations of patients. A small number of supraventricular premature beats appears to be a normal finding in endurance horses. Further studies are required to validate the findings and determine the clinical significance of premature beats in horses.
Lack of transferability between two automated immunoassays for serum IGF-I measurement.
Gomez-Gomez, Carolina; Iglesias, Eva M; Barallat, Jaume; Moreno, Fernando; Biosca, Carme; Pastor, Mari-Cruz; Granada, Maria-Luisa
2014-01-01
IGF-I is a clinically relevant protein in the diagnosis and monitoring of treatment of growth disor- ders. The Growth Hormone Research Society and the International IGF Research Society have encouraged the adoption of a universal calibration for immunoassays to improve standardization of IGF-I measurements, but currently commercial assays are calibrated either against the old WHO IRR 87/518 or the new WHO 02/254. We compared two IGF-I immunochemiluminescent assays: IMMULITE® 2000 (Siemens) and LIAISON® (DiaSorin), which differ in their standardization, and verified their precision according to quality specifications based on biological variation and their linear range. 62 patient serum samples were analyzed for both assays and compared according to standards of the Clinical and Laboratory Standards Institute (CLSI), EP9-A2-IR. Precision was verified according to CLSI EP15- A2. Optimal coefficient of variation (CVo) and desirable coefficient of variation (CVd) for IGF-I assays were calculated as quality specifications based on the biological variability, in order to assess if the interassay analytical CV (CVa1) in the two methods were appropriate. Two dilution series using the 1st WHO International Standard (WHO IS) for IGF-I 02/254 were used to verify and compare the linearity range. The regression analysis showed constant and proportional differences for serum samples (slope b = 0.8115 (CI 95% CI; 0.7575-0.8556); intercept a = 33.6873 (95% CI: 23.3613-44.0133) between assays and similar pro- portional differences for WHO IS 02/254 standard dilutions series (slope b = 0.8024 (CI 95% CI; 0.7560-0.8616); intercept a = 6.9623 (95% CI: -2.0819-18.4383) between assays. Within-laboratory coefficients of variation for low and high levels were 2.82% and 3.80% for IMMULITE® 2000 and 3.58% and 2.14% for LIAISON®, respecttively. IGF-I concentrations measured by both assays are not transferable. The results emphasize the need to express IGF-I concentrations in standard deviation score (SDS) according to a matched normal population of the same age and gender. Within-laboratory precision in both methods met quality specifications derived from biological variation.
How to normalize metatranscriptomic count data for differential expression analysis.
Klingenberg, Heiner; Meinicke, Peter
2017-01-01
Differential expression analysis on the basis of RNA-Seq count data has become a standard tool in transcriptomics. Several studies have shown that prior normalization of the data is crucial for a reliable detection of transcriptional differences. Until now it has not been clear whether and how the transcriptomic approach can be used for differential expression analysis in metatranscriptomics. We propose a model for differential expression in metatranscriptomics that explicitly accounts for variations in the taxonomic composition of transcripts across different samples. As a main consequence the correct normalization of metatranscriptomic count data under this model requires the taxonomic separation of the data into organism-specific bins. Then the taxon-specific scaling of organism profiles yields a valid normalization and allows us to recombine the scaled profiles into a metatranscriptomic count matrix. This matrix can then be analyzed with statistical tools for transcriptomic count data. For taxon-specific scaling and recombination of scaled counts we provide a simple R script. When applying transcriptomic tools for differential expression analysis directly to metatranscriptomic data with an organism-independent (global) scaling of counts the resulting differences may be difficult to interpret. The differences may correspond to changing functional profiles of the contributing organisms but may also result from a variation of taxonomic abundances. Taxon-specific scaling eliminates this variation and therefore the resulting differences actually reflect a different behavior of organisms under changing conditions. In simulation studies we show that the divergence between results from global and taxon-specific scaling can be drastic. In particular, the variation of organism abundances can imply a considerable increase of significant differences with global scaling. Also, on real metatranscriptomic data, the predictions from taxon-specific and global scaling can differ widely. Our studies indicate that in real data applications performed with global scaling it might be impossible to distinguish between differential expression in terms of transcriptomic changes and differential composition in terms of changing taxonomic proportions. As in transcriptomics, a proper normalization of count data is also essential for differential expression analysis in metatranscriptomics. Our model implies a taxon-specific scaling of counts for normalization of the data. The application of taxon-specific scaling consequently removes taxonomic composition variations from functional profiles and therefore provides a clear interpretation of the observed functional differences.
Meyer, Swanhild U.; Kaiser, Sebastian; Wagner, Carola; Thirion, Christian; Pfaffl, Michael W.
2012-01-01
Background Adequate normalization minimizes the effects of systematic technical variations and is a prerequisite for getting meaningful biological changes. However, there is inconsistency about miRNA normalization performances and recommendations. Thus, we investigated the impact of seven different normalization methods (reference gene index, global geometric mean, quantile, invariant selection, loess, loessM, and generalized procrustes analysis) on intra- and inter-platform performance of two distinct and commonly used miRNA profiling platforms. Methodology/Principal Findings We included data from miRNA profiling analyses derived from a hybridization-based platform (Agilent Technologies) and an RT-qPCR platform (Applied Biosystems). Furthermore, we validated a subset of miRNAs by individual RT-qPCR assays. Our analyses incorporated data from the effect of differentiation and tumor necrosis factor alpha treatment on primary human skeletal muscle cells and a murine skeletal muscle cell line. Distinct normalization methods differed in their impact on (i) standard deviations, (ii) the area under the receiver operating characteristic (ROC) curve, (iii) the similarity of differential expression. Loess, loessM, and quantile analysis were most effective in minimizing standard deviations on the Agilent and TLDA platform. Moreover, loess, loessM, invariant selection and generalized procrustes analysis increased the area under the ROC curve, a measure for the statistical performance of a test. The Jaccard index revealed that inter-platform concordance of differential expression tended to be increased by loess, loessM, quantile, and GPA normalization of AGL and TLDA data as well as RGI normalization of TLDA data. Conclusions/Significance We recommend the application of loess, or loessM, and GPA normalization for miRNA Agilent arrays and qPCR cards as these normalization approaches showed to (i) effectively reduce standard deviations, (ii) increase sensitivity and accuracy of differential miRNA expression detection as well as (iii) increase inter-platform concordance. Results showed the successful adoption of loessM and generalized procrustes analysis to one-color miRNA profiling experiments. PMID:22723911
Isotope-abundance variations and atomic weights of selected elements: 2016 (IUPAC Technical Report)
Coplen, Tyler B.; Shrestha, Yesha
2016-01-01
There are 63 chemical elements that have two or more isotopes that are used to determine their standard atomic weights. The isotopic abundances and atomic weights of these elements can vary in normal materials due to physical and chemical fractionation processes (not due to radioactive decay). These variations are well known for 12 elements (hydrogen, lithium, boron, carbon, nitrogen, oxygen, magnesium, silicon, sulfur, chlorine, bromine, and thallium), and the standard atomic weight of each of these elements is given by IUPAC as an interval with lower and upper bounds. Graphical plots of selected materials and compounds of each of these elements have been published previously. Herein and at the URL http://dx.doi.org/10.5066/F7GF0RN2, we provide isotopic abundances, isotope-delta values, and atomic weights for each of the upper and lower bounds of these materials and compounds.
Improved tomographic reconstructions using adaptive time-dependent intensity normalization.
Titarenko, Valeriy; Titarenko, Sofya; Withers, Philip J; De Carlo, Francesco; Xiao, Xianghui
2010-09-01
The first processing step in synchrotron-based micro-tomography is the normalization of the projection images against the background, also referred to as a white field. Owing to time-dependent variations in illumination and defects in detection sensitivity, the white field is different from the projection background. In this case standard normalization methods introduce ring and wave artefacts into the resulting three-dimensional reconstruction. In this paper the authors propose a new adaptive technique accounting for these variations and allowing one to obtain cleaner normalized data and to suppress ring and wave artefacts. The background is modelled by the product of two time-dependent terms representing the illumination and detection stages. These terms are written as unknown functions, one scaled and shifted along a fixed direction (describing the illumination term) and one translated by an unknown two-dimensional vector (describing the detection term). The proposed method is applied to two sets (a stem Salix variegata and a zebrafish Danio rerio) acquired at the parallel beam of the micro-tomography station 2-BM at the Advanced Photon Source showing significant reductions in both ring and wave artefacts. In principle the method could be used to correct for time-dependent phenomena that affect other tomographic imaging geometries such as cone beam laboratory X-ray computed tomography.
Experimental studies of breaking of elastic tired wheel under variable normal load
NASA Astrophysics Data System (ADS)
Fedotov, A. I.; Zedgenizov, V. G.; Ovchinnikova, N. I.
2017-10-01
The paper analyzes the braking of a vehicle wheel subjected to disturbances of normal load variations. Experimental tests and methods for developing test modes as sinusoidal force disturbances of the normal wheel load were used. Measuring methods for digital and analogue signals were used as well. Stabilization of vehicle wheel braking subjected to disturbances of normal load variations is a topical issue. The paper suggests a method for analyzing wheel braking processes under disturbances of normal load variations. A method to control wheel baking processes subjected to disturbances of normal load variations was developed.
Abuasbi, Falastine; Lahham, Adnan; Abdel-Raziq, Issam Rashid
2018-04-01
This study was focused on the measurement of residential exposure to power frequency (50-Hz) electric and magnetic fields in the city of Ramallah-Palestine. A group of 32 semi-randomly selected residences distributed amongst the city were under investigations of fields variations. Measurements were performed with the Spectrum Analyzer NF-5035 and were carried out at one meter above ground level in the residence's bedroom or living room under both zero and normal-power conditions. Fields' variations were recorded over 6-min and some times over few hours. Electric fields under normal-power use were relatively low; ~59% of residences experienced mean electric fields <10 V/m. The highest mean electric field of 66.9 V/m was found at residence R27. However, electric field values were log-normally distributed with geometric mean and geometric standard deviation of 9.6 and 3.5 V/m, respectively. Background electric fields measured under zero-power use, were very low; ~80% of residences experienced background electric fields <1 V/m. Under normal-power use, the highest mean magnetic field (0.45 μT) was found at residence R26 where an indoor power substation exists. However, ~81% of residences experienced mean magnetic fields <0.1 μT. Magnetic fields measured inside the 32 residences showed also a log-normal distribution with geometric mean and geometric standard deviation of 0.04 and 3.14 μT, respectively. Under zero-power conditions, ~7% of residences experienced average background magnetic field >0.1 μT. Fields from appliances showed a maximum mean electric field of 67.4 V/m from hair dryer, and maximum mean magnetic field of 13.7 μT from microwave oven. However, no single result surpassed the ICNIRP limits for general public exposures to ELF fields, but still, the interval 0.3-0.4 μT for possible non-thermal health impacts of exposure to ELF magnetic fields, was experienced in 13% of the residences.
Evaluating acoustic speaker normalization algorithms: evidence from longitudinal child data.
Kohn, Mary Elizabeth; Farrington, Charlie
2012-03-01
Speaker vowel formant normalization, a technique that controls for variation introduced by physical differences between speakers, is necessary in variationist studies to compare speakers of different ages, genders, and physiological makeup in order to understand non-physiological variation patterns within populations. Many algorithms have been established to reduce variation introduced into vocalic data from physiological sources. The lack of real-time studies tracking the effectiveness of these normalization algorithms from childhood through adolescence inhibits exploration of child participation in vowel shifts. This analysis compares normalization techniques applied to data collected from ten African American children across five time points. Linear regressions compare the reduction in variation attributable to age and gender for each speaker for the vowels BEET, BAT, BOT, BUT, and BOAR. A normalization technique is successful if it maintains variation attributable to a reference sociolinguistic variable, while reducing variation attributable to age. Results indicate that normalization techniques which rely on both a measure of central tendency and range of the vowel space perform best at reducing variation attributable to age, although some variation attributable to age persists after normalization for some sections of the vowel space. © 2012 Acoustical Society of America
Multicentre imaging measurements for oncology and in the brain
Tofts, P S; Collins, D J
2011-01-01
Multicentre imaging studies of brain tumours (and other tumour and brain studies) can enable a large group of patients to be studied, yet they present challenging technical problems. Differences between centres can be characterised, understood and minimised by use of phantoms (test objects) and normal control subjects. Normal white matter forms an excellent standard for some MRI parameters (e.g. diffusion or magnetisation transfer) because the normal biological range is low (<2–3%) and the measurements will reflect this, provided the acquisition sequence is controlled. MR phantoms have benefits and they are necessary for some parameters (e.g. tumour volume). Techniques for temperature monitoring and control are given. In a multicentre study or treatment trial, between-centre variation should be minimised. In a cross-sectional study, all groups should be represented at each centre and the effect of centre added as a covariate in the statistical analysis. In a serial study of disease progression or treatment effect, individual patients should receive all of their scans at the same centre; the power is then limited by the within-subject reproducibility. Sources of variation that are generic to any imaging method and analysis parameters include MR sequence mismatch, B1 errors, CT effective tube potential, region of interest generation and segmentation procedure. Specific tissue parameters are analysed in detail to identify the major sources of variation and the most appropriate phantoms or normal studies. These include dynamic contrast-enhanced and dynamic susceptibility contrast gadolinium imaging, T1, diffusion, magnetisation transfer, spectroscopy, tumour volume, arterial spin labelling and CT perfusion. PMID:22433831
NASA Astrophysics Data System (ADS)
Gokce, Emine; Shuford, Christopher M.; Franck, William L.; Dean, Ralph A.; Muddiman, David C.
2011-12-01
Normalization of spectral counts (SpCs) in label-free shotgun proteomic approaches is important to achieve reliable relative quantification. Three different SpC normalization methods, total spectral count (TSpC) normalization, normalized spectral abundance factor (NSAF) normalization, and normalization to selected proteins (NSP) were evaluated based on their ability to correct for day-to-day variation between gel-based sample preparation and chromatographic performance. Three spectral counting data sets obtained from the same biological conidia sample of the rice blast fungus Magnaporthe oryzae were analyzed by 1D gel and liquid chromatography-tandem mass spectrometry (GeLC-MS/MS). Equine myoglobin and chicken ovalbumin were spiked into the protein extracts prior to 1D-SDS- PAGE as internal protein standards for NSP. The correlation between SpCs of the same proteins across the different data sets was investigated. We report that TSpC normalization and NSAF normalization yielded almost ideal slopes of unity for normalized SpC versus average normalized SpC plots, while NSP did not afford effective corrections of the unnormalized data. Furthermore, when utilizing TSpC normalization prior to relative protein quantification, t-testing and fold-change revealed the cutoff limits for determining real biological change to be a function of the absolute number of SpCs. For instance, we observed the variance decreased as the number of SpCs increased, which resulted in a higher propensity for detecting statistically significant, yet artificial, change for highly abundant proteins. Thus, we suggest applying higher confidence level and lower fold-change cutoffs for proteins with higher SpCs, rather than using a single criterion for the entire data set. By choosing appropriate cutoff values to maintain a constant false positive rate across different protein levels (i.e., SpC levels), it is expected this will reduce the overall false negative rate, particularly for proteins with higher SpCs.
Morphological variations of papillary muscles in the mitral valve complex in human cadaveric hearts.
Gunnal, Sandhya Arvind; Wabale, Rajendra Namdeo; Farooqui, Mujeebuddin Samsamuddin
2013-01-01
Papillary muscle rupture and dysfunction can lead to complications of prolapsed mitral valve and mitral regurgitation. Multiple operative procedures of the papillary muscles, such as resection, repositioning and realignment, are carried out to restore normal physiological function. Therefore, it is important to know both the variations and the normal anatomy of papillary muscles. This study was carried out on 116 human cadaveric hearts. The left ventricles were opened along the left border in order to view the papillary muscles. The number, shape, position and pattern of the papillary muscles were observed. In this series, the papillary muscles were mostly found in groups instead of in twos, as is described in standard textbooks. Four different shapes of papillary muscles were identified - conical, broad-apexed, pyramidal and fan-shaped. We also discovered various patterns of papillary muscles. No two mitral valve complexes have the same architectural arrangement. Each case seems to be unique. Therefore, it is important for scientists worldwide to study the variations in the mitral valve complex in order to ascertain the reason behind each specific architectural arrangement. This will enable cardiothoracic surgeons to tailor the surgical procedures according to the individual papillary muscle pattern.
NASA Technical Reports Server (NTRS)
Dash, S. M.; York, B. J.; Sinha, N.; Dvorak, F. A.
1987-01-01
An overview of parabolic and PNS (Parabolized Navier-Stokes) methodology developed to treat highly curved sub and supersonic wall jets is presented. The fundamental data base to which these models were applied is discussed in detail. The analysis of strong curvature effects was found to require a semi-elliptic extension of the parabolic modeling to account for turbulent contributions to the normal pressure variations, as well as an extension to the turbulence models utilized, to account for the highly enhanced mixing rates observed in situations with large convex curvature. A noniterative, pressure split procedure is shown to extend parabolic models to account for such normal pressure variations in an efficient manner, requiring minimal additional run time over a standard parabolic approach. A new PNS methodology is presented to solve this problem which extends parabolic methodology via the addition of a characteristic base wave solver. Applications of this approach to analyze the interaction of wave and turbulence processes in wall jets is presented.
The effects of complementary and alternative medicine on the speech of patients with depression
NASA Astrophysics Data System (ADS)
Fraas, Michael; Solloway, Michele
2004-05-01
It is well documented that patients suffering from depression exhibit articulatory timing deficits and speech that is monotonous and lacking pitch variation. Traditional remediation of depression has left many patients with adverse side effects and ineffective outcomes. Recent studies indicate that many Americans are seeking complementary and alternative forms of medicine to supplement traditional therapy approaches. The current investigation wishes to determine the efficacy of complementary and alternative medicine (CAM) on the remediation of speech deficits associated with depression. Subjects with depression and normal controls will participate in an 8-week treatment session using polarity therapy, a form of CAM. Subjects will be recorded producing a series of spontaneous and narrative speech samples. Acoustic analysis of mean fundamental frequency (F0), variation in F0 (standard deviation of F0), average rate of F0 change, and pause and utterance durations will be conducted. Differences pre- and post-CAM therapy between subjects with depression and normal controls will be discussed.
Refractivity variations and propagation at Ultra High Frequency
NASA Astrophysics Data System (ADS)
Alam, I.; Najam-Ul-Islam, M.; Mujahid, U.; Shah, S. A. A.; Ul Haq, Rizwan
Present framework is established to deal with the refractivity variations normally affected the radio waves propagation at different frequencies, ranges and different environments. To deal such kind of effects, many researchers proposed several methodologies. One method is to use the parameters from meteorology to investigate these effects of variations in refractivity on propagation. These variations are region specific and we have selected a region of one kilometer height over the English Channel. We have constructed different modified refractivity profiles based on the local meteorological data. We have recorded more than 48 million received signal strength from a communication links of 50 km operating at 2015 MHz in the Ultra High Frequency band giving path loss between transmitting and receiving stations of the experimental setup. We have used parabolic wave equation method to simulate an hourly value of signal strength and compared the obtained simulated loss to the experimental loss. The analysis is made to compute refractivity distribution of standard (STD) and ITU (International Telecommunication Union) refractivity profiles for various evaporation ducts. It is found that a standard refractivity profile is better than the ITU refractivity profiles for the region at 2015 MHz. Further, it is inferred from the analysis of results that 10 m evaporation duct height is the dominant among all evaporation duct heights considered in the research.
A Research Program in Computer Technology
1979-01-01
barrier walls within the cell in a grid or "waffle" pattern, sepnrnting each pixel from its neighbors. The walls need not extend to the front surface...migration and degradation of display p(.rformanco. The grid can be made of photoresist film by standard photolithographic techniques. I xtruurrs. Using the EP...this variation is normally quite smooth, but significant. However, for use in a smart terminal, where visible cursor feedback is available or where
Pulse height response of an optical particle counter to monodisperse aerosols
NASA Technical Reports Server (NTRS)
Wilmoth, R. G.; Grice, S. S.; Cuda, V.
1976-01-01
The pulse height response of a right angle scattering optical particle counter has been investigated using monodisperse aerosols of polystyrene latex spheres, di-octyl phthalate and methylene blue. The results confirm previous measurements for the variation of mean pulse height as a function of particle diameter and show good agreement with the relative response predicted by Mie scattering theory. Measured cumulative pulse height distributions were found to fit reasonably well to a log normal distribution with a minimum geometric standard deviation of about 1.4 for particle diameters greater than about 2 micrometers. The geometric standard deviation was found to increase significantly with decreasing particle diameter.
NASA Technical Reports Server (NTRS)
Idso, S. B.; Jackson, R. D.; Reginato, R. J.
1976-01-01
A procedure is developed for removing data scatter in the thermal-inertia approach to remote sensing of soil moisture which arises from environmental variability in time and space. It entails the utilization of nearby National Weather Service air temperature measurements to normalize measured diurnal surface temperature variations to what they would have been for a day of standard diurnal air temperature variation, arbitrarily assigned to be 18 C. Tests of the procedure's basic premise on a bare loam soil and a crop of alfalfa indicate it to be conceptually sound. It is possible that the technique could also be useful in other thermal-inertia applications, such as lithographic mapping.
Physical Characteristics of Laboratory Tested Concrete as a Substituion of Gravel on Normal Concrete
NASA Astrophysics Data System (ADS)
Butar-butar, Ronald; Suhairiani; Wijaya, Kinanti; Sebayang, Nono
2018-03-01
Concrete technology is highly potential in the field of construction for structural and non-structural construction. The amount uses of this concrete material raise the problem of solid waste in the form of concrete remaining test results in the laboratory. This waste is usually just discarded and not economically valuable. In solving the problem, this experiment was made new materials by using recycle material in the form of recycled aggregate which aims to find out the strength characteristics of the used concrete as a gravel substitution material on the normal concrete and obtain the value of the substitution composition of gravel and used concrete that can achieve the strength of concrete according to the standard. Testing of concrete characteristic is one of the requirements before starting the concrete mixture. This test using SNI method (Indonesian National Standard) with variation of comparison (used concrete : gravel) were 15: 85%, 25: 75%, 35:65%, 50:50 %, 75: 25%. The results of physical tests obtained the mud content value of the mixture gravel and used concrete is 0.03 larger than the standard of SNI 03-4142-1996 that is equal to 1.03%. so the need watering or soaking before use. The water content test results show an increase in the water content value if the composition of the used concrete increases. While the specific gravity value for variation 15: 85% until 35: 65% fulfilled the requirements of SNI 03-1969-1990. the other variasion show the specifics gravity value included on the type of light materials.
Genkawa, Takuma; Shinzawa, Hideyuki; Kato, Hideaki; Ishikawa, Daitaro; Murayama, Kodai; Komiyama, Makoto; Ozaki, Yukihiro
2015-12-01
An alternative baseline correction method for diffuse reflection near-infrared (NIR) spectra, searching region standard normal variate (SRSNV), was proposed. Standard normal variate (SNV) is an effective pretreatment method for baseline correction of diffuse reflection NIR spectra of powder and granular samples; however, its baseline correction performance depends on the NIR region used for SNV calculation. To search for an optimal NIR region for baseline correction using SNV, SRSNV employs moving window partial least squares regression (MWPLSR), and an optimal NIR region is identified based on the root mean square error (RMSE) of cross-validation of the partial least squares regression (PLSR) models with the first latent variable (LV). The performance of SRSNV was evaluated using diffuse reflection NIR spectra of mixture samples consisting of wheat flour and granular glucose (0-100% glucose at 5% intervals). From the obtained NIR spectra of the mixture in the 10 000-4000 cm(-1) region at 4 cm intervals (1501 spectral channels), a series of spectral windows consisting of 80 spectral channels was constructed, and then SNV spectra were calculated for each spectral window. Using these SNV spectra, a series of PLSR models with the first LV for glucose concentration was built. A plot of RMSE versus the spectral window position obtained using the PLSR models revealed that the 8680–8364 cm(-1) region was optimal for baseline correction using SNV. In the SNV spectra calculated using the 8680–8364 cm(-1) region (SRSNV spectra), a remarkable relative intensity change between a band due to wheat flour at 8500 cm(-1) and that due to glucose at 8364 cm(-1) was observed owing to successful baseline correction using SNV. A PLSR model with the first LV based on the SRSNV spectra yielded a determination coefficient (R2) of 0.999 and an RMSE of 0.70%, while a PLSR model with three LVs based on SNV spectra calculated in the full spectral region gave an R2 of 0.995 and an RMSE of 2.29%. Additional evaluation of SRSNV was carried out using diffuse reflection NIR spectra of marzipan and corn samples, and PLSR models based on SRSNV spectra showed good prediction results. These evaluation results indicate that SRSNV is effective in baseline correction of diffuse reflection NIR spectra and provides regression models with good prediction accuracy.
Anatomy and Aesthetics of the Labia Minora: The Ideal Vulva?
Clerico, C; Lari, A; Mojallal, A; Boucher, F
2017-06-01
Female genital cosmetic surgery is becoming more and more widespread both in the field of plastic and gynaecological surgery. The increased demand for vulvar surgery is spurred by the belief that the vulva is abnormal in appearance. What is normal in terms of labial anatomy? Labia minora enlargement or hypertrophy remains a clinical diagnosis which is poorly defined as it could be considered a variation of the normal anatomy. Enlarged labia minora can cause functional, aesthetic and psychosocial problems. In reality, given the wide variety of vulvar morphology among people, it is a very subjective issue to define the "normal" vulva. The spread of nudity in the general media plays a major role in creating an artificial image and standards with regard to the ideal form. Physicians should be aware that the patient's self-perception of the normal or ideal vulva is highly influenced by the arguably distorted image related to our socio-psychological environment, as presented to us by the general media and internet. As physicians, we have to educate our patients on the variation of vulvar anatomy and the potential risks of these surgeries. Level of Evidence V This journal requires that authors assign a level of evidence to each article. For a full description of these evidence-based medicine ratings, please refer to Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Plaque formation and removal assessed in vivo in a novel repeated measures imaging methodology.
White, Donald J; Kozak, Kathy M; Baker, Rob; Saletta, Lisa
2006-01-01
A repeated measures digital imaging technique (Digital Plaque Image Analysis) was used to assess variations in plaque formation, including levels of plaque developed following evening and morning tooth brushing with a standard dentifrice, to establish a baseline for future assessments of antimicrobial formulations. Following a rigorous oral hygiene period, subjects were provided with a standard commercial (non-antibacterial) dentifrice and manual toothbrush and instructed to brush b.i.d., as normal. On six separate days over two weeks, subjects reported at three times for a daily plaque assessment: in the morning before oral hygiene, post-brushing, and in the afternoon post-brushing. Morning plaque levels covered approximately 10% of the measured dentition, and plaque was removed by 75% with morning tooth brushing. Plaque underwent rapid regrowth during the day, and averaged approximately 7% coverage by the afternoon. These results support the value of Digital Plaque Image Analysis in recording diurnal plaque variations and treatment effects, and suggest that assessment of oral hygiene efficacy (either mechanical or chemopreventive) accounts for diurnal variations in plaque formation. In addition, the results suggest that plaque regrowth and virulence activity overnight is a significant target for oral hygiene interventions.
Nakling, Jakob; Buhaug, Harald; Backe, Bjorn
2005-10-01
In a large unselected population of normal spontaneous pregnancies, to estimate the biologic variation of the interval from the first day of the last menstrual period to start of pregnancy, and the biologic variation of gestational length to delivery; and to estimate the random error of routine ultrasound assessment of gestational age in mid-second trimester. Cohort study of 11,238 singleton pregnancies, with spontaneous onset of labour and reliable last menstrual period. The day of delivery was predicted with two independent methods: According to the rule of Nägele and based on ultrasound examination in gestational weeks 17-19. For both methods, the mean difference between observed and predicted day of delivery was calculated. The variances of the differences were combined to estimate the variances of the two partitions of pregnancy. The biologic variation of the time from last menstrual period to pregnancy start was estimated to 7.0 days (standard deviation), and the standard deviation of the time to spontaneous delivery was estimated to 12.4 days. The estimate of the standard deviation of the random error of ultrasound assessed foetal age was 5.2 days. Even when the last menstrual period is reliable, the biologic variation of the time from last menstrual period to the real start of pregnancy is substantial, and must be taken into account. Reliable information about the first day of the last menstrual period is not equivalent with reliable information about the start of pregnancy.
Meta-analysis of two studies in the presence of heterogeneity with applications in rare diseases.
Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat
2017-07-01
Random-effects meta-analyses are used to combine evidence of treatment effects from multiple studies. Since treatment effects may vary across trials due to differences in study characteristics, heterogeneity in treatment effects between studies must be accounted for to achieve valid inference. The standard model for random-effects meta-analysis assumes approximately normal effect estimates and a normal random-effects model. However, standard methods based on this model ignore the uncertainty in estimating the between-trial heterogeneity. In the special setting of only two studies and in the presence of heterogeneity, we investigate here alternatives such as the Hartung-Knapp-Sidik-Jonkman method (HKSJ), the modified Knapp-Hartung method (mKH, a variation of the HKSJ method) and Bayesian random-effects meta-analyses with priors covering plausible heterogeneity values; R code to reproduce the examples is presented in an appendix. The properties of these methods are assessed by applying them to five examples from various rare diseases and by a simulation study. Whereas the standard method based on normal quantiles has poor coverage, the HKSJ and mKH generally lead to very long, and therefore inconclusive, confidence intervals. The Bayesian intervals on the whole show satisfying properties and offer a reasonable compromise between these two extremes. © 2016 The Authors. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Statistical considerations for grain-size analyses of tills
Jacobs, A.M.
1971-01-01
Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. ?? 1971 Plenum Publishing Corporation.
Schaefer, Maureen; Hackman, Lucina; Gallagher, John
2016-03-01
This study examines the accuracy of the Pyle and Hoerr radiographic atlas technique in an effort to document the extent of normal variation associated with developmental timings in the knee for purposes of age estimation. The atlas has been previously tested; however, accuracy rates were produced from a dataset, which spread in age from mostly 7-16 years. This study took a closer look at the younger age groups, examining radiographs from 297 children (147 female and 150 male) from birth to 6 years. Standard deviations representing the difference between the skeletal and chronological age were calculated according to two groupings. Each group represents episodes, or time periods, of differential developmental rates as expressed through the number of plates within the atlas dedicated to documenting each year of life. The beginning year of life is characterized by the most rapid of development as represented by the numerous image plates used to depict this time period. Individuals assigned to plates with a skeletal age between birth and 1 year were grouped collectively to document the variation associated with such rapidly changing morphology (SD = 2.5 months in female children; 2.3 months in male children). Years 1-3.8 years (female) and 1-4.5 years (male) were represented by two or three images within the atlas, and therefore, individuals assigned to plates with a skeletal age falling within this range were placed within a second grouping (SD = 5.2 months in female children; 7.0 months in male children). As expected, variation was observed to decrease as developmental processes accelerated in the younger children. The newly calculated standard deviations offer tighter predictions for estimating age in young children while at the same time maintaining an acceptable width that accounts for normal variation in developmental timings.
A catalyzing phantom for reproducible dynamic conversion of hyperpolarized [1-¹³C]-pyruvate.
Walker, Christopher M; Lee, Jaehyuk; Ramirez, Marc S; Schellingerhout, Dawid; Millward, Steven; Bankson, James A
2013-01-01
In vivo real time spectroscopic imaging of hyperpolarized ¹³C labeled metabolites shows substantial promise for the assessment of physiological processes that were previously inaccessible. However, reliable and reproducible methods of measurement are necessary to maximize the effectiveness of imaging biomarkers that may one day guide personalized care for diseases such as cancer. Animal models of human disease serve as poor reference standards due to the complexity, heterogeneity, and transient nature of advancing disease. In this study, we describe the reproducible conversion of hyperpolarized [1-¹³C]-pyruvate to [1-¹³C]-lactate using a novel synthetic enzyme phantom system. The rate of reaction can be controlled and tuned to mimic normal or pathologic conditions of varying degree. Variations observed in the use of this phantom compare favorably against within-group variations observed in recent animal studies. This novel phantom system provides crucial capabilities as a reference standard for the optimization, comparison, and certification of quantitative imaging strategies for hyperpolarized tracers.
Large Calcium Isotopic Variation in Peridotitic Xenoliths from North China Craton
NASA Astrophysics Data System (ADS)
Huang, S.; Zhao, X.; Zhang, Z.
2016-12-01
Calcium is the fifth most abundant element in the Earth. The Ca isotopic composition of the Earth is important in many aspects, ranging from tracing the Ca cycle on the Earth to comparing the Earth to other terrestrial planets. There is large mass-dependent Ca isotopic variation, measured as δ44/40Ca relative to a standard sample, in terrestrial igneous rocks: about 2 per mil in silicate rocks, compared to 3 per mil in carbonates. Therefore, a good understanding of the Ca isotopic variation in igneous rocks is necessary. Here we report Ca isotopic data on a series of peridotitic xenoliths from North China Craton (NCC). There is about 1 per mil δ44/40Ca variation in these NCC peridotites: The highest δ44/40Ca is close to typical mantle values, and the lowest δ44/40Ca is found in an Fe-rich peridotite, -1.13 relative to normal mantle (or -0.08 on the SRM 915a scale). This represents the lowest δ44/40Ca value ever reported for igneous rocks. Combined with published Fe isotopic data on the same samples, our data show a positive linear correlation between δ44/40Ca and δ57/54Fe in NCC peridotites. This trend is inconsistent with mixing a low-δ44/40Ca and -δ57/54Fe sedimentary component with a normal mantle component. Rather, it is best explained as the result of kinetic isotopic effect caused by melt-peridotite reaction on a time scale of several hundreds of years. In detail, basaltic melt reacts with peridotite, replaces orthopyroxene with clinopyroxene, and increases the Fo number of olivine. Consistent with this interpretation, our on-going Mg isotopic study shows that low-δ44/40Ca and -δ57/54Fe NCC peridotites also have heavier Mg isotopes compared to normal mantle. Our study shows that mantle metasomatism plays an important role generating stable isotopic variations within the Earth's mantle.
Abaye, Daniel A; Nielsen, Birthe; Boateng, Joshua S
2013-12-01
Homocysteine (Hcys) is a non-essential amino acid associated with a range of diseased and abnormal metabolic conditions. Hcys concentration in saliva is routinely determined by enzyme assays, which are broadly specific, but can be expensive and suffer from cross-reactivity. Total Hcys (tHcys) concentrations in eight healthy adults were determined to establish the inter-day variation during resting, normal and intensive physical activity, using the more sensitive analytical techniques of liquid chromatography and tandem mass spectrometry without prior derivatization. Saliva (~ 1.5 mL) was collected over four days; early morning (EA), normal activity (NA) and during physical activity (PA). Samples were processed by disulphide reduction, acetonitrile precipitation and then centrifugation-filtration. Extracts were chromatographically resolved and analysed on a quadrupole time- of-flight (QToF) mass spectrometer. The protonated [M+H]+, m/z 136.101 and product ([M+H]+- HCOOH) m/z 90.103 ions were then monitored against an internal standard (13CHcys) and external set of calibration standards. Mean tHcys concentration for the whole group, including exercise was 6.6 ± 8.0 (range 0.2 - 29.6 nmol/mL). Overall, concentration of tHcys was greater in males than the females but not significantly (p > 0.05). The mean EA concentration was significantly (p < 0.05) greater than NA for both males (p = 0340) and females (p = 0.0045). There were large within-subject variations (coefficient of variation; CV%; 24% to 103%). The limits of detection (LOD) and quantification (LOQ) were 0.07 and 0.22 nmol/mL, respectively. The procedure potentially provides a convenient means of analyzing salivary Hcys as a diagnostic disease marker.
Assessment of variations in thermal cycle life data of thermal barrier coated rods
NASA Astrophysics Data System (ADS)
Hendricks, R. C.; McDonald, G.
An analysis of thermal cycle life data for 22 thermal barrier coated (TBC) specimens was conducted. The Zr02-8Y203/NiCrAlY plasma spray coated Rene 41 rods were tested in a Mach 0.3 Jet A/air burner flame. All specimens were subjected to the same coating and subsequent test procedures in an effort to control three parametric groups; material properties, geometry and heat flux. Statistically, the data sample space had a mean of 1330 cycles with a standard deviation of 520 cycles. The data were described by normal or log-normal distributions, but other models could also apply; the sample size must be increased to clearly delineate a statistical failure model. The statistical methods were also applied to adhesive/cohesive strength data for 20 TBC discs of the same composition, with similar results. The sample space had a mean of 9 MPa with a standard deviation of 4.2 MPa.
Assessment of variations in thermal cycle life data of thermal barrier coated rods
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Mcdonald, G.
1981-01-01
An analysis of thermal cycle life data for 22 thermal barrier coated (TBC) specimens was conducted. The Zr02-8Y203/NiCrAlY plasma spray coated Rene 41 rods were tested in a Mach 0.3 Jet A/air burner flame. All specimens were subjected to the same coating and subsequent test procedures in an effort to control three parametric groups; material properties, geometry and heat flux. Statistically, the data sample space had a mean of 1330 cycles with a standard deviation of 520 cycles. The data were described by normal or log-normal distributions, but other models could also apply; the sample size must be increased to clearly delineate a statistical failure model. The statistical methods were also applied to adhesive/cohesive strength data for 20 TBC discs of the same composition, with similar results. The sample space had a mean of 9 MPa with a standard deviation of 4.2 MPa.
Confidence bounds and hypothesis tests for normal distribution coefficients of variation
Steve Verrill; Richard A. Johnson
2007-01-01
For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations.
Variation in the terrestrial isotopic composition and atomic weight of argon
Böhlke, John Karl
2014-01-01
The isotopic composition and atomic weight of argon (Ar) are variable in terrestrial materials. Those variations are a source of uncertainty in the assignment of standard properties for Ar, but they provide useful information in many areas of science. Variations in the stable isotopic composition and atomic weight of Ar are caused by several different processes, including (1) isotope production from other elements by radioactive decay (radiogenic isotopes) or other nuclear transformations (e.g., nucleogenic isotopes), and (2) isotopic fractionation by physical-chemical processes such as diffusion or phase equilibria. Physical-chemical processes cause correlated mass-dependent variations in the Ar isotope-amount ratios (40Ar/36Ar, 38Ar/36Ar), whereas nuclear transformation processes cause non-mass-dependent variations. While atmospheric Ar can serve as an abundant and homogeneous isotopic reference, deviations from the atmospheric isotopic ratios in other Ar occurrences limit the precision with which a standard atomic weight can be given for Ar. Published data indicate variation of Ar atomic weights in normal terrestrial materials between about 39.7931 and 39.9624. The upper bound of this interval is given by the atomic mass of 40Ar, as some samples contain almost pure radiogenic 40Ar. The lower bound is derived from analyses of pitchblende (uranium mineral) containing large amounts of nucleogenic 36Ar and 38Ar. Within this interval, measurements of different isotope ratios (40Ar/36Ar or 38Ar/36Ar) at various levels of precision are widely used for studies in geochronology, water–rock interaction, atmospheric evolution, and other fields.
Braun, T; Dochtermann, S; Krause, E; Schmidt, M; Schorn, K; Hempel, J M
2011-09-01
The present study analyzes the best combination of frequencies for the calculation of mean hearing loss in pure tone threshold audiometry for correlation with hearing loss for numbers in speech audiometry, since the literature describes different calculation variations for plausibility checking in expertise. Three calculation variations, A (250, 500 and 1000 Hz), B (500 and 1000 Hz) and C (500, 1000 and 2000 Hz), were compared. Audiograms in 80 patients with normal hearing, 106 patients with hearing loss and 135 expertise patients were analyzed in a retrospective manner. Differences between mean pure tone audiometry thresholds and hearing loss for numbers were calculated and statistically compared separately for the right and the left ear in the three patient collectives. We found the calculation variation A to be the best combination of frequencies, since it yielded the smallest standard deviations while being statistically different to calculation variations B and C. The 1- and 2.58-fold standard deviation (representing 68.3% and 99.0% of all values) was ±4.6 and ±11.8 dB for calculation variation A in patients with hearing loss, respectively. For plausibility checking in expertise, the mean threshold from the frequencies 250, 500 and 1000 Hz should be compared to the hearing loss for numbers. The common recommendation reported by the literature to doubt plausibility when the difference of these values exceeds ±5 dB is too strict as shown by this study.
Travison, Thomas G.; Vesper, Hubert W.; Orwoll, Eric; Wu, Frederick; Kaufman, Jean Marc; Wang, Ying; Lapauw, Bruno; Fiers, Tom; Matsumoto, Alvin M.
2017-01-01
Background: Reference ranges for testosterone are essential for making a diagnosis of hypogonadism in men. Objective: To establish harmonized reference ranges for total testosterone in men that can be applied across laboratories by cross-calibrating assays to a reference method and standard. Population: The 9054 community-dwelling men in cohort studies in the United States and Europe: Framingham Heart Study; European Male Aging Study; Osteoporotic Fractures in Men Study; and Male Sibling Study of Osteoporosis. Methods: Testosterone concentrations in 100 participants in each of the four cohorts were measured using a reference method at Centers for Disease Control and Prevention (CDC). Generalized additive models and Bland-Altman analyses supported the use of normalizing equations for transformation between cohort-specific and CDC values. Normalizing equations, generated using Passing-Bablok regression, were used to generate harmonized values, which were used to derive standardized, age-specific reference ranges. Results: Harmonization procedure reduced intercohort variation between testosterone measurements in men of similar ages. In healthy nonobese men, 19 to 39 years, harmonized 2.5th, 5th, 50th, 95th, and 97.5th percentile values were 264, 303, 531, 852, and 916 ng/dL, respectively. Age-specific harmonized testosterone concentrations in nonobese men were similar across cohorts and greater than in all men. Conclusion: Harmonized normal range in a healthy nonobese population of European and American men, 19 to 39 years, is 264 to 916 ng/dL. A substantial proportion of intercohort variation in testosterone levels is due to assay differences. These data demonstrate the feasibility of generating harmonized reference ranges for testosterone that can be applied to assays, which have been calibrated to a reference method and calibrator. PMID:28324103
Mineral composition of Atriplex hymenelytra growing in the northern Mojave Desert
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, A.; Romney, E.M.; Hunter, R.B.
1980-01-01
Fifty samples of Atriplex hymenelytra (Torr.) S. Wats. were collected from several different locations in southern Nevada and California to test variability in mineral composition. Only Na, V, P, Ca, Mg, Mn, and Sr in the samples appeared to represent a uniform population resulting in normal curves for frequency distribution. Even so, about 40 percent of the variance for these elements was due to location. All elements differed enough with location so that no element really represented a uniform population. The coefficient of variation for most elements was over 40 percent and one was over 100 percent. The proportion ofmore » variance due to analytical variation averaged 16.2 +- 13.1 percent (standard deviation), that due to location was 43.0 +- 13.4 percent, and that due to variation of plants within location was 40.7 +- 13.0 percent.« less
Normal Databases for the Relative Quantification of Myocardial Perfusion
Rubeaux, Mathieu; Xu, Yuan; Germano, Guido; Berman, Daniel S.; Slomka, Piotr J.
2016-01-01
Purpose of review Myocardial perfusion imaging (MPI) with SPECT is performed clinically worldwide to detect and monitor coronary artery disease (CAD). MPI allows an objective quantification of myocardial perfusion at stress and rest. This established technique relies on normal databases to compare patient scans against reference normal limits. In this review, we aim to introduce the process of MPI quantification with normal databases and describe the associated perfusion quantitative measures that are used. Recent findings New equipment and new software reconstruction algorithms have been introduced which require the development of new normal limits. The appearance and regional count variations of normal MPI scan may differ between these new scanners and standard Anger cameras. Therefore, these new systems may require the determination of new normal limits to achieve optimal accuracy in relative myocardial perfusion quantification. Accurate diagnostic and prognostic results rivaling those obtained by expert readers can be obtained by this widely used technique. Summary Throughout this review, we emphasize the importance of the different normal databases and the need for specific databases relative to distinct imaging procedures. use of appropriate normal limits allows optimal quantification of MPI by taking into account subtle image differences due to the hardware and software used, and the population studied. PMID:28138354
Atomic weights of the elements 2011 (IUPAC Technical Report)
Wieser, Michael E.; Holden, Norman; Coplen, Tyler B.; Böhlke, John K.; Berglund, Michael; Brand, Willi A.; De Bièvre, Paul; Gröning, Manfred; Loss, Robert D.; Meija, Juris; Hirata, Takafumi; Prohaska, Thomas; Schoenberg, Ronny; O'Connor, Glenda; Walczyk, Thomas; Yoneda, Shige; Zhu, Xiang-Kun
2013-01-01
The biennial review of atomic-weight determinations and other cognate data has resulted in changes for the standard atomic weights of five elements. The atomic weight of bromine has changed from 79.904(1) to the interval [79.901, 79.907], germanium from 72.63(1) to 72.630(8), indium from 114.818(3) to 114.818(1), magnesium from 24.3050(6) to the interval [24.304, 24.307], and mercury from 200.59(2) to 200.592(3). For bromine and magnesium, assignment of intervals for the new standard atomic weights reflects the common occurrence of variations in the atomic weights of those elements in normal terrestrial materials.
Shen, Tingting; Ye, Lanhan; Kong, Wenwen; Wang, Wei; Liu, Xiaodan
2018-01-01
Fast detection of toxic metals in crops is important for monitoring pollution and ensuring food safety. In this study, laser-induced breakdown spectroscopy (LIBS) was used to detect the chromium content in rice leaves. We investigated the influence of laser wavelength (532 nm and 1064 nm excitation), along with the variations of delay time, pulse energy, and lens-to-sample distance (LTSD), on the signal (sensitivity and stability) and plasma features (temperature and electron density). With the optimized experimental parameters, univariate analysis was used for quantifying the chromium content, and several preprocessing methods (including background normalization, area normalization, multiplicative scatter correction (MSC) transformation and standardized normal variate (SNV) transformation were used to further improve the analytical performance. The results indicated that 532 nm excitation showed better sensitivity than 1064 nm excitation, with a detection limit around two times lower. However, the prediction accuracy for both excitation wavelengths was similar. The best result, with a correlation coefficient of 0.9849, root-mean-square error of 3.89 mg/kg and detection limit of 2.72 mg/kg, was obtained using the SNV transformed signal (Cr I 425.43 nm) induced by 532 nm excitation. The results indicate the inspiring capability of LIBS for toxic metals detection in plant materials. PMID:29463032
Comparison of storm-time changes of geomagnetic field at ground and at MAGSAT altitudes
NASA Technical Reports Server (NTRS)
Kane, R. P.; Trivedi, N. B.
1981-01-01
Computations concerning variations of the geomagnetic field at MAGSAT altitudes were investigated. Using MAGSAT data for the X, Y, and Z components of the geomagnetic field, a computer conversion to yield the H component was performed. Two methods of determining delta H normalized to a constant geocentric distance R sub 0 = 6800 were investigated, and the utility of elta H at times of magnetic storms was considered. Delta H at a geographical latitude of 0 at dawn and dusk, the standard Dst, and K sub p histograms were plotted and compared. Magnetic anomalies are considered. Examination of data from the majority of the 400 passes of MAGSAT considered show a reasonable delta H versus latitude variation. Discrepancies in values are discussed.
Pasquali, Matias; Serchi, Tommaso; Planchon, Sebastien; Renaut, Jenny
2017-01-01
The two-dimensional difference gel electrophoresis method is a valuable approach for proteomics. The method, using cyanine fluorescent dyes, allows the co-migration of multiple protein samples in the same gel and their simultaneous detection, thus reducing experimental and analytical time. 2D-DIGE, compared to traditional post-staining 2D-PAGE protocols (e.g., colloidal Coomassie or silver nitrate), provides faster and more reliable gel matching, limiting the impact of gel to gel variation, and allows also a good dynamic range for quantitative comparisons. By the use of internal standards, it is possible to normalize for experimental variations in spot intensities and gel patterns. Here we describe the experimental steps we follow in our routine 2D-DIGE procedure that we then apply to multiple biological questions.
Zahedi, Edmond; Sohani, Vahid; Ali, M A Mohd; Chellappan, Kalaivani; Beng, Gan Kok
2015-01-01
The feasibility of a novel system to reliably estimate the normalized central blood pressure (CBPN) from the radial photoplethysmogram (PPG) is investigated. Right-wrist radial blood pressure and left-wrist PPG were simultaneously recorded in five different days. An industry-standard applanation tonometer was employed for recording radial blood pressure. The CBP waveform was amplitude-normalized to determine CBPN. A total of fifteen second-order autoregressive models with exogenous input were investigated using system identification techniques. Among these 15 models, the model producing the lowest coefficient of variation (CV) of the fitness during the five days was selected as the reference model. Results show that the proposed model is able to faithfully reproduce CBPN (mean fitness = 85.2% ± 2.5%) from the radial PPG for all 15 segments during the five recording days. The low CV value of 3.35% suggests a stable model valid for different recording days.
Saturn Orbits Car Making into the Twenty-First Century. A Case Study
1993-04-01
two engine variations of the 1.9 liter four-cylinder aluminum block, a standard 85-horsepower, single overhead camshaft (SOHC) 8-valve and a high...performance, 124-horsepower, dual overhead camshafts (DOHC) 16-valve version. Its optional anti-lock braking system was a safety addition not normally found...Treece, James B. "The Planets May be Perfectly Aligned For Saturn’s Lift-Off." Business Week Oct. 22, 1990: 40. Tree %.e, James B. "War, Recession
New approach to estimating variability in visual field data using an image processing technique.
Crabb, D P; Edgar, D F; Fitzke, F W; McNaught, A I; Wynn, H P
1995-01-01
AIMS--A new framework for evaluating pointwise sensitivity variation in computerised visual field data is demonstrated. METHODS--A measure of local spatial variability (LSV) is generated using an image processing technique. Fifty five eyes from a sample of normal and glaucomatous subjects, examined on the Humphrey field analyser (HFA), were used to illustrate the method. RESULTS--Significant correlation between LSV and conventional estimates--namely, HFA pattern standard deviation and short term fluctuation, were found. CONCLUSION--LSV is not dependent on normals' reference data or repeated threshold determinations, thus potentially reducing test time. Also, the illustrated pointwise maps of LSV could provide a method for identifying areas of fluctuation commonly found in early glaucomatous field loss. PMID:7703196
Copy Number Variations of TBK1 in Australian Patients With Primary Open-Angle Glaucoma
AWADALLA, MONA S.; FINGERT, JOHN H.; ROOS, BENJAMIN E.; CHEN, SIMON; HOLMES, RICHARD; GRAHAM, STUART L.; CHEHADE, MARK; GALANOPOLOUS, ANNA; RIDGE, BRONWYN; SOUZEAU, EMMANUELLE; ZHOU, TIGER; SIGGS, OWEN M.; HEWITT, ALEX W.; MACKEY, DAVID A.; BURDON, KATHRYN P.; CRAIG, JAMIE E.
2015-01-01
PURPOSE To investigate the presence of TBK1 copy number variations in a large, well-characterized Australian cohort of patients with glaucoma comprising both normal-tension glaucoma and high-tension glaucoma cases. DESIGN A retrospective cohort study. METHODS DNA samples from patients with normal-tension glaucoma and high-tension glaucoma and unaffected controls were screened for TBK1 copy number variations using real-time quantitative polymerase chain reaction. Samples with additional copies of the TBK1 gene were further tested using custom comparative genomic hybridization arrays. RESULTS Four out of 334 normal-tension glaucoma cases (1.2%) were found to carry TBK1 copy number variations using quantitative polymerase chain reaction. One extra dose of the TBK1 gene (duplication) was detected in 3 normal-tension glaucoma patients, while 2 extra doses of the gene (triplication) were detected in a fourth normal-tension glaucoma patient. The results were further confirmed by custom comparative genomic hybridization arrays. Further, the TBK1 copy number variation segregated with normal-tension glaucoma in the family members of the probands, showing an autosomal dominant pattern of inheritance. No TBK1 copy number variations were detected in 1045 Australian patients with high-tension glaucoma or in 254 unaffected controls. CONCLUSION We report the presence of TBK1 copy number variations in our Australian normal-tension glaucoma cohort, including the first example of more than 1 extra copy of this gene in glaucoma patients (gene triplication). These results confirm TBK1 to be an important cause of normal-tension glaucoma, but do not suggest common involvement in high-tension glaucoma. PMID:25284765
Annual variation in the atmospheric radon concentration in Japan.
Kobayashi, Yuka; Yasuoka, Yumi; Omori, Yasutaka; Nagahama, Hiroyuki; Sanada, Tetsuya; Muto, Jun; Suzuki, Toshiyuki; Homma, Yoshimi; Ihara, Hayato; Kubota, Kazuhito; Mukai, Takahiro
2015-08-01
Anomalous atmospheric variations in radon related to earthquakes have been observed in hourly exhaust-monitoring data from radioisotope institutes in Japan. The extraction of seismic anomalous radon variations would be greatly aided by understanding the normal pattern of variation in radon concentrations. Using atmospheric daily minimum radon concentration data from five sampling sites, we show that a sinusoidal regression curve can be fitted to the data. In addition, we identify areas where the atmospheric radon variation is significantly affected by the variation in atmospheric turbulence and the onshore-offshore pattern of Asian monsoons. Furthermore, by comparing the sinusoidal regression curve for the normal annual (seasonal) variations at the five sites to the sinusoidal regression curve for a previously published dataset of radon values at the five Japanese prefectures, we can estimate the normal annual variation pattern. By fitting sinusoidal regression curves to the previously published dataset containing sites in all Japanese prefectures, we find that 72% of the Japanese prefectures satisfy the requirements of the sinusoidal regression curve pattern. Using the normal annual variation pattern of atmospheric daily minimum radon concentration data, these prefectures are suitable areas for obtaining anomalous radon variations related to earthquakes. Copyright © 2015 Elsevier Ltd. All rights reserved.
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
2011-01-01
Background Most information on genomic variations and their associations with phenotypes are covered exclusively in scientific publications rather than in structured databases. These texts commonly describe variations using natural language; database identifiers are seldom mentioned. This complicates the retrieval of variations, associated articles, as well as information extraction, e. g. the search for biological implications. To overcome these challenges, procedures to map textual mentions of variations to database identifiers need to be developed. Results This article describes a workflow for normalization of variation mentions, i.e. the association of them to unique database identifiers. Common pitfalls in the interpretation of single nucleotide polymorphism (SNP) mentions are highlighted and discussed. The developed normalization procedure achieves a precision of 98.1 % and a recall of 67.5% for unambiguous association of variation mentions with dbSNP identifiers on a text corpus based on 296 MEDLINE abstracts containing 527 mentions of SNPs. The annotated corpus is freely available at http://www.scai.fraunhofer.de/snp-normalization-corpus.html. Conclusions Comparable approaches usually focus on variations mentioned on the protein sequence and neglect problems for other SNP mentions. The results presented here indicate that normalizing SNPs described on DNA level is more difficult than the normalization of SNPs described on protein level. The challenges associated with normalization are exemplified with ambiguities and errors, which occur in this corpus. PMID:21992066
Li, Xingyu; Plataniotis, Konstantinos N
2015-07-01
In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging condition. Method : Different from existing normalization methods that either address partial cause of color variation or lump them together, our method identifies causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method from the aspects of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness in terms of histological information preservation. As the saturation-weighted statistics proposed in this study generates stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods as the proposed method is the only approach that succeeds to preserve histological information after normalization. The proposed color normalization solution would be useful to mitigate effects of color variation in pathology images on subsequent quantitative analysis.
Glavatskiy, K S
2015-10-28
Validity of local equilibrium has been questioned for non-equilibrium systems which are characterized by delayed response. In particular, for systems with non-zero thermodynamic inertia, the assumption of local equilibrium leads to negative values of the entropy production, which is in contradiction with the second law of thermodynamics. In this paper, we address this question by suggesting a variational formulation of irreversible evolution of a system with non-zero thermodynamic inertia. We introduce the Lagrangian, which depends on the properties of the normal and the so-called "mirror-image" systems. We show that the standard evolution equations, in particular, the Maxwell-Cattaneo-Vernotte equation, can be derived from the variational procedure without going beyond the assumption of local equilibrium. We also argue that the second law of thermodynamics in non-equilibrium should be understood as a consequence of the variational procedure and the property of local equilibrium. For systems with instantaneous response this leads to the standard requirement of the local instantaneous entropy production being always positive. However, if a system is characterized by delayed response, the formulation of the second law of thermodynamics should be altered. In particular, the quantity, which is always positive, is not the instantaneous entropy production, but the entropy production averaged over a proper time interval.
Metabolomic analysis of urine samples by UHPLC-QTOF-MS: Impact of normalization strategies.
Gagnebin, Yoric; Tonoli, David; Lescuyer, Pierre; Ponte, Belen; de Seigneux, Sophie; Martin, Pierre-Yves; Schappler, Julie; Boccard, Julien; Rudaz, Serge
2017-02-22
Among the various biological matrices used in metabolomics, urine is a biofluid of major interest because of its non-invasive collection and its availability in large quantities. However, significant sources of variability in urine metabolomics based on UHPLC-MS are related to the analytical drift and variation of the sample concentration, thus requiring normalization. A sequential normalization strategy was developed to remove these detrimental effects, including: (i) pre-acquisition sample normalization by individual dilution factors to narrow the concentration range and to standardize the analytical conditions, (ii) post-acquisition data normalization by quality control-based robust LOESS signal correction (QC-RLSC) to correct for potential analytical drift, and (iii) post-acquisition data normalization by MS total useful signal (MSTUS) or probabilistic quotient normalization (PQN) to prevent the impact of concentration variability. This generic strategy was performed with urine samples from healthy individuals and was further implemented in the context of a clinical study to detect alterations in urine metabolomic profiles due to kidney failure. In the case of kidney failure, the relation between creatinine/osmolality and the sample concentration is modified, and relying only on these measurements for normalization could be highly detrimental. The sequential normalization strategy was demonstrated to significantly improve patient stratification by decreasing the unwanted variability and thus enhancing data quality. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhmalbaf, Atefe; Srivastava, Viraj; Wang, Na
Weather normalization is a crucial task in several applications related to building energy conservation such as retrofit measurements and energy rating. This paper documents preliminary results found from an effort to determine a set of weather adjustment coefficients that can be used to smooth out impacts of weather on energy use of buildings in 1020 weather location sites available in the U.S. The U.S. Department of Energy (DOE) commercial reference building models are adopted as hypothetical models with standard operations to deliver consistency in modeling. The correlation between building envelop design, HVAC system design and properties for different building typesmore » and the change in heating and cooling energy consumption caused by variations in weather is examined.« less
Fortier, T M; Ashby, N; Bergquist, J C; Delaney, M J; Diddams, S A; Heavner, T P; Hollberg, L; Itano, W M; Jefferts, S R; Kim, K; Levi, F; Lorini, L; Oskay, W H; Parker, T E; Shirley, J; Stalnaker, J E
2007-02-16
We report tests of local position invariance and the variation of fundamental constants from measurements of the frequency ratio of the 282-nm 199Hg+ optical clock transition to the ground state hyperfine splitting in 133Cs. Analysis of the frequency ratio of the two clocks, extending over 6 yr at NIST, is used to place a limit on its fractional variation of <5.8x10(-6) per change in normalized solar gravitational potential. The same frequency ratio is also used to obtain 20-fold improvement over previous limits on the fractional variation of the fine structure constant of |alpha/alpha|<1.3x10(-16) yr-1, assuming invariance of other fundamental constants. Comparisons of our results with those previously reported for the absolute optical frequency measurements in H and 171Yb+ vs other 133Cs standards yield a coupled constraint of -1.5x10(-15)
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and time-series prediction model. The monthly variation of water quality standards has been used to compare statistical mean, median, mode, standard deviation, kurtosis, skewness, coefficient of variation at Yamuna River. Model validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted value and confidence limits. Using auto regressive integrated moving average model, future water quality parameters values have been estimated. It is observed that predictive model is useful at 95 % confidence limits and curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, water temperature (WT); leptokurtic for chemical oxygen demand, biochemical oxygen demand. Also, it is observed that predicted series is close to the original series which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization /United States Environmental Protection Agency, and thus water is not fit for drinking, agriculture and industrial use.
Confidence bounds and hypothesis tests for normal distribution coefficients of variation
Steve P. Verrill; Richard A. Johnson
2007-01-01
For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations. To develop these confidence bounds and test, we first establish that estimators based on Newton steps from n-...
Coplen, Tyler B.; Wassenaar, Leonard I
2015-01-01
Although laser absorption spectrometry (LAS) instrumentation is easy to use, its incorporation into laboratory operations is not easy, owing to extensive offline manipulation of comma-separated-values files for outlier detection, between-sample memory correction, nonlinearity (δ-variation with water amount) correction, drift correction, normalization to VSMOW-SLAP scales, and difficulty in performing long-term QA/QC audits. METHODS: A Microsoft Access relational-database application, LIMS (Laboratory Information Management System) for Lasers 2015, was developed. It automates LAS data corrections and manages clients, projects, samples, instrument-sample lists, and triple-isotope (δ(17) O, δ(18) O, and δ(2) H values) instrumental data for liquid-water samples. It enables users to (1) graphically evaluate sample injections for variable water yields and high isotope-delta variance; (2) correct for between-sample carryover, instrumental drift, and δ nonlinearity; and (3) normalize final results to VSMOW-SLAP scales. RESULTS: Cost-free LIMS for Lasers 2015 enables users to obtain improved δ(17) O, δ(18) O, and δ(2) H values with liquid-water LAS instruments, even those with under-performing syringes. For example, LAS δ(2) HVSMOW measurements of USGS50 Lake Kyoga (Uganda) water using an under-performing syringe having ±10 % variation in water concentration gave +31.7 ± 1.6 ‰ (2-σ standard deviation), compared with the reference value of +32.8 ± 0.4 ‰, after correction for variation in δ value with water concentration, between-sample memory, and normalization to the VSMOW-SLAP scale. CONCLUSIONS: LIMS for Lasers 2015 enables users to create systematic, well-founded instrument templates, import δ(2) H, δ(17) O, and δ(18) O results, evaluate performance with automatic graphical plots, correct for δ nonlinearity due to variable water concentration, correct for between-sample memory, adjust for drift, perform VSMOW-SLAP normalization, and perform long-term QA/QC audits easily.
Detailed prospective peer review in a community radiation oncology clinic.
Mitchell, James D; Chesnut, Thomas J; Eastham, David V; Demandante, Carlo N; Hoopes, David J
In 2012, we instituted detailed prospective peer review of new cases. We present the outcomes of peer review on patient management and time required for peer review. Peer review rounds were held 3 to 4 days weekly and required 2 physicians to review pertinent information from the electronic medical record and treatment planning system. Eight aspects were reviewed for each case: 1) workup and staging; 2) treatment intent and prescription; 3) position, immobilization, and simulation; 4) motion assessment and management; 5) target contours; 6) normal tissue contours; 7) target dosimetry; and 8) normal tissue dosimetry. Cases were marked as, "Meets standard of care," "Variation," or "Major deviation." Changes in treatment plan were noted. As our process evolved, we recorded the time spent reviewing each case. From 2012 to 2014, we collected peer review data on 442 of 465 (95%) radiation therapy patients treated in our hospital-based clinic. Overall, 91 (20.6%) of the cases were marked as having a variation, and 3 (0.7%) as major deviation. Forty-two (9.5%) of the cases were altered after peer review. An overall peer review score of "Variation" or "Major deviation" was highly associated with a change in treatment plan (P < .01). Changes in target contours were recommended in 10% of cases. Gastrointestinal cases were significantly associated with a change in treatment plan after peer review. Indicators on position, immobilization, simulation, target contours, target dosimetry, motion management, normal tissue contours, and normal tissue dosimetry were significantly associated with a change in treatment plan. The mean time spent on each case was 7 minutes. Prospective peer review is feasible in a community radiation oncology practice. Our process led to changes in 9.5% of cases. Peer review should focus on technical factors such as target contours and dosimetry. Peer review required 7 minutes per case. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Reeves, Anthony P.; Yankelevitz, David F.; Henschke, Claudia I.
2010-03-01
The purpose of this work was to retrospectively investigate the variation of standard indices of pulmonary emphysema from helical computed tomographic (CT) scans as related to inspiration differences over a 1 year interval and determine the strength of the relationship between these measures in a large cohort. 626 patients that had 2 scans taken at an interval of 9 months to 15 months (μ: 381 days, σ: 31 days) were selected for this work. All scans were acquired at a 1.25mm slice thickness using a low dose protocol. For each scan, the emphysema index (EI), fractal dimension (FD), mean lung density (MLD), and 15th percentile of the histogram (HIST) were computed. The absolute and relative changes for each measure were computed and the empirical 95% confidence interval was reported both in non-normalized and normalized scales. Spearman correlation coefficients are computed between the relative change in each measure and relative change in inspiration between each scan-pair, as well as between each pair-wise combination of the four measures. EI varied on a range of -10.5 to 10.5 on a non-normalized scale and -15 to 15 on a normalized scale, with FD and MLD showing slightly larger but comparable spreads, and HIST having a much larger variation. MLD was found to show the strongest correlation to inspiration change (r=0.85, p<0.001), and EI, FD, and HIST to have moderately strong correlation (r = 0.61-0.74, p<0.001). Finally, HIST showed very strong correlation to EI (r = 0.92, p<0.001), while FD showed the least strong relationship to EI (r = 0.82, p<0.001). This work shows that emphysema index and fractal dimension have the least variability overall of the commonly used measures of emphysema and that they offer the most unique quantification of emphysema relative to each other.
Nur, Nurhayati Mohd; Dawal, Siti Zawiah Md; Dahari, Mahidzal; Sanusi, Junedah
2015-01-01
[Purpose] This study investigated the variations in muscle fatigue, time to fatigue, and maximum task duration at different levels of production standard time. [Methods] Twenty subjects performed repetitive tasks at three different levels of production standard time corresponding to “normal”, “hard” and “very hard”. Surface electromyography was used to measure the muscle activity. [Results] The results showed that muscle activity was significantly affected by the production standard time level. Muscle activity increased twice in percentage as the production standard time shifted from hard to very hard (6.9% vs. 12.9%). The muscle activity increased over time, indicating muscle fatigue. The muscle fatigue rate increased for the harder production standard time (Hard: 0.105; Very hard: 0.115), which indicated the associated higher risk of work-related musculoskeletal disorders. Muscle fatigue was also found to occur earlier for hard and very hard production standard times. [Conclusion] It is recommended that the maximum task duration should not exceed 5.6, 2.9, and 2.2 hours for normal, hard, and very hard production standard times, respectively, in order to maintain work performance and minimize the risk of work-related musculoskeletal disorders. PMID:26311974
Geometric constrained variational calculus I: Piecewise smooth extremals
NASA Astrophysics Data System (ADS)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2015-05-01
A geometric setup for constrained variational calculus is presented. The analysis deals with the study of the extremals of an action functional defined on piecewise differentiable curves, subject to differentiable, non-holonomic constraints. Special attention is paid to the tensorial aspects of the theory. As far as the kinematical foundations are concerned, a fully covariant scheme is developed through the introduction of the concept of infinitesimal control. The standard classification of the extremals into normal and abnormal ones is discussed, pointing out the existence of an algebraic algorithm assigning to each admissible curve a corresponding abnormality index, related to the co-rank of a suitable linear map. Attention is then shifted to the study of the first variation of the action functional. The analysis includes a revisitation of Pontryagin's equations and of the Lagrange multipliers method, as well as a reformulation of Pontryagin's algorithm in Hamiltonian terms. The analysis is completed by a general result, concerning the existence of finite deformations with fixed endpoints.
Chen, Hong; Wang, Wen-jun; Chen, Yu-zhen; Mai, Mei-qi; Ouyang, Neng-yong; Chen, Jing-hua; Tuo, Ping
2010-05-01
To investigate the impacts of body mass index (BMI) and age on in vitro fertilization-embryo transfer (IVF) and intracytoplasmic sperm injection (ICSI) treatment in infertile patients without polycystic ovary syndrome (PCOS). A retrospective study of 1426 patients during Jun. 2001 - Nov. 2009 was carried out. Multiple regression was used to analyze the effects of BMI (low weight: BMI < 18.5 kg/m(2), normal weight: BMI 18.5 - 23.99 kg/m(2) and over weight-obesity: BMI ≥ 24 kg/m(2)) and age (young: 20 - 34 years old, eld: 35 - 45 years old) on controlled ovarian stimulation (COH) [including: dose and duration of Gn, E2 level on day of human chorionic gonadotropin (HCG) administration, number of oocytes collected and full-grown follicles], number of fertilization, cleavage, two-pronucleus, normal embryos and cryopreserved embryos and clinical pregnancy outcome. (1) Gn dose for the patients whose age were 35 and the above, had a positive correlation with age (P < 0.001), 12.70% of the total variation of Gn dose was related to age (standardized partial regression coefficient was 0.343). (2) Estradiol level on day of HCG administration had a negative correlation with BMI in overweight-obesity patients, and so were the patients whose age were 35 and above (P value respectively lower than 0.037 and 0.018). 0.80% of the total variation of estradiol (HCG day) is related to age and overweight-obesity while age took greater proportion (standardized partial regression coefficients were 0.066 and 0.058 respectively). (3) For older patients, age appeared to have negative relationships with duration of Gn and number of oocytes collected, full-grown follicles, fertilization, cleavage, two-pronucleus, normal embryos and cryopreserved embryos (P < 0.05). (4) Compared to young-normal weight patients, the odds ratio of pregnancy in eld-low weight and eld-overweight-obesity patients were 0.482 and 0.529 (P < 0.05) respectively. Age, but not the BMI, had significant effects on IVF/ICSI treatment. It seems that factors as losing weight before IVF or ICSI treatment effective in reducing the dose of Gn.
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
Assessing the value of diagnostic imaging: the role of perception
NASA Astrophysics Data System (ADS)
Potchen, E. J.; Cooper, Thomas G.
2000-04-01
The value of diagnostic radiology rests in its ability to provide information. Information is defined as a reduction in randomness. Quality improvement in any system requires diminution in the variation in its performance. The major variation in performance of the system of diagnostic radiology occurs in observer performance and in the communication of information from the observer to someone who will apply that information to the benefit of the patient. The ability to provide information can be determined by observer performance studies using a receiver-operating characteristic (ROC) curve analysis. The amount of information provided by each observer can be measured in terms of the uncertainty they reduce. Using a set of standardized radiographs, some normal and some abnormal, sorting them randomly, and then asking an observer to redistribute them according to their probability of normality can measure the difference in the value added by different observers. By applying this observer performance measure, we have been able to characterize individual radiologists, groups of radiologists, and regions of the United States in their ability to add value in chest radiology. The use of these technologies in health care may improve upon the contribution of diagnostic imaging.
Neocortical malformation as consequence of nonadaptive regulation of neuronogenetic sequence
NASA Technical Reports Server (NTRS)
Caviness, V. S. Jr; Takahashi, T.; Nowakowski, R. S.
2000-01-01
Variations in the structure of the neocortex induced by single gene mutations may be extreme or subtle. They differ from variations in neocortical structure encountered across and within species in that these "normal" structural variations are adaptive (both structurally and behaviorally), whereas those associated with disorders of development are not. Here we propose that they also differ in principle in that they represent disruptions of molecular mechanisms that are not normally regulatory to variations in the histogenetic sequence. We propose an algorithm for the operation of the neuronogenetic sequence in relation to the overall neocortical histogenetic sequence and highlight the restriction point of the G1 phase of the cell cycle as the master regulatory control point for normal coordinate structural variation across species and importantly within species. From considerations based on the anatomic evidence from neocortical malformation in humans, we illustrate in principle how this overall sequence appears to be disrupted by molecular biological linkages operating principally outside the control mechanisms responsible for the normal structural variation of the neocortex. MRDD Research Reviews 6:22-33, 2000. Copyright 2000 Wiley-Liss, Inc.
Rønjom, Marianne F; Brink, Carsten; Lorenzen, Ebbe L; Hegedüs, Laszlo; Johansen, Jørgen
2015-01-01
To examine the variations of risk-estimates of radiation-induced hypothyroidism (HT) from our previously developed normal tissue complication probability (NTCP) model in patients with head and neck squamous cell carcinoma (HNSCC) in relation to variability of delineation of the thyroid gland. In a previous study for development of an NTCP model for HT, the thyroid gland was delineated in 246 treatment plans of patients with HNSCC. Fifty of these plans were randomly chosen for re-delineation for a study of the intra- and inter-observer variability of thyroid volume, Dmean and estimated risk of HT. Bland-Altman plots were used for assessment of the systematic (mean) and random [standard deviation (SD)] variability of the three parameters, and a method for displaying the spatial variation in delineation differences was developed. Intra-observer variability resulted in a mean difference in thyroid volume and Dmean of 0.4 cm(3) (SD ± 1.6) and -0.5 Gy (SD ± 1.0), respectively, and 0.3 cm(3) (SD ± 1.8) and 0.0 Gy (SD ± 1.3) for inter-observer variability. The corresponding mean differences of NTCP values for radiation-induced HT due to intra- and inter-observer variations were insignificantly small, -0.4% (SD ± 6.0) and -0.7% (SD ± 4.8), respectively, but as the SDs show, for some patients the difference in estimated NTCP was large. For the entire study population, the variation in predicted risk of radiation-induced HT in head and neck cancer was small and our NTCP model was robust against observer variations in delineation of the thyroid gland. However, for the individual patient, there may be large differences in estimated risk which calls for precise delineation of the thyroid gland to obtain correct dose and NTCP estimates for optimized treatment planning in the individual patient.
Biological and Cultural Diversity: The Legacy of Darwin for Development.
ERIC Educational Resources Information Center
Scarr, Sandra
1993-01-01
Posits that an evolutionary perspective can unite the study of the typical development for and individual variation within a species and that environments within the normal range for a species are required for species-normal development. Individual differences in children reared in normal environments arise primarily from genetic variation and…
Eye Dominance Predicts fMRI Signals in Human Retinotopic Cortex
Mendola, Janine D.; Conner, Ian P.
2009-01-01
There have been many attempts to define eye dominance in normal subjects, but limited consensus exists, and relevant physiological data is scarce. In this study, we consider two different behavioral methods for assignment of eye dominance, and how well they predict fMRI signals evoked by monocular stimulation. Sighting eye dominance was assessed with two standard tests, the Porta Test, and a ‘hole in hand’ variation of the Miles Test. Acuity dominance was tested with a standard eye chart and with a computerized test of grating acuity. We found limited agreement between the sighting and acuity methods for assigning dominance in our individual subjects. We then compared the fMRI response generated by dominant eye stimulation to that generated by non-dominant eye, according to both methods, in 7 normal subjects. The stimulus consisted of a high contrast hemifield stimulus alternating with no stimulus in a blocked paradigm. In separate scans, we used standard techniques to label the borders of visual areas V1, V2, V3, VP, V4, V3A, and MT. These regions of interest (ROIs) were used to analyze each visual area separately. We found that percent change in fMRI BOLD signal was stronger for the dominant eye as defined by the acuity method, and this effect was significant for areas located in the ventral occipital territory (V1v, V2v, VP, V4). In contrast, assigning dominance based on sighting produced no significant interocular BOLD differences. We conclude that interocular BOLD differences in normal subjects exist, and may be predicted by acuity measures. PMID:17194544
Avitan, Tehila; Sanders, Ari; Brain, Ursula; Rurak, Dan; Oberlander, Tim F; Lim, Ken
2018-05-01
To determine if there are changes in maternal uterine blood flow, fetal brain blood flow, fetal heart rate variability, and umbilical blood flow between morning (AM) and afternoon (PM) in healthy, uncomplicated pregnancies. In this prospective study, 68 uncomplicated singleton pregnancies (mean 35 + 0.7 weeks gestation) underwent a standard observational protocol at both 08:00 (AM) and 13:30 (PM) of the same day. This protocol included Doppler measurements of uterine, umbilical, and fetal middle cerebral artery (MCA) volume flow parameters (flow, HR, peak systolic velocity [PSV], PI, and RI) followed by computerized cardiotocography. Standard descriptive statistics, χ 2 and t tests were used where appropriate. P < .05 was considered significant. A significant increase in MCA flow and MCA PSV was observed in the PM compared to the AM. This was accompanied by a fall in MCA resistance. Higher umbilical artery resistance indices were also observed in the PM compared to AM. In contrast, fetal heart rate characteristics, maternal uterine artery Doppler flow and resistance indices did not vary significantly between the AM and PM. In normal pregnancies, variations in fetal cerebral and umbilical blood flow parameters were observed between AM and PM independent of other fetal movements or baseline fetal heart rate. In contrast, uterine flow parameters remained stable across the day. These findings may have implications for the use of serial Doppler parameters used to guide clinical management in high-risk pregnancies. © 2017 Wiley Periodicals, Inc.
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.
Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P
2016-04-15
We propose four methods for finding local subspaces in large spectral libraries. The proposed four methods include (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps and (d) archetypal analysis methods. Then evaluate prediction accuracies for global and subspaces calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for the subspace. Root mean square error of prediction was used to evaluate predictive performance of subspace and global models. The root mean square error of prediction was computed using a one-third-holdout validation set. Effect of pretreating spectra with different methods was tested for 1st and 2nd derivative Savitzky-Golay algorithm, multiplicative scatter correction, standard normal variate and standard normal variate followed by detrending methods. In summary, the results show that global models outperformed the subspace models. We, therefore, conclude that global models are more accurate than the local models except in few cases. For instance, sand and clay root mean square error values from local models from archetypal analysis method were 50% poorer than the global models except for subspace models obtained using multiplicative scatter corrected spectra with which were 12% better. However, the subspace approach provides novel methods for discovering data pattern that may exist in large spectral libraries.
[Nuclear medicine diagnosis of pulmonary capillary protein leakage].
Creutzig, H; Sturm, J A; Schober, O; Nerlich, M L; Kant, C J
1984-10-01
Pulmonary extravascular albumin extravasation in patients with adult respiratory distress syndrome can be quantified with radionuclide techniques. While imaging procedures with a computerized gamma camera will allow reproducible ROIs, this will be the main limitation in nonimaging measurements with small scintillation probes. Repeated positioning by one operator results in a mean spatial variation of position of about 2 cm and a variation in count rate of 25%. For the estimation of PCPL the small probes must be positioned under scintigraphic control. Under these conditions the results of both techniques are identical. The upper limit of normal was estimated to be 1 x E-5/sec. The standard deviation of abnormal measurements was about 10%. The pulmonary capillary protein leakage can be quantified by radionuclide techniques with good accuracy, using the combination of imaging and nonimaging techniques.
NASA Technical Reports Server (NTRS)
Kurtenbach, F. J.
1979-01-01
The technique which relies on afterburner duct pressure measurements and empirical corrections to an ideal one dimensional flow analysis to determine thrust is presented. A comparison of the calculated and facility measured thrust values is reported. The simplified model with the engine manufacturer's gas generator model are compared. The evaluation was conducted over a range of Mach numbers from 0.80 to 2.00 and at altitudes from 4020 meters to 15,240 meters. The effects of variations in inlet total temperature from standard day conditions were explored. Engine conditions were varied from those normally scheduled for flight. The technique was found to be accurate to a twice standard deviation of 2.89 percent, with accuracy a strong function of afterburner duct pressure difference.
Serum Creatinine: Not So Simple!
Delanaye, Pierre; Cavalier, Etienne; Pottel, Hans
2017-01-01
Measuring serum creatinine is cheap and commonly done in daily practice. However, interpretation of serum creatinine results is not always easy. In this review, we will briefly remind the physiological limitations of serum creatinine due notably to its tubular secretion and the influence of muscular mass or protein intake on its concentration. We mainly focus on the analytical limitations of serum creatinine, insisting on important concept such as reference intervals, standardization (and IDMS traceability), analytical interferences, analytical coefficient of variation (CV), biological CV and critical difference. Because the relationship between serum creatinine and glomerular filtration rate is hyperbolic, all these CVs will impact not only the precision of serum creatinine but still more the precision of different creatinine-based equations, especially in low or normal-low creatinine levels (or high or normal-high glomerular filtration rate range). © 2017 S. Karger AG, Basel.
siMacro: A Fast and Easy Data Processing Tool for Cell-Based Genomewide siRNA Screens.
Singh, Nitin Kumar; Seo, Bo Yeun; Vidyasagar, Mathukumalli; White, Michael A; Kim, Hyun Seok
2013-03-01
Growing numbers of studies employ cell line-based systematic short interfering RNA (siRNA) screens to study gene functions and to identify drug targets. As multiple sources of variations that are unique to siRNA screens exist, there is a growing demand for a computational tool that generates normalized values and standardized scores. However, only a few tools have been available so far with limited usability. Here, we present siMacro, a fast and easy-to-use Microsoft Office Excel-based tool with a graphic user interface, designed to process single-condition or two-condition synthetic screen datasets. siMacro normalizes position and batch effects, censors outlier samples, and calculates Z-scores and robust Z-scores, with a spreadsheet output of >120,000 samples in under 1 minute.
siMacro: A Fast and Easy Data Processing Tool for Cell-Based Genomewide siRNA Screens
Singh, Nitin Kumar; Seo, Bo Yeun; Vidyasagar, Mathukumalli; White, Michael A.
2013-01-01
Growing numbers of studies employ cell line-based systematic short interfering RNA (siRNA) screens to study gene functions and to identify drug targets. As multiple sources of variations that are unique to siRNA screens exist, there is a growing demand for a computational tool that generates normalized values and standardized scores. However, only a few tools have been available so far with limited usability. Here, we present siMacro, a fast and easy-to-use Microsoft Office Excel-based tool with a graphic user interface, designed to process single-condition or two-condition synthetic screen datasets. siMacro normalizes position and batch effects, censors outlier samples, and calculates Z-scores and robust Z-scores, with a spreadsheet output of >120,000 samples in under 1 minute. PMID:23613684
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glavatskiy, K. S.
Validity of local equilibrium has been questioned for non-equilibrium systems which are characterized by delayed response. In particular, for systems with non-zero thermodynamic inertia, the assumption of local equilibrium leads to negative values of the entropy production, which is in contradiction with the second law of thermodynamics. In this paper, we address this question by suggesting a variational formulation of irreversible evolution of a system with non-zero thermodynamic inertia. We introduce the Lagrangian, which depends on the properties of the normal and the so-called “mirror-image” systems. We show that the standard evolution equations, in particular, the Maxwell-Cattaneo-Vernotte equation, can bemore » derived from the variational procedure without going beyond the assumption of local equilibrium. We also argue that the second law of thermodynamics in non-equilibrium should be understood as a consequence of the variational procedure and the property of local equilibrium. For systems with instantaneous response this leads to the standard requirement of the local instantaneous entropy production being always positive. However, if a system is characterized by delayed response, the formulation of the second law of thermodynamics should be altered. In particular, the quantity, which is always positive, is not the instantaneous entropy production, but the entropy production averaged over a proper time interval.« less
NASA Astrophysics Data System (ADS)
Ytsma, Cai R.; Dyar, M. Darby
2018-01-01
Hydrogen (H) is a critical element to measure on the surface of Mars because its presence in mineral structures is indicative of past hydrous conditions. The Curiosity rover uses the laser-induced breakdown spectrometer (LIBS) on the ChemCam instrument to analyze rocks for their H emission signal at 656.6 nm, from which H can be quantified. Previous LIBS calibrations for H used small data sets measured on standards and/or manufactured mixtures of hydrous minerals and rocks and applied univariate regression to spectra normalized in a variety of ways. However, matrix effects common to LIBS make these calibrations of limited usefulness when applied to the broad range of compositions on the Martian surface. In this study, 198 naturally-occurring hydrous geological samples covering a broad range of bulk compositions with directly-measured H content are used to create more robust prediction models for measuring H in LIBS data acquired under Mars conditions. Both univariate and multivariate prediction models, including partial least square (PLS) and the least absolute shrinkage and selection operator (Lasso), are compared using several different methods for normalization of H peak intensities. Data from the ChemLIBS Mars-analog spectrometer at Mount Holyoke College are compared against spectra from the same samples acquired using a ChemCam-like instrument at Los Alamos National Laboratory and the ChemCam instrument on Mars. Results show that all current normalization and data preprocessing variations for quantifying H result in models with statistically indistinguishable prediction errors (accuracies) ca. ± 1.5 weight percent (wt%) H2O, limiting the applications of LIBS in these implementations for geological studies. This error is too large to allow distinctions among the most common hydrous phases (basalts, amphiboles, micas) to be made, though some clays (e.g., chlorites with ≈ 12 wt% H2O, smectites with 15-20 wt% H2O) and hydrated phases (e.g., gypsum with ≈ 20 wt% H2O) may be differentiated from lower-H phases within the known errors. Analyses of the H emission peak in Curiosity calibration targets and rock and soil targets on the Martian surface suggest that shot-to-shot variations of the ChemCam laser on Mars lead to variations in intensity that are comparable to those represented by the breadth of H standards tested in this study.
Brief psycho-education affects circadian variability in nicotine craving during cessation.
Nosen, Elizabeth; Woody, Sheila R
2013-09-01
Nicotine cravings are a key target of smoking cessation interventions. Cravings demonstrate circadian variation during abstinence, often peaking during the morning and evening hours. Although some research has also shown diurnal variation in the efficacy of nicotine replacement medications, little research has examined how brief psychosocial interventions affect temporal patterns of craving during abstinence. The present study examined the impact of two brief psycho-education interventions on circadian variations in cravings during a 24-h period. 176 adult smokers interested in quitting participated in two lab sessions. During the first session, participants received (a) mindfulness psycho-education that encouraged acceptance of cravings as a normal, tolerable part of quitting that people should not expect to perfectly control, (b) standard cessation psycho-education, or (c) no psycho-education. Half the sample initiated a cessation attempt the following day. Dependent variables were assessed using ecological momentary assessment (24-h of monitoring, immediately after first lab session) and questionnaires four days later. Partially consistent with hypotheses, both forms of psycho-education were associated with differential diurnal variation in cravings during cessation. Relative to those receiving no psycho-education, standard smoking cessation psycho-education decreased morning cravings. Psycho-education encouraging acceptance of cravings was associated with lower craving in both the morning and evening, albeit only among successfully abstinent smokers. Results demonstrate that brief non-pharmacological interventions can affect circadian craving patterns during smoking cessation. Further investigation of mechanisms of change and of the impact of psycho-education on cessation outcomes is warranted. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Comparison of three commercially available ektacytometers with different shearing geometries.
Baskurt, Oguz K; Hardeman, M R; Uyuklu, Mehmet; Ulker, Pinar; Cengiz, Melike; Nemeth, Norbert; Shin, Sehyun; Alexy, Tamas; Meiselman, Herbert J
2009-01-01
In December 2008, the International Society for Clinical Hemorheology organized a workshop to evaluate and compare three ektacytometer instruments for measuring deformability of red blood cells (RBC): LORCA (Laser-assisted Optical Rotational Cell Analyzer, RR Mechatronics, Hoorn, The Netherlands), Rheodyn SSD (Myrenne GmbH, Roetgen, Germany) and RheoScan-D (RheoMeditech, Seoul, Korea). Intra-assay reproducibility and biological variation were determined using normal RBC, and cells with reduced deformability (i.e., 0.001-0.02% glutaradehyde (GA), 48 degrees C heat treatment) were employed as either the only RBC present or as a sub-population. Standardized difference values were used as measure of the power to detect differences between normal and treated cells. Salient results include: (1) All instruments had intra-assay variations below 5% for shear stress (SS)>1 Pa but a sharp increase was found for Rheodyn SSD and RheoScan-D at lower SS; (2) Biological variation was similar and markedly increased for SS<3-5 Pa; (3) All instruments detected GA-treated RBC with maximal power at 1-3 Pa, the presence of 10% or 40% GA-modified cells, and the effects of heat treatment. It is concluded that the LORCA, Rheodyn SSD and RheoScan-D all have acceptable precision and power for detecting reduced RBC deformability due to GA treatment or heat treatment, and that the SS range selected for the measurement of deformability is an important determinant of an instrument's power.
Fuentes, Ramón; Engelke, Wilfried; Flores, Tania; Navarro, Pablo; Borie, Eduardo; Curiqueo, Aldo; Salamanca, Carlos
2015-01-01
Under normal conditions, the oral cavity presents a perfect system of equilibrium between teeth, soft tissues and tongue. The equilibrium of soft tissues forms a closed capsular matrix, generating differences with the atmospheric environment. This difference is known as intraoral pressure. Negative intraoral pressure is fundamental to the stabilization of the soft palate and tongue, reducing neuromuscular activity for the permeability of the respiratory tract. Thus, the aim of this study was to describe the variations of intraoral pressure of the sub-palatal space (SPS) under different physiological conditions and biofunctional phases. A case series was conducted with 20 individuals aged between 18 and 25. The intraoral pressures were measured through a system of cannulae connected to a digital pressure meter in the SPS during seven biofunctional phases. Descriptive statistics were used based on the mean and standard deviation. The data recorded pressure variations under physiological conditions, reaching 65 mbar as the intraoral peak in forced inspiration. In the swallowing phase, peaks reached -91.9 mbar. No pressure variations were recorded in terms of atmospheric changes with the mouth open and semi-open. The data obtained during the swallowing and forced inspiration phases indicated forced lingual activity. In the swallowing phase, the adequate position of the tongue creates negative intraoral pressure, which represents a fundamental mechanism for the physical stabilization of the soft palate. This information could contribute to subsequent research into the treatment of primary roncopathies.
Youn, Sang Woong; Na, Jung Im; Choi, Sun Young; Huh, Chang Hun; Park, Kyoung Chan
2005-08-01
Facial sebum secretions are known to change under various circumstances. Facial skin types have been categorized as oily, normal, dry, and combination types. However, these have been evaluated subjectively by individuals to date, and no objective accepted standard measurement method exists. The combination skin type is most common, but its definition is vaguer than the definitions of the other skin types. We measured facial sebum secretions with Sebumeter. Sebum secretions were measured at five sites of the face seasonally for a year, in the same volunteers. Using the data obtained we developed a set of rules to define the combination skin type. Regional differences in sebum secretion were confirmed. Sebum secretions on forehead, nose, and chin were higher than on both cheeks. Summer was found to be the highest sebum-secreting season, and seasonal variations were found in the T- and U-zones. A mismatch of skin type in the T- and U-zones in more than two seasons appears to be close to subjective ratings of what is described as the 'combination' skin type. We showed that the face shows definitive regional and seasonal variations in sebum secretion. To define the combination skin type, seasonal variations in sebum secretion should be considered in addition to regional variations.
SU-F-BRE-14: Uncertainty Analysis for Dose Measurements Using OSLD NanoDots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kry, S; Alvarez, P; Stingo, F
2014-06-15
Purpose: Optically stimulated luminescent dosimeters (OSLD) are an increasingly popular dosimeter for research and clinical applications. It is also used by the Radiological Physics Center for remote auditing of machine output. In this work we robustly calculated the reproducibility and uncertainty of the OSLD nanoDot. Methods: For the RPC dose calculation, raw readings are corrected for depletion, element sensitivity, fading, linearity, and energy. System calibration is determined for the experimental OSLD irradiated at different institutions by using OSLD irradiated by the RPC under reference conditions (i.e., standards): 1 Gy in a Cobalt beam. The intra-dot and inter-dot reproducibilities (coefficient ofmore » variation) were determined from the history of RPC readings of these standards. The standard deviation of the corrected OSLD signal was then calculated analytically using a recursive formalism that did not rely on the normality assumption of the underlying uncertainties, or on any type of mathematical approximation. This analytical uncertainty was compared to that empirically estimated from >45,000 RPC beam audits. Results: The intra-dot variability was found to be 0.59%, with only a small variation between readers. Inter-dot variability was found to be 0.85%. The uncertainty in each of the individual correction factors was empirically determined. When the raw counts from each OSLD were adjusted for the appropriate correction factors, the analytically determined coefficient of variation was 1.8% over a range of institutional irradiation conditions that are seen at the RPC. This is reasonably consistent with the empirical observations of the RPC, where the coefficient of variation of the measured beam outputs is 1.6% (photons) and 1.9% (electrons). Conclusion: OSLD nanoDots provide sufficiently good precision for a wide range of applications, including the RPC remote monitoring program for megavoltage beams. This work was supported by PHS grant CA10953 awarded by the NIH (DHHS)« less
NASA Technical Reports Server (NTRS)
Ellis, David L.
2007-01-01
Room temperature tensile testing of Chemically Pure (CP) Titanium Grade 2 was conducted for as-received commercially produced sheet and following thermal exposure at 550 and 650 K for times up to 5,000 h. No significant changes in microstructure or failure mechanism were observed. A statistical analysis of the data was performed. Small statistical differences were found, but all properties were well above minimum values for CP Ti Grade 2 as defined by ASTM standards and likely would fall within normal variation of the material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zubkova, S.R.; Chernavskaya, N.M.
1959-06-11
It was found that a single lethal dose (1000 r) changes the cholinesterase activity in the brain, liver, and blood serum. After 5 hr and 45 min the cholinesterase activity in tissues drops from the normal level (15.9% in blood serum, 20.6% in the brain, and 18.4% in the liver). After three days the activity changes in various tissues: in the liver it continues to drop, in the brain it rises but does not reach the standard level, and it increases sharply in the blood serum. (R.V.J.)
Creation of three-dimensional craniofacial standards from CBCT images
NASA Astrophysics Data System (ADS)
Subramanyan, Krishna; Palomo, Martin; Hans, Mark
2006-03-01
Low-dose three-dimensional Cone Beam Computed Tomography (CBCT) is becoming increasingly popular in the clinical practice of dental medicine. Two-dimensional Bolton Standards of dentofacial development are routinely used to identify deviations from normal craniofacial anatomy. With the advent of CBCT three dimensional imaging, we propose a set of methods to extend these 2D Bolton Standards to anatomically correct surface based 3D standards to allow analysis of morphometric changes seen in craniofacial complex. To create 3D surface standards, we have implemented series of steps. 1) Converting bi-plane 2D tracings into set of splines 2) Converting the 2D splines curves from bi-plane projection into 3D space curves 3) Creating labeled template of facial and skeletal shapes and 4) Creating 3D average surface Bolton standards. We have used datasets from patients scanned with Hitachi MercuRay CBCT scanner providing high resolution and isotropic CT volume images, digitized Bolton Standards from age 3 to 18 years of lateral and frontal male, female and average tracings and converted them into facial and skeletal 3D space curves. This new 3D standard will help in assessing shape variations due to aging in young population and provide reference to correct facial anomalies in dental medicine.
Chest Radiograph Findings in Childhood Pneumonia Cases From the Multisite PERCH Study
Deloria Knoll, Maria; Baggett, Henry C.; Brooks, W. Abdullah; Feikin, Daniel R.; Hammitt, Laura L.; Howie, Stephen R. C.; Kotloff, Karen L.; Levine, Orin S.; Madhi, Shabir A.; Murdoch, David R.; Scott, J. Anthony G.; Thea, Donald M.; Awori, Juliet O.; Barger-Kamate, Breanna; Chipeta, James; DeLuca, Andrea N.; Diallo, Mahamadou; Driscoll, Amanda J.; Ebruke, Bernard E.; Higdon, Melissa M.; Jahan, Yasmin; Karron, Ruth A.; Mahomed, Nasreen; Moore, David P.; Nahar, Kamrun; Naorat, Sathapana; Ominde, Micah Silaba; Park, Daniel E.; Prosperi, Christine; wa Somwe, Somwe; Thamthitiwat, Somsak; Zaman, Syed M. A.; Zeger, Scott L.; O’Brien, Katherine L.; O’Brien, Katherine L.; Levine, Orin S.; Knoll, Maria Deloria; Feikin, Daniel R.; DeLuca, Andrea N.; Driscoll, Amanda J.; Fancourt, Nicholas; Fu, Wei; Hammitt, Laura L.; Higdon, Melissa M.; Kagucia, E. Wangeci; Karron, Ruth A.; Li, Mengying; Park, Daniel E.; Prosperi, Christine; Wu, Zhenke; Zeger, Scott L.; Watson, Nora L.; Crawley, Jane; Murdoch, David R.; Brooks, W. Abdullah; Endtz, Hubert P.; Zaman, Khalequ; Goswami, Doli; Hossain, Lokman; Jahan, Yasmin; Ashraf, Hasan; Howie, Stephen R. C.; Ebruke, Bernard E.; Antonio, Martin; McLellan, Jessica; Machuka, Eunice; Shamsul, Arifin; Zaman, Syed M.A.; Mackenzie, Grant; Scott, J. Anthony G.; Awori, Juliet O.; Morpeth, Susan C.; Kamau, Alice; Kazungu, Sidi; Ominde, Micah Silaba; Kotloff, Karen L.; Tapia, Milagritos D.; Sow, Samba O.; Sylla, Mamadou; Tamboura, Boubou; Onwuchekwa, Uma; Kourouma, Nana; Toure, Aliou; Madhi, Shabir A.; Moore, David P.; Adrian, Peter V.; Baillie, Vicky L.; Kuwanda, Locadiah; Mudau, Azwifarwi; Groome, Michelle J.; Mahomed, Nasreen; Baggett, Henry C.; Thamthitiwat, Somsak; Maloney, Susan A.; Bunthi, Charatdao; Rhodes, Julia; Sawatwong, Pongpun; Akarasewi, Pasakorn; Thea, Donald M.; Mwananyanda, Lawrence; Chipeta, James; Seidenberg, Phil; Mwansa, James; wa Somwe, Somwe; Kwenda, Geoffrey
2017-01-01
Abstract Background. Chest radiographs (CXRs) are frequently used to assess pneumonia cases. Variations in CXR appearances between epidemiological settings and their correlation with clinical signs are not well documented. Methods. The Pneumonia Etiology Research for Child Health project enrolled 4232 cases of hospitalized World Health Organization (WHO)–defined severe and very severe pneumonia from 9 sites in 7 countries (Bangladesh, the Gambia, Kenya, Mali, South Africa, Thailand, and Zambia). At admission, each case underwent a standardized assessment of clinical signs and pneumonia risk factors by trained health personnel, and a CXR was taken that was interpreted using the standardized WHO methodology. CXRs were categorized as abnormal (consolidation and/or other infiltrate), normal, or uninterpretable. Results. CXRs were interpretable in 3587 (85%) cases, of which 1935 (54%) were abnormal (site range, 35%–64%). Cases with abnormal CXRs were more likely than those with normal CXRs to have hypoxemia (45% vs 26%), crackles (69% vs 62%), tachypnea (85% vs 80%), or fever (20% vs 16%) and less likely to have wheeze (30% vs 38%; all P < .05). CXR consolidation was associated with a higher case fatality ratio at 30-day follow-up (13.5%) compared to other infiltrate (4.7%) or normal (4.9%) CXRs. Conclusions. Clinically diagnosed pneumonia cases with abnormal CXRs were more likely to have signs typically associated with pneumonia. However, CXR-normal cases were common, and clinical signs considered indicative of pneumonia were present in substantial proportions of these cases. CXR-consolidation cases represent a group with an increased likelihood of death at 30 days post-discharge. PMID:28575361
NASA Astrophysics Data System (ADS)
Oudry, Jennifer; Lynch, Ted; Vappou, Jonathan; Sandrin, Laurent; Miette, Véronique
2014-10-01
Elastographic techniques used in addition to imaging techniques (ultrasound, resonance magnetic or optical) provide new clinical information on the pathological state of soft tissues. However, system-dependent variation in elastographic measurements may limit the clinical utility of these measurements by introducing uncertainty into the measurement. This work is aimed at showing differences in the evaluation of the elastic properties of phantoms performed by four different techniques: quasi-static compression, dynamic mechanical analysis, vibration-controlled transient elastography and hyper-frequency viscoelastic spectroscopy. Four Zerdine® gel materials were tested and formulated to yield a Young’s modulus over the range of normal and cirrhotic liver stiffnesses. The Young’s modulus and the shear wave speed obtained with each technique were compared. Results suggest a bias in elastic property measurement which varies with systems and highlight the difficulty in finding a reference method to determine and assess the elastic properties of tissue-mimicking materials. Additional studies are needed to determine the source of this variation, and control for them so that accurate, reproducible reference standards can be made for the absolute measurement of soft tissue elasticity.
Oudry, Jennifer; Lynch, Ted; Vappou, Jonathan; Sandrin, Laurent; Miette, Véronique
2014-10-07
Elastographic techniques used in addition to imaging techniques (ultrasound, resonance magnetic or optical) provide new clinical information on the pathological state of soft tissues. However, system-dependent variation in elastographic measurements may limit the clinical utility of these measurements by introducing uncertainty into the measurement. This work is aimed at showing differences in the evaluation of the elastic properties of phantoms performed by four different techniques: quasi-static compression, dynamic mechanical analysis, vibration-controlled transient elastography and hyper-frequency viscoelastic spectroscopy. Four Zerdine® gel materials were tested and formulated to yield a Young's modulus over the range of normal and cirrhotic liver stiffnesses. The Young's modulus and the shear wave speed obtained with each technique were compared. Results suggest a bias in elastic property measurement which varies with systems and highlight the difficulty in finding a reference method to determine and assess the elastic properties of tissue-mimicking materials. Additional studies are needed to determine the source of this variation, and control for them so that accurate, reproducible reference standards can be made for the absolute measurement of soft tissue elasticity.
Particle image velocimetry measurements of Mach 3 turbulent boundary layers at low Reynolds numbers
NASA Astrophysics Data System (ADS)
Brooks, J. M.; Gupta, A. K.; Smith, M. S.; Marineau, E. C.
2018-05-01
Particle image velocimetry (PIV) measurements of Mach 3 turbulent boundary layers (TBL) have been performed under low Reynolds number conditions, Re_τ =200{-}1000, typical of direct numerical simulations (DNS). Three reservoir pressures and three measurement locations create an overlap in parameter space at one research facility. This allows us to assess the effects of Reynolds number, particle response and boundary layer thickness separate from facility specific experimental apparatus or methods. The Morkovin-scaled streamwise fluctuating velocity profiles agree well with published experimental and numerical data and show a small standard deviation among the nine test conditions. The wall-normal fluctuating velocity profiles show larger variations which appears to be due to particle lag. Prior to the current study, no detailed experimental study characterizing the effect of Stokes number on attenuating wall-normal fluctuating velocities has been performed. A linear variation is found between the Stokes number ( St) and the relative error in wall-normal fluctuating velocity magnitude (compared to hot wire anemometry data from Klebanoff, Characteristics of Turbulence in a Boundary Layer with Zero Pressure Gradient. Tech. Rep. NACA-TR-1247, National Advisory Committee for Aeronautics, Springfield, Virginia, 1955). The relative error ranges from about 10% for St=0.26 to over 50% for St=1.06. Particle lag and spatial resolution are shown to act as low-pass filters on the fluctuating velocity power spectral densities which limit the measurable energy content. The wall-normal component appears more susceptible to these effects due to the flatter spectrum profile which indicates that there is additional energy at higher wave numbers not measured by PIV. The upstream inclination and spatial correlation extent of coherent turbulent structures agree well with published data including those using krypton tagging velocimetry (KTV) performed at the same facility.
Determination of the Characteristic Values and Variation Ratio for Sensitive Soils
NASA Astrophysics Data System (ADS)
Milutinovici, Emilia; Mihailescu, Daniel
2017-12-01
In 2008, Romania adopted Eurocode 7, part II, regarding the geotechnical investigations - called SR EN1997-2/2008. However a previous standard already existed in Romania, by using the mathematical statistics in determination of the calculation values, the requirements of Eurocode can be taken into consideration. The setting of characteristics and calculations values of the geotechnical parameters was finally issued in Romania at the end of 2010 at standard NP122-2010 - “Norm regarding determination of the characteristic and calculation values of the geotechnical parameters”. This standard allows using of data already known from analysed area and setting the calculation values of geotechnical parameters. However, this possibility exist, it is not performed easy in Romania, considering that there isn’t any centralized system of information coming from the geotechnical studies performed for various objectives of private or national interests. Every company performing geotechnical studies tries to organize its own data base, but unfortunately none of them use existing centralized data. When determining the values of calculation, an important role is played by the variation ratio of the characteristic values of a geotechnical parameter. There are recommendations in the mentioned Norm, that could be taken into account, regarding the limits of the variation ratio, but these values are mentioned for Quaternary age soils only, normally consolidated, with a content of organic material < 5%. All of the difficult soils are excluded from the Norm even if they exist and affect the construction foundations on more than a half of the Romania’s surface. A type of difficult soil, extremely widespread on the Romania’s territory, is the contractile soil (with high swelling and contractions, very sensitive to the seasonal moisture variations). This type of material covers and influences the construction foundations in one over third of Romania’s territory. This work is proposing to be a step in determination of limits of the variation ratios for the contractile soils category, for the most used geotechnical parameters in the Romanian engineering practice, namely: the index of consistency and the cohesion.
Liu, Fei; Ye, Lanhan; Peng, Jiyu; Song, Kunlin; Shen, Tingting; Zhang, Chu; He, Yong
2018-02-27
Fast detection of heavy metals is very important for ensuring the quality and safety of crops. Laser-induced breakdown spectroscopy (LIBS), coupled with uni- and multivariate analysis, was applied for quantitative analysis of copper in three kinds of rice (Jiangsu rice, regular rice, and Simiao rice). For univariate analysis, three pre-processing methods were applied to reduce fluctuations, including background normalization, the internal standard method, and the standard normal variate (SNV). Linear regression models showed a strong correlation between spectral intensity and Cu content, with an R 2 more than 0.97. The limit of detection (LOD) was around 5 ppm, lower than the tolerance limit of copper in foods. For multivariate analysis, partial least squares regression (PLSR) showed its advantage in extracting effective information for prediction, and its sensitivity reached 1.95 ppm, while support vector machine regression (SVMR) performed better in both calibration and prediction sets, where R c 2 and R p 2 reached 0.9979 and 0.9879, respectively. This study showed that LIBS could be considered as a constructive tool for the quantification of copper contamination in rice.
Ye, Lanhan; Song, Kunlin; Shen, Tingting
2018-01-01
Fast detection of heavy metals is very important for ensuring the quality and safety of crops. Laser-induced breakdown spectroscopy (LIBS), coupled with uni- and multivariate analysis, was applied for quantitative analysis of copper in three kinds of rice (Jiangsu rice, regular rice, and Simiao rice). For univariate analysis, three pre-processing methods were applied to reduce fluctuations, including background normalization, the internal standard method, and the standard normal variate (SNV). Linear regression models showed a strong correlation between spectral intensity and Cu content, with an R2 more than 0.97. The limit of detection (LOD) was around 5 ppm, lower than the tolerance limit of copper in foods. For multivariate analysis, partial least squares regression (PLSR) showed its advantage in extracting effective information for prediction, and its sensitivity reached 1.95 ppm, while support vector machine regression (SVMR) performed better in both calibration and prediction sets, where Rc2 and Rp2 reached 0.9979 and 0.9879, respectively. This study showed that LIBS could be considered as a constructive tool for the quantification of copper contamination in rice. PMID:29495445
NASA Astrophysics Data System (ADS)
Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang
2017-10-01
Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics and study climate change. However, several sensors with wide spatial coverage and high observation frequency are usually designed to have large field of view (FOV), which cause variations in the sun-targetsensor geometry in time-series reflectance data. In this study, on the basis of semiempirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing image. To evaluate the proposed model, bidirectional reflectance under different canopy growth conditions simulated by Discrete Anisotropic Radiative Transfer (DART) model were used. The semi-empirical model was first fitted by using all simulated bidirectional reflectance. Experimental result showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated value. Then, MODIS time-series reflectance data was normalized to a common sun-target-sensor geometry by the proposed model. The experimental results showed the proposed model yielded good fits between the observed and estimated values. The noise-like fluctuations in time-series reflectance data was also reduced after the sun-target-sensor normalization process.
Devonshire, Alison S; Elaswarapu, Ramnath; Foy, Carole A
2010-11-24
Gene expression profiling is an important approach for detecting diagnostic and prognostic biomarkers, and predicting drug safety. The development of a wide range of technologies and platforms for measuring mRNA expression makes the evaluation and standardization of transcriptomic data problematic due to differences in protocols, data processing and analysis methods. Thus, universal RNA standards, such as those developed by the External RNA Controls Consortium (ERCC), are proposed to aid validation of research findings from diverse platforms such as microarrays and RT-qPCR, and play a role in quality control (QC) processes as transcriptomic profiling becomes more commonplace in the clinical setting. Panels of ERCC RNA standards were constructed in order to test the utility of these reference materials (RMs) for performance characterization of two selected gene expression platforms, and for discrimination of biomarker profiles between groups. The linear range, limits of detection and reproducibility of microarray and RT-qPCR measurements were evaluated using panels of RNA standards. Transcripts of low abundance (≤ 10 copies/ng total RNA) showed more than double the technical variability compared to higher copy number transcripts on both platforms. Microarray profiling of two simulated 'normal' and 'disease' panels, each consisting of eight different RNA standards, yielded robust discrimination between the panels and between standards with varying fold change ratios, showing no systematic effects due to different labelling and hybridization runs. Also, comparison of microarray and RT-qPCR data for fold changes showed agreement for the two platforms. ERCC RNA standards provide a generic means of evaluating different aspects of platform performance, and can provide information on the technical variation associated with quantification of biomarkers expressed at different levels of physiological abundance. Distinct panels of standards serve as an ideal quality control tool kit for determining the accuracy of fold change cut-off threshold and the impact of experimentally-derived noise on the discrimination of normal and disease profiles.
Kennedy, M F; Tutton, P J; Barkla, D H
1985-09-15
Circadian variations in cell proliferation in normal tissues have been recognised for many years but comparable phenomena in neoplastic tissues appear not to have been reported. Adenomas and carcinomas were induced in mouse colon by injection of dimethylhydrazine (DMH) and cell proliferation in these tumors was measured stathmokinetically. In normal intestine cell proliferation is fastest at night whereas in both adenomas and carcinomas it was found to be slower at night than in the middle of the day. Chemical sympathectomy was found to abolish the circadian variation in tumor cell proliferation.
New reference materials for nitrogen-isotope-ratio measurements
Böhlke, John Karl; Gwinn, C. J.; Coplen, T. B.
1993-01-01
Three new reference materials were manufactured for calibration of relative stable nitrogen-isotope-ratio measurements: USGS25 (ammonium sulfate) d15N' = -30 per mil; USGS26 (ammonium sulfate) d15N' = +54 per mil; USGS32 (potassium nitrate) d15N' = +180 per mil, where d15N', relative to atmospheric nitrogen, is an approximate value subject to change following interlaboratory comparisons. These materials are isotopically homogeneous in aliquots at least as small as 10 µmol N2 (or about 1-2 mg of salt). The new reference materials greatly extend the range of d15N values of internationally distributed standards, and they allow normalization of d15N measurements over almost the full range of known natural isotope variation on Earth. The methods used to produce these materials may be adapted to produce homogeneous local laboratory standards for routine use.
A systematic evaluation of normalization methods in quantitative label-free proteomics.
Välikangas, Tommi; Suomi, Tomi; Elo, Laura L
2018-01-01
To date, mass spectrometry (MS) data remain inherently biased as a result of reasons ranging from sample handling to differences caused by the instrumentation. Normalization is the process that aims to account for the bias and make samples more comparable. The selection of a proper normalization method is a pivotal task for the reliability of the downstream analysis and results. Many normalization methods commonly used in proteomics have been adapted from the DNA microarray techniques. Previous studies comparing normalization methods in proteomics have focused mainly on intragroup variation. In this study, several popular and widely used normalization methods representing different strategies in normalization are evaluated using three spike-in and one experimental mouse label-free proteomic data sets. The normalization methods are evaluated in terms of their ability to reduce variation between technical replicates, their effect on differential expression analysis and their effect on the estimation of logarithmic fold changes. Additionally, we examined whether normalizing the whole data globally or in segments for the differential expression analysis has an effect on the performance of the normalization methods. We found that variance stabilization normalization (Vsn) reduced variation the most between technical replicates in all examined data sets. Vsn also performed consistently well in the differential expression analysis. Linear regression normalization and local regression normalization performed also systematically well. Finally, we discuss the choice of a normalization method and some qualities of a suitable normalization method in the light of the results of our evaluation. © The Author 2016. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Tian, Lunfu; Wang, Lili; Gao, Wei; Weng, Xiaodong; Liu, Jianhui; Zou, Deshuang; Dai, Yichun; Huang, Shuke
2018-03-01
For the quantitative analysis of the principal elements in lead-antimony-tin alloys, directly X-ray fluorescence (XRF) method using solid metal disks introduces considerable errors due to the microstructure inhomogeneity. To solve this problem, an aqueous solution XRF method is proposed for determining major amounts of Sb, Sn, Pb in lead-based bearing alloys. The alloy samples were dissolved by a mixture of nitric acid and tartaric acid to eliminated the effects of microstructure of these alloys on the XRF analysis. Rh Compton scattering was used as internal standard for Sb and Sn, and Bi was added as internal standard for Pb, to correct for matrix effects, instrumental and operational variations. High-purity lead, antimony and tin were used to prepare synthetic standards. Using these standards, calibration curves were constructed for the three elements after optimizing the spectrometer parameters. The method has been successfully applied to the analysis of lead-based bearing alloys and is more rapid than classical titration methods normally used. The determination results are consistent with certified values or those obtained by titrations.
NASA Astrophysics Data System (ADS)
Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto
2013-08-01
In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.
Detection limit used for early warning in public health surveillance.
Kobari, Tsuyoshi; Iwaki, Kazuo; Nagashima, Tomomi; Ishii, Fumiyoshi; Hayashi, Yuzuru; Yajima, Takehiko
2009-06-01
A theory of detection limit, developed in analytical chemistry, is applied to public health surveillance to detect an outbreak of national emergencies such as natural disaster and bioterrorism. In this investigation, the influenza epidemic around the Tokyo area from 2003 to 2006 is taken as a model of normal and large-scale epidemics. The detection limit of the normal epidemic is used as a threshold with a specified level of significance to identify a sign of the abnormal epidemic among the daily variation in anti-influenza drug sales at community pharmacies. While auto-correlation of data is often an obstacle to an unbiased estimator of standard deviation involved in the detection limit, the analytical theory (FUMI) can successfully treat the auto-correlation of the drug sales in the same way as the auto-correlation appearing as 1/f noise in many analytical instruments.
Integrating the ECG power-line interference removal methods with rule-based system.
Kumaravel, N; Senthil, A; Sridhar, K S; Nithiyanandam, N
1995-01-01
The power-line frequency interference in electrocardiographic signals is eliminated to enhance the signal characteristics for diagnosis. The power-line frequency normally varies +/- 1.5 Hz from its standard value of 50 Hz. In the present work, the performances of the linear FIR filter, Wave digital filter (WDF) and adaptive filter for the power-line frequency variations from 48.5 to 51.5 Hz in steps of 0.5 Hz are studied. The advantage of the LMS adaptive filter in the removal of power-line frequency interference even if the frequency of interference varies by +/- 1.5 Hz from its normal value of 50 Hz over other fixed frequency filters is very well justified. A novel method of integrating rule-based system approach with linear FIR filter and also with Wave digital filter are proposed. The performances of Rule-based FIR filter and Rule-based Wave digital filter are compared with the LMS adaptive filter.
Rolling Bearing Life Prediction-Past, Present, and Future
NASA Technical Reports Server (NTRS)
Zaretsky, E V; Poplawski, J. V.; Miller, C. R.
2000-01-01
Comparisons were made between the life prediction formulas of Lundberg and Palmgren, Ioannides and Harris, and Zaretsky and full-scale ball and roller bearing life data. The effect of Weibull slope on bearing life prediction was determined. Life factors are proposed to adjust the respective life formulas to the normalized statistical life distribution of each bearing type. The Lundberg-Palmgren method resulted in the most conservative life predictions compared to Ioannides and Harris, and Zaretsky methods which produced statistically similar results. Roller profile can have significant effects on bearing life prediction results. Roller edge loading can reduce life by as much as 98 percent. The resultant predicted life not only depends on the life equation used but on the Weibull slope assumed, the least variation occurring with the Zaretsky equation. The load-life exponent p of 10/3 used in the American National Standards Institute (ANSI)/American Bearing Manufacturers Association (ABMA)/International Organization for Standardization (ISO) standards is inconsistent with the majority roller bearings designed and used today.
Investigations of magnesium, histamine and immunoglobulins dynamics in acute urticaria.
Mureşan, D; Oană, A; Nicolae, I; Alecu, M; Moşescu, L; Benea, V; Flueraş, M
1990-01-01
In 42 urticaria patients, magnesium, histamine and IgE were dosed. Magnesium, IgE and histamine variations were followed in urticaria evolution, during acute phase and clinical remission. We noticed magnesium, histamine, IgE values variations depending on disease evolution and applied therapeutic scheme. Therefore: At disease starting point, histamine presented 3.5 times higher values than the normal ones. The value decreases following a curve which tends to reach normal values during clinical remission. At disease starting point, magnesium presented values under the inferior limit of the normal, 0.5 m mol/L respectively, as a mean. The value increases towards the normal limit during clinical remission. Immunoglobulins E follow a similar curve to histamine one, presenting 1,250 U/L values at the starting point, that, under medication, influence decrease between normal limits (800 U/L), during clinical remission. Analyzing the variations of biochemical parameters, the authors emphasize magnesium substitution treatment in urticaria.
Fingert, John H.; Robin, Alan L.; Scheetz, Todd E.; Kwon, Young H.; Liebmann, Jeffrey M.; Ritch, Robert; Alward, Wallace L.M.
2016-01-01
Purpose To investigate the role of TANK-binding kinase 1 (TBK1) gene copy-number variations (ie, gene duplications and triplications) in the pathophysiology of various open-angle glaucomas. Methods In previous studies, we discovered that copy-number variations in the TBK1 gene are associated with normal-tension glaucoma. Here, we investigated the prevalence of copy-number variations in cohorts of patients with other open-angle glaucomas—juvenile-onset open-angle glaucoma (n=30), pigmentary glaucoma (n=209), exfoliation glaucoma (n=225), and steroid-induced glaucoma (n=79)—using a quantitative polymerase chain reaction assay. Results No TBK1 gene copy-number variations were detected in patients with juvenile-onset open-angle glaucoma, pigmentary glaucoma, or steroid-induced glaucoma. A TBK1 gene duplication was detected in one (0.44%) of the 225 exfoliation glaucoma patients. Conclusions TBK1 gene copy-number variations (gene duplications and triplications) have been previously associated with normal-tension glaucoma. An exploration of other open-angle glaucomas detected a TBK1 copy-number variation in a patient with exfoliation glaucoma, which is the first example of a TBK1 mutation in a glaucoma patient with a diagnosis other than normal-tension glaucoma. A broader phenotypic range may be associated with TBK1 copy-number variations, although mutations in this gene are most often detected in patients with normal-tension glaucoma. PMID:27881886
Fingert, John H; Robin, Alan L; Scheetz, Todd E; Kwon, Young H; Liebmann, Jeffrey M; Ritch, Robert; Alward, Wallace L M
2016-08-01
To investigate the role of TANK-binding kinase 1 ( TBK1 ) gene copy-number variations (ie, gene duplications and triplications) in the pathophysiology of various open-angle glaucomas. In previous studies, we discovered that copy-number variations in the TBK1 gene are associated with normal-tension glaucoma. Here, we investigated the prevalence of copy-number variations in cohorts of patients with other open-angle glaucomas-juvenile-onset open-angle glaucoma (n=30), pigmentary glaucoma (n=209), exfoliation glaucoma (n=225), and steroid-induced glaucoma (n=79)-using a quantitative polymerase chain reaction assay. No TBK1 gene copy-number variations were detected in patients with juvenile-onset open-angle glaucoma, pigmentary glaucoma, or steroid-induced glaucoma. A TBK1 gene duplication was detected in one (0.44%) of the 225 exfoliation glaucoma patients. TBK1 gene copy-number variations (gene duplications and triplications) have been previously associated with normal-tension glaucoma. An exploration of other open-angle glaucomas detected a TBK1 copy-number variation in a patient with exfoliation glaucoma, which is the first example of a TBK1 mutation in a glaucoma patient with a diagnosis other than normal-tension glaucoma. A broader phenotypic range may be associated with TBK1 copy-number variations, although mutations in this gene are most often detected in patients with normal-tension glaucoma.
Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael
2017-09-01
The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Zhang, Shengwei; Arfanakis, Konstantinos
2012-01-01
Purpose To investigate the effect of standardized and study-specific human brain diffusion tensor templates on the accuracy of spatial normalization, without ignoring the important roles of data quality and registration algorithm effectiveness. Materials and Methods Two groups of diffusion tensor imaging (DTI) datasets, with and without visible artifacts, were normalized to two standardized diffusion tensor templates (IIT2, ICBM81) as well as study-specific templates, using three registration approaches. The accuracy of inter-subject spatial normalization was compared across templates, using the most effective registration technique for each template and group of data. Results It was demonstrated that, for DTI data with visible artifacts, the study-specific template resulted in significantly higher spatial normalization accuracy than standardized templates. However, for data without visible artifacts, the study-specific template and the standardized template of higher quality (IIT2) resulted in similar normalization accuracy. Conclusion For DTI data with visible artifacts, a carefully constructed study-specific template may achieve higher normalization accuracy than that of standardized templates. However, as DTI data quality improves, a high-quality standardized template may be more advantageous than a study-specific template, since in addition to high normalization accuracy, it provides a standard reference across studies, as well as automated localization/segmentation when accompanied by anatomical labels. PMID:23034880
A comparison of vowel normalization procedures for language variation research
NASA Astrophysics Data System (ADS)
Adank, Patti; Smits, Roel; van Hout, Roeland
2004-11-01
An evaluation of vowel normalization procedures for the purpose of studying language variation is presented. The procedures were compared on how effectively they (a) preserve phonemic information, (b) preserve information about the talker's regional background (or sociolinguistic information), and (c) minimize anatomical/physiological variation in acoustic representations of vowels. Recordings were made for 80 female talkers and 80 male talkers of Dutch. These talkers were stratified according to their gender and regional background. The normalization procedures were applied to measurements of the fundamental frequency and the first three formant frequencies for a large set of vowel tokens. The normalization procedures were evaluated through statistical pattern analysis. The results show that normalization procedures that use information across multiple vowels (``vowel-extrinsic'' information) to normalize a single vowel token performed better than those that include only information contained in the vowel token itself (``vowel-intrinsic'' information). Furthermore, the results show that normalization procedures that operate on individual formants performed better than those that use information across multiple formants (e.g., ``formant-extrinsic'' F2-F1). .
A comparison of vowel normalization procedures for language variation research.
Adank, Patti; Smits, Roel; van Hout, Roeland
2004-11-01
An evaluation of vowel normalization procedures for the purpose of studying language variation is presented. The procedures were compared on how effectively they (a) preserve phonemic information, (b) preserve information about the talker's regional background (or sociolinguistic information), and (c) minimize anatomical/physiological variation in acoustic representations of vowels. Recordings were made for 80 female talkers and 80 male talkers of Dutch. These talkers were stratified according to their gender and regional background. The normalization procedures were applied to measurements of the fundamental frequency and the first three formant frequencies for a large set of vowel tokens. The normalization procedures were evaluated through statistical pattern analysis. The results show that normalization procedures that use information across multiple vowels ("vowel-extrinsic" information) to normalize a single vowel token performed better than those that include only information contained in the vowel token itself ("vowel-intrinsic" information). Furthermore, the results show that normalization procedures that operate on individual formants performed better than those that use information across multiple formants (e.g., "formant-extrinsic" F2-F1).
The impact of signal normalization on seizure detection using line length features.
Logesparan, Lojini; Rodriguez-Villegas, Esther; Casson, Alexander J
2015-10-01
Accurate automated seizure detection remains a desirable but elusive target for many neural monitoring systems. While much attention has been given to the different feature extractions that can be used to highlight seizure activity in the EEG, very little formal attention has been given to the normalization that these features are routinely paired with. This normalization is essential in patient-independent algorithms to correct for broad-level differences in the EEG amplitude between people, and in patient-dependent algorithms to correct for amplitude variations over time. It is crucial, however, that the normalization used does not have a detrimental effect on the seizure detection process. This paper presents the first formal investigation into the impact of signal normalization techniques on seizure discrimination performance when using the line length feature to emphasize seizure activity. Comparing five normalization methods, based upon the mean, median, standard deviation, signal peak and signal range, we demonstrate differences in seizure detection accuracy (assessed as the area under a sensitivity-specificity ROC curve) of up to 52 %. This is despite the same analysis feature being used in all cases. Further, changes in performance of up to 22 % are present depending on whether the normalization is applied to the raw EEG itself or directly to the line length feature. Our results highlight the median decaying memory as the best current approach for providing normalization when using line length features, and they quantify the under-appreciated challenge of providing signal normalization that does not impair seizure detection algorithm performance.
Coplen, Tyler B.; Wassenaar, Leonard I
2015-01-01
RationaleAlthough laser absorption spectrometry (LAS) instrumentation is easy to use, its incorporation into laboratory operations is not easy, owing to extensive offline manipulation of comma-separated-values files for outlier detection, between-sample memory correction, nonlinearity (δ-variation with water amount) correction, drift correction, normalization to VSMOW-SLAP scales, and difficulty in performing long-term QA/QC audits.MethodsA Microsoft Access relational-database application, LIMS (Laboratory Information Management System) for Lasers 2015, was developed. It automates LAS data corrections and manages clients, projects, samples, instrument-sample lists, and triple-isotope (δ17O, δ18O, and δ2H values) instrumental data for liquid-water samples. It enables users to (1) graphically evaluate sample injections for variable water yields and high isotope-delta variance; (2) correct for between-sample carryover, instrumental drift, and δ nonlinearity; and (3) normalize final results to VSMOW-SLAP scales.ResultsCost-free LIMS for Lasers 2015 enables users to obtain improved δ17O, δ18O, and δ2H values with liquid-water LAS instruments, even those with under-performing syringes. For example, LAS δ2HVSMOW measurements of USGS50 Lake Kyoga (Uganda) water using an under-performing syringe having ±10 % variation in water concentration gave +31.7 ± 1.6 ‰ (2-σ standard deviation), compared with the reference value of +32.8 ± 0.4 ‰, after correction for variation in δ value with water concentration, between-sample memory, and normalization to the VSMOW-SLAP scale.ConclusionsLIMS for Lasers 2015 enables users to create systematic, well-founded instrument templates, import δ2H, δ17O, and δ18O results, evaluate performance with automatic graphical plots, correct for δ nonlinearity due to variable water concentration, correct for between-sample memory, adjust for drift, perform VSMOW-SLAP normalization, and perform long-term QA/QC audits easily. Published in 2015. This article is a U.S. Government work and is in the public domain in the USA.
Removing inter-subject technical variability in magnetic resonance imaging studies.
Fortin, Jean-Philippe; Sweeney, Elizabeth M; Muschelli, John; Crainiceanu, Ciprian M; Shinohara, Russell T
2016-05-15
Magnetic resonance imaging (MRI) intensities are acquired in arbitrary units, making scans non-comparable across sites and between subjects. Intensity normalization is a first step for the improvement of comparability of the images across subjects. However, we show that unwanted inter-scan variability associated with imaging site, scanner effect, and other technical artifacts is still present after standard intensity normalization in large multi-site neuroimaging studies. We propose RAVEL (Removal of Artificial Voxel Effect by Linear regression), a tool to remove residual technical variability after intensity normalization. As proposed by SVA and RUV [Leek and Storey, 2007, 2008, Gagnon-Bartsch and Speed, 2012], two batch effect correction tools largely used in genomics, we decompose the voxel intensities of images registered to a template into a biological component and an unwanted variation component. The unwanted variation component is estimated from a control region obtained from the cerebrospinal fluid (CSF), where intensities are known to be unassociated with disease status and other clinical covariates. We perform a singular value decomposition (SVD) of the control voxels to estimate factors of unwanted variation. We then estimate the unwanted factors using linear regression for every voxel of the brain and take the residuals as the RAVEL-corrected intensities. We assess the performance of RAVEL using T1-weighted (T1-w) images from more than 900 subjects with Alzheimer's disease (AD) and mild cognitive impairment (MCI), as well as healthy controls from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. We compare RAVEL to two intensity-normalization-only methods: histogram matching and White Stripe. We show that RAVEL performs best at improving the replicability of the brain regions that are empirically found to be most associated with AD, and that these regions are significantly more present in structures impacted by AD (hippocampus, amygdala, parahippocampal gyrus, enthorinal area, and fornix stria terminals). In addition, we show that the RAVEL-corrected intensities have the best performance in distinguishing between MCI subjects and healthy subjects using the mean hippocampal intensity (AUC=67%), a marked improvement compared to results from intensity normalization alone (AUC=63% and 59% for histogram matching and White Stripe, respectively). RAVEL is promising for many other imaging modalities. Published by Elsevier Inc.
Data preprocessing methods of FT-NIR spectral data for the classification cooking oil
NASA Astrophysics Data System (ADS)
Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli
2014-12-01
This recent work describes the data pre-processing method of FT-NIR spectroscopy datasets of cooking oil and its quality parameters with chemometrics method. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometrics modelling. Hence, this work is dedicated to investigate the utility and effectiveness of pre-processing algorithms namely row scaling, column scaling and single scaling process with Standard Normal Variate (SNV). The combinations of these scaling methods have impact on exploratory analysis and classification via Principle Component Analysis plot (PCA). The samples were divided into palm oil and non-palm cooking oil. The classification model was build using FT-NIR cooking oil spectra datasets in absorbance mode at the range of 4000cm-1-14000cm-1. Savitzky Golay derivative was applied before developing the classification model. Then, the data was separated into two sets which were training set and test set by using Duplex method. The number of each class was kept equal to 2/3 of the class that has the minimum number of sample. Then, the sample was employed t-statistic as variable selection method in order to select which variable is significant towards the classification models. The evaluation of data pre-processing were looking at value of modified silhouette width (mSW), PCA and also Percentage Correctly Classified (%CC). The results show that different data processing strategies resulting to substantial amount of model performances quality. The effects of several data pre-processing i.e. row scaling, column standardisation and single scaling process with Standard Normal Variate indicated by mSW and %CC. At two PCs model, all five classifier gave high %CC except Quadratic Distance Analysis.
Hassan, Quazi K.; Bourque, Charles P.-A.; Meng, Fan-Rui; Cox, Roger M.
2007-01-01
In this paper we develop a method to estimate land-surface water content in a mostly forest-dominated (humid) and topographically-varied region of eastern Canada. The approach is centered on a temperature-vegetation wetness index (TVWI) that uses standard 8-day MODIS-based image composites of land surface temperature (TS) and surface reflectance as primary input. In an attempt to improve estimates of TVWI in high elevation areas, terrain-induced variations in TS are removed by applying grid, digital elevation model-based calculations of vertical atmospheric pressure to calculations of surface potential temperature (θS). Here, θS corrects TS to the temperature value to what it would be at mean sea level (i.e., ∼101.3 kPa) in a neutral atmosphere. The vegetation component of the TVWI uses 8-day composites of surface reflectance in the calculation of normalized difference vegetation index (NDVI) values. TVWI and corresponding wet and dry edges are based on an interpretation of scatterplots generated by plotting θS as a function of NDVI. A comparison of spatially-averaged field measurements of volumetric soil water content (VSWC) and TVWI for the 2003-2005 period revealed that variation with time to both was similar in magnitudes. Growing season, point mean measurements of VSWC and TVWI were 31.0% and 28.8% for 2003, 28.6% and 29.4% for 2004, and 40.0% and 38.4% for 2005, respectively. An evaluation of the long-term spatial distribution of land-surface wetness generated with the new θS-NDVI function and a process-based model of soil water content showed a strong relationship (i.e., r2 = 95.7%). PMID:28903212
Xue, Gang; Song, Wen-qi; Li, Shu-chao
2015-01-01
In order to achieve the rapid identification of fire resistive coating for steel structure of different brands in circulating, a new method for the fast discrimination of varieties of fire resistive coating for steel structure by means of near infrared spectroscopy was proposed. The raster scanning near infrared spectroscopy instrument and near infrared diffuse reflectance spectroscopy were applied to collect the spectral curve of different brands of fire resistive coating for steel structure and the spectral data were preprocessed with standard normal variate transformation(standard normal variate transformation, SNV) and Norris second derivative. The principal component analysis (principal component analysis, PCA)was used to near infrared spectra for cluster analysis. The analysis results showed that the cumulate reliabilities of PC1 to PC5 were 99. 791%. The 3-dimentional plot was drawn with the scores of PC1, PC2 and PC3 X 10, which appeared to provide the best clustering of the varieties of fire resistive coating for steel structure. A total of 150 fire resistive coating samples were divided into calibration set and validation set randomly, the calibration set had 125 samples with 25 samples of each variety, and the validation set had 25 samples with 5 samples of each variety. According to the principal component scores of unknown samples, Mahalanobis distance values between each variety and unknown samples were calculated to realize the discrimination of different varieties. The qualitative analysis model for external verification of unknown samples is a 10% recognition ration. The results demonstrated that this identification method can be used as a rapid, accurate method to identify the classification of fire resistive coating for steel structure and provide technical reference for market regulation.
Lebasnier, Adrien; Legallois, Damien; Bienvenu, Boris; Bergot, Emmanuel; Desmonts, Cédric; Zalcman, Gérard; Agostini, Denis; Manrique, Alain
2018-06-01
The identification of cardiac sarcoidosis is challenging as there is no gold standard consensually admitted for its diagnosis. The aim of this study was to evaluate the diagnostic value of the assessment of cardiac dynamic 18 F-fluoro-2-deoxyglucose positron emission tomography ( 18 F-FDG PET/CT) and net influx constant (Ki) in patients suspected of cardiac sarcoidosis. Data obtained from 30 biopsy-proven sarcoidosis patients suspected of cardiac sarcoidosis who underwent a 50-min list-mode cardiac dynamic 18 F-FDG PET/CT after a 24 h high-fat and low-carbohydrate diet were analyzed. A normalized coefficient of variation of quantitative glucose influx constant, calculated as the ratio: standard deviation of the segmental Ki (min -1 )/global Ki (min -1 ) was determined using a validated software (Carimas ® 2.4, Turku PET Centre). Cardiac sarcoidosis was diagnosed according to the Japanese Ministry of Health and Welfare criteria. Receiving operating curve analysis was performed to determine sensitivity and specificity of cardiac dynamic 18 F-FDG PET/CT analysis to diagnose cardiac sarcoidosis. Six out of 30 patients (20%) were diagnosed as having cardiac sarcoidosis. Myocardial glucose metabolism was significantly heterogeneous in patients with cardiac sarcoidosis who showed significantly higher normalized coefficient of variation values compared to patients without cardiac sarcoidosis (0.513 ± 0.175 vs. 0.205 ± 0.081; p = 0.0007). Using ROC curve analysis, we found a cut-off value of 0.38 for the diagnosis of cardiac sarcoidosis with a sensitivity of 100% and a specificity of 91%. Our results suggest that quantitative analysis of cardiac dynamic 18 F-FDG PET/CT could be a useful tool for the diagnosis of cardiac sarcoidosis.
Rapid detection of talcum powder in tea using FT-IR spectroscopy coupled with chemometrics
Li, Xiaoli; Zhang, Yuying; He, Yong
2016-01-01
This paper investigated the feasibility of Fourier transform infrared transmission (FT-IR) spectroscopy to detect talcum powder illegally added in tea based on chemometric methods. Firstly, 210 samples of tea powder with 13 dose levels of talcum powder were prepared for FT-IR spectra acquirement. In order to highlight the slight variations in FT-IR spectra, smoothing, normalize and standard normal variate (SNV) were employed to preprocess the raw spectra. Among them, SNV preprocessing had the best performance with high correlation of prediction (RP = 0.948) and low root mean square error of prediction (RMSEP = 0.108) of partial least squares (PLS) model. Then 18 characteristic wavenumbers were selected based on a hybrid of backward interval partial least squares (biPLS) regression, competitive adaptive reweighted sampling (CARS) algorithm and successive projections algorithm (SPA). These characteristic wavenumbers only accounted for 0.64% of the full wavenumbers. Following that, 18 characteristic wavenumbers were used to build linear and nonlinear determination models by PLS regression and extreme learning machine (ELM), respectively. The optimal model with RP = 0.963 and RMSEP = 0.137 was achieved by ELM algorithm. These results demonstrated that FT-IR spectroscopy with chemometrics could be used successfully to detect talcum powder in tea. PMID:27468701
A Comparison Study of Normal-Incidence Acoustic Impedance Measurements of a Perforate Liner
NASA Technical Reports Server (NTRS)
Schultz, Todd; Liu, Fei; Cattafesta, Louis; Sheplak, Mark; Jones, Michael
2009-01-01
The eduction of the acoustic impedance for liner configurations is fundamental to the reduction of noise from modern jet engines. Ultimately, this property must be measured accurately for use in analytical and numerical propagation models of aircraft engine noise. Thus any standardized measurement techniques must be validated by providing reliable and consistent results for different facilities and sample sizes. This paper compares normal-incidence acoustic impedance measurements using the two-microphone method of ten nominally identical individual liner samples from two facilities, namely 50.8 mm and 25.4 mm square waveguides at NASA Langley Research Center and the University of Florida, respectively. The liner chosen for this investigation is a simple single-degree-of-freedom perforate liner with resonance and anti-resonance frequencies near 1.1 kHz and 2.2 kHz, respectively. The results show that the ten measurements have the most variation around the anti-resonance frequency, where statistically significant differences exist between the averaged results from the two facilities. However, the sample-to-sample variation is comparable in magnitude to the predicted cross-sectional area-dependent cavity dissipation differences between facilities, providing evidence that the size of the present samples does not significantly influence the results away from anti-resonance.
Modeling extreme hurricane damage in the United States using generalized Pareto distribution
NASA Astrophysics Data System (ADS)
Dey, Asim Kumer
Extreme value distributions are used to understand and model natural calamities, man made catastrophes and financial collapses. Extreme value theory has been developed to study the frequency of such events and to construct a predictive model so that one can attempt to forecast the frequency of a disaster and the amount of damage from such a disaster. In this study, hurricane damages in the United States from 1900-2012 have been studied. The aim of the paper is three-fold. First, normalizing hurricane damage and fitting an appropriate model for the normalized damage data. Secondly, predicting the maximum economic damage from a hurricane in future by using the concept of return period. Finally, quantifying the uncertainty in the inference of extreme return levels of hurricane losses by using a simulated hurricane series, generated by bootstrap sampling. Normalized hurricane damage data are found to follow a generalized Pareto distribution. tion. It is demonstrated that standard deviation and coecient of variation increase with the return period which indicates an increase in uncertainty with model extrapolation.
Kravdal, O
2002-01-01
Study objectives: Sociodemographic differentials in cancer survival have occasionally been studied by using a relative survival approach, where all cause mortality among persons with a cancer diagnosis is compared with that among similar persons without such a diagnosis ("normal" mortality). One should ideally take into account that this "normal" mortality not only depends on age, sex, and period, but also various other sociodemographic variables. However, this has very rarely been done. A method that permits such variations to be considered is presented here, as an alternative to an existing technique, and is compared with a relative survival model where these variations are disregarded and two other methods that have often been used. Design, setting, and participants: The focus is on how education and marital status affect the survival from 12 common cancer types among men and women aged 40–80. Four different types of hazard models are estimated, and differences between effects are compared. The data are from registers and censuses and cover the entire Norwegian population for the years 1960–1991. There are more than 100 000 deaths to cancer patients in this material. Main results and conclusions: A model for registered cancer mortality among cancer patients gives results that for most, but not all, sites are very similar to those from a relative survival approach where educational or marital variations in "normal" mortality are taken into account. A relative survival approach without consideration of these sociodemographic variations in "normal" mortality gives more different results, the most extreme example being the doubling of the marital differentials in survival from prostate cancer. When neither sufficient data on cause of death nor on variations in "normal" mortality are available, one may well choose the simplest method, which is to model all cause mortality among cancer patients. There is little reason to bother with the estimation of a relative-survival model that does not allow sociodemographic variations in "normal" mortality beyond those related to age, sex, and period. Fortunately, both these less data demanding models perform well for the most aggressive cancers. PMID:11896140
ESR signals in a core from the lake Baikal: implications for climate change
NASA Astrophysics Data System (ADS)
Toyoda, S.; Hidaka, K.; Takamatsu, N.
2002-12-01
Electron spin resonance dating method has been used for obtaining ages of Quaternary events using speleothem, corals, shells, hydroxyapatite in tooth enamel, gypsum, and quartz (Ikeya, 1993). Recently, it was also found that an ESR signal in quartz of loess is useful to discuss the variation of its origin (e. g. Ono et al., 1998). The method is based on the signal intensity of the heat treated (gamma ray irradiation and heating, Toyoda and Ikeya, 1991) E 1_f center (an unpaired electron at an oxygen vacancy) correlates the original (crystallization) age of quartz (e.g. Toyoda and Hattori, 2000). If there is variation in ages of basement rocks (origin of loess), ESR signal intensity may differentiate the origins. We applied the present method to sediments taken from the core of the lake Baikal with the length of 600m. The ESR intensity of the heat treated E1_f center was determined by an ESR measurement at room temperature for about 100 mg of the bulk samples, with a microwave power of 0.01 mW, field modulation amplitude of 0.1 mT, and with a scan range of 5 mT around g=2.001 after gamma ray irradiation to 1 kGy and subsequent heating at 300C. The ESR signal of the E1_f center was clearly observed although other minerals are also included in the bulk sample. The peak to peak height was taken as the signal intensity after normalizing the height with the gain (the instrumental setting at the time of measurement), mass, and the intensity of the standard simultaneously measured with the sample. The concentrations of the quartz in the bulk samples were obtained by the X ray diffraction study, normalizing the peak intensity with a standard CeO sample. The variation of the ESR signal intensity with depth of the core will be presented together with the possible climate change which may have caused the variation. References M. Ikeya (1993) New applications of electron spin resonance, dating, dosimetry and imaging, World Scientific. Y. Ono, T. Naruse, M. Ikeya, H. Kohno, and S. Toyoda (1998) Global Planet. Change, 18, 129-135. S. Toyoda and M. Ikeya (1991) Geochem. J. 25, 437-445. S. Toyoda and W. Hattori (2000) Appl. Radiat. Isot., 52, 1351-1356.
ESTIMATION OF EFFECTIVE SHEAR STRESS WORKING ON FLAT SHEET MEMBRANE USING FLUIDIZED MEDIA IN MBRs
NASA Astrophysics Data System (ADS)
Zaw, Hlwan Moe; Li, Tairi; Nagaoka, Hiroshi; Mishima, Iori
This study was aimed at estimating effective shear stress working on flat sheet membrane by the addition of fluidized media in MBRs. In both of laboratory-scale aeration tanks with and without fluidized media, shear stress variations on membrane surface and water phase velocity variations were measured and MBR operation was conducted. For the evaluation of the effective shear stress working on membrane surface to mitigate membrane surface, simulation of trans-membrane pressure increase was conducted. It was shown that the time-averaged absolute value of shear stress was smaller in the reactor with fluidized media than without fluidized media. However, due to strong turbulence in the reactor with fluidized media caused by interaction between water-phase and media and also due to the direct interaction between membrane surface and fluidized media, standard deviation of shear stress on membrane surface was larger in the reactor with fluidized media than without media. Histograms of shear stress variation data were fitted well to normal distribution curves and mean plus three times of standard deviation was defined to be a maximum shear stress value. By applying the defined maximum shear stress to a membrane fouling model, trans-membrane pressure curve in the MBR experiment was simulated well by the fouling model indicting that the maximum shear stress, not time-averaged shear stress, can be regarded as an effective shear stress to prevent membrane fouling in submerged flat-sheet MBRs.
Variation is the universal: making cultural evolution work in developmental psychology.
Kline, Michelle Ann; Shamsudheen, Rubeena; Broesch, Tanya
2018-04-05
Culture is a human universal, yet it is a source of variation in human psychology, behaviour and development. Developmental researchers are now expanding the geographical scope of research to include populations beyond relatively wealthy Western communities. However, culture and context still play a secondary role in the theoretical grounding of developmental psychology research, far too often. In this paper, we highlight four false assumptions that are common in psychology, and that detract from the quality of both standard and cross-cultural research in development. These assumptions are: (i) the universality assumption , that empirical uniformity is evidence for universality, while any variation is evidence for culturally derived variation; (ii) the Western centrality assumption , that Western populations represent a normal and/or healthy standard against which development in all societies can be compared; (iii) the deficit assumption , that population-level differences in developmental timing or outcomes are necessarily due to something lacking among non-Western populations; and (iv) the equivalency assumption , that using identical research methods will necessarily produce equivalent and externally valid data, across disparate cultural contexts. For each assumption, we draw on cultural evolutionary theory to critique and replace the assumption with a theoretically grounded approach to culture in development. We support these suggestions with positive examples drawn from research in development. Finally, we conclude with a call for researchers to take reasonable steps towards more fully incorporating culture and context into studies of development, by expanding their participant pools in strategic ways. This will lead to a more inclusive and therefore more accurate description of human development.This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'. © 2018 The Author(s).
Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping
2018-02-01
An increase in analytical imprecision (expressed as CV a ) can introduce additional variability (i.e. noise) to the patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of moving standard deviation (movSD) and a recently described moving sum of outlier (movSO) patient results as means for detecting increased analytical imprecision, and compare their performances against internal quality control (QC) and the average of normal (AoN) approaches. The power of detecting an increase in CV a is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it had generally the worst capability for detecting an increased CV a . On the other hand, the movSD and movSO approaches were able to detect an increased CV a at significantly lower ANPed, particularly for measurands that displayed a relatively small ratio of biological variation to CV a. CONCLUSION: The movSD and movSO approaches are effective in detecting an increase in CV a for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large. However, the clinical risks of an increase in analytical imprecision is attenuated for these measurands as an increased analytical imprecision will only add marginally to the total variation and less likely to impact on the clinical care. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Chest Radiograph Findings in Childhood Pneumonia Cases From the Multisite PERCH Study.
Fancourt, Nicholas; Deloria Knoll, Maria; Baggett, Henry C; Brooks, W Abdullah; Feikin, Daniel R; Hammitt, Laura L; Howie, Stephen R C; Kotloff, Karen L; Levine, Orin S; Madhi, Shabir A; Murdoch, David R; Scott, J Anthony G; Thea, Donald M; Awori, Juliet O; Barger-Kamate, Breanna; Chipeta, James; DeLuca, Andrea N; Diallo, Mahamadou; Driscoll, Amanda J; Ebruke, Bernard E; Higdon, Melissa M; Jahan, Yasmin; Karron, Ruth A; Mahomed, Nasreen; Moore, David P; Nahar, Kamrun; Naorat, Sathapana; Ominde, Micah Silaba; Park, Daniel E; Prosperi, Christine; Wa Somwe, Somwe; Thamthitiwat, Somsak; Zaman, Syed M A; Zeger, Scott L; O'Brien, Katherine L
2017-06-15
Chest radiographs (CXRs) are frequently used to assess pneumonia cases. Variations in CXR appearances between epidemiological settings and their correlation with clinical signs are not well documented. The Pneumonia Etiology Research for Child Health project enrolled 4232 cases of hospitalized World Health Organization (WHO)-defined severe and very severe pneumonia from 9 sites in 7 countries (Bangladesh, the Gambia, Kenya, Mali, South Africa, Thailand, and Zambia). At admission, each case underwent a standardized assessment of clinical signs and pneumonia risk factors by trained health personnel, and a CXR was taken that was interpreted using the standardized WHO methodology. CXRs were categorized as abnormal (consolidation and/or other infiltrate), normal, or uninterpretable. CXRs were interpretable in 3587 (85%) cases, of which 1935 (54%) were abnormal (site range, 35%-64%). Cases with abnormal CXRs were more likely than those with normal CXRs to have hypoxemia (45% vs 26%), crackles (69% vs 62%), tachypnea (85% vs 80%), or fever (20% vs 16%) and less likely to have wheeze (30% vs 38%; all P < .05). CXR consolidation was associated with a higher case fatality ratio at 30-day follow-up (13.5%) compared to other infiltrate (4.7%) or normal (4.9%) CXRs. Clinically diagnosed pneumonia cases with abnormal CXRs were more likely to have signs typically associated with pneumonia. However, CXR-normal cases were common, and clinical signs considered indicative of pneumonia were present in substantial proportions of these cases. CXR-consolidation cases represent a group with an increased likelihood of death at 30 days post-discharge. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.
Bruner, E; Mantini, S; Guerrini, V; Ciccarelli, A; Giombini, A; Borrione, P; Pigozzi, F; Ripani, M
2009-09-01
Baropodometrical digital techniques map the pressures exerted on the foot plant during both static and dynamic loadings. The study of the distribution of such pressures makes it possible to evaluate the postural and locomotory biomechanics together with its pathological variations. This paper is aimed at evaluating the integration between baropodometric analysis (pressure distribution) and geometrical models (shape of the footprints), investigating the pattern of variation associated with normal plantar morphology. The sample includes 91 individuals (47 males, 44 females), ranging from 5 to 85 years of age (mean and standard deviation = 40 + or - 24).The first component of variation is largely associated with the breadth of the isthmus, along a continuous gradient of increasing/decreasing flattening of the foot plant. This character being dominant upon the whole set of morphological components even in a non-pathological sample, such multivariate computation may represent a good diagnostic tool to quantify its degree of expression in individual subject or group samples. Sexual differences are not significant, and allometric variations associated with increasing plantar surface or stature are not quantitatively relevant. There are some differences between adult and young individuals, associated in the latter with a widening of the medial and posterior areas. These results provide a geometrical framework of baropodometrical analysis, suggesting possible future applications in diagnosis and basic research.
Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung
2013-03-01
The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
Slowly progressive aphasia associated with surface dyslexia.
Chiacchio, L; Grossi, D; Stanzione, M; Trojano, L
1993-03-01
We report an Italian patient affected by slowly progressive aphasia (SPA) lasting since four years when he first came to our observation. During the successive four years, we documented a progressive language decline resembling transcortical sensory aphasia, associated with a reading disorder corresponding to surface dyslexia, a form extremely rare in patients with native transparent language. His performance at standard intelligence tasks remained in the normal range, without any variation. CT scan showed left temporal atrophy. We emphasize the heterogeneity of the syndrome of SPA and suggest that it can represent one of the pictures of focal cortical degenerative disease, with variable onset, progression, and evolution.
Multifrequency Retrieval of Cloud Ice Particle Size Distributions
2005-01-01
distribution ( Testud et al., 2001) to represent the PSD. The normalized gamma distribution has several advantages over a typical gamma PSD. A typical gamma...variation correlated with variation in ýL ( Testud et al., 2001). This variation on N, with P, requires a priori restrictions on the variance in R in...Geoscience & Rem. Sensing, 40, 541-549. Testud , J., S. Oury, R. A. Black, P. Amayenc, and X. Dou, 2001: The Concept of "Normalized" Distibution to Describe
Discontinuous Mode Power Supply
NASA Technical Reports Server (NTRS)
Lagadinos, John; Poulos, Ethel
2012-01-01
A document discusses the changes made to a standard push-pull inverter circuit to avoid saturation effects in the main inverter power supply. Typically, in a standard push-pull arrangement, the unsymmetrical primary excitation causes variations in the volt second integral of each half of the excitation cycle that could lead to the establishment of DC flux density in the magnetic core, which could eventually cause saturation of the main inverter transformer. The relocation of the filter reactor normally placed across the output of the power supply solves this problem. The filter reactor was placed in series with the primary circuit of the main inverter transformer, and is presented as impedance against the sudden changes on the input current. The reactor averaged the input current in the primary circuit, avoiding saturation of the main inverter transformer. Since the implementation of the described change, the above problem has not reoccurred, and failures in the main power transistors have been avoided.
When A Standard Candle Flickers
NASA Technical Reports Server (NTRS)
Wilson-Hodge, Colleen A.; Cherry, Michael L.; Case, Gary L.; Baumgartner, Wayne H.; Beklen Elif; Bhat, P. Narayana; Briggs, Michael S.; Camero-Arranz, Ascension; Chaplin, Vandiver; Connaughton, Valerie;
2011-01-01
The Crab Nebula is the only hard X-ray source in the sky that is both bright enough and steady enough to be easily used as a standard candle. As a result, it has been used as a normalization standard by most X-ray/gamma ray telescopes. Although small-scale variations in the nebula are well-known, since the start of science operations of the Fermi Gamma-ray Burst Monitor (GBM) in August 2008 a 7% (70 mcrab) decline has been observed in the overall Crab Nebula flux in the 15-50 keV band, measured with the Earth occultation technique. This decline is independently confirmed in the 15-50 keV band with three other instruments: the Swift Burst Alert Telescope (Swift/BAT), the Rossi X-ray Timing Explorer Proportional Counter Array (RXTE/PCA), and the INTErnational Gamma-Ray Astrophysics Laboratory Imager on Board INTEGRAL (IBIS). A similar decline is also observed in the 3 - 15 keV data from the RXTE/PCA and in the 50 - 100 keV band with GBM, Swift/BAT, and INTEGRAL/IBIS. The change in the pulsed flux measured with RXTE/PCA since 1999 is consistent with the pulsar spin-down, indicating that the observed changes are nebular. Correlated variations in the Crab Nebula flux on a 3 year timescale are also seen independently with the PCA, BAT, and IBIS from 2005 to 2008, with a flux minimum in April 2007. As of August 2010, the current flux has declined below the 2007 minimum.
Reproducibility Between Brain Uptake Ratio Using Anatomic Standardization and Patlak-Plot Methods.
Shibutani, Takayuki; Onoguchi, Masahisa; Noguchi, Atsushi; Yamada, Tomoki; Tsuchihashi, Hiroko; Nakajima, Tadashi; Kinuya, Seigo
2015-12-01
The Patlak-plot and conventional methods of determining brain uptake ratio (BUR) have some problems with reproducibility. We formulated a method of determining BUR using anatomic standardization (BUR-AS) in a statistical parametric mapping algorithm to improve reproducibility. The objective of this study was to demonstrate the inter- and intraoperator reproducibility of mean cerebral blood flow as determined using BUR-AS in comparison to the conventional-BUR (BUR-C) and Patlak-plot methods. The images of 30 patients who underwent brain perfusion SPECT were retrospectively used in this study. The images were reconstructed using ordered-subset expectation maximization and processed using an automatic quantitative analysis for cerebral blood flow of ECD tool. The mean SPECT count was calculated from axial basal ganglia slices of the normal side (slices 31-40) drawn using a 3-dimensional stereotactic region-of-interest template after anatomic standardization. The mean cerebral blood flow was calculated from the mean SPECT count. Reproducibility was evaluated using coefficient of variation and Bland-Altman plotting. For both inter- and intraoperator reproducibility, the BUR-AS method had the lowest coefficient of variation and smallest error range about the Bland-Altman plot. Mean CBF obtained using the BUR-AS method had the highest reproducibility. Compared with the Patlak-plot and BUR-C methods, the BUR-AS method provides greater inter- and intraoperator reproducibility of cerebral blood flow measurement. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Minimum Information about a Genotyping Experiment (MIGEN)
Huang, Jie; Mirel, Daniel; Pugh, Elizabeth; Xing, Chao; Robinson, Peter N.; Pertsemlidis, Alexander; Ding, LiangHao; Kozlitina, Julia; Maher, Joseph; Rios, Jonathan; Story, Michael; Marthandan, Nishanth; Scheuermann, Richard H.
2011-01-01
Genotyping experiments are widely used in clinical and basic research laboratories to identify associations between genetic variations and normal/abnormal phenotypes. Genotyping assay techniques vary from single genomic regions that are interrogated using PCR reactions to high throughput assays examining genome-wide sequence and structural variation. The resulting genotype data may include millions of markers of thousands of individuals, requiring various statistical, modeling or other data analysis methodologies to interpret the results. To date, there are no standards for reporting genotyping experiments. Here we present the Minimum Information about a Genotyping Experiment (MIGen) standard, defining the minimum information required for reporting genotyping experiments. MIGen standard covers experimental design, subject description, genotyping procedure, quality control and data analysis. MIGen is a registered project under MIBBI (Minimum Information for Biological and Biomedical Investigations) and is being developed by an interdisciplinary group of experts in basic biomedical science, clinical science, biostatistics and bioinformatics. To accommodate the wide variety of techniques and methodologies applied in current and future genotyping experiment, MIGen leverages foundational concepts from the Ontology for Biomedical Investigations (OBI) for the description of the various types of planned processes and implements a hierarchical document structure. The adoption of MIGen by the research community will facilitate consistent genotyping data interpretation and independent data validation. MIGen can also serve as a framework for the development of data models for capturing and storing genotyping results and experiment metadata in a structured way, to facilitate the exchange of metadata. PMID:22180825
NASA Astrophysics Data System (ADS)
Geddes, Earl Russell
The details of the low frequency sound field for a rectangular room can be studied by the use of an established analytic technique--separation of variables. The solution is straightforward and the results are well-known. A non -rectangular room has boundary conditions which are not separable and therefore other solution techniques must be used. This study shows that the finite element method can be adapted for use in the study of sound fields in arbitrary shaped enclosures. The finite element acoustics problem is formulated and the modification of a standard program, which is necessary for solving acoustic field problems, is examined. The solution of the semi-non-rectangular room problem (one where the floor and ceiling remain parallel) is carried out by a combined finite element/separation of variables approach. The solution results are used to construct the Green's function for the low frequency sound field in five rooms (or data cases): (1) a rectangular (Louden) room; (2) The smallest wall of the Louden room canted 20 degrees from normal; (3) The largest wall of the Louden room canted 20 degrees from normal; (4) both the largest and the smallest walls are canted 20 degrees; and (5) a five-sided room variation of Case 4. Case 1, the rectangular room was calculated using both the finite element method and the separation of variables technique. The results for the two methods are compared in order to access the accuracy of the finite element method models. The modal damping coefficient are calculated and the results examined. The statistics of the source and receiver average normalized RMS P('2) responses in the 80 Hz, 100 Hz, and 125 Hz one-third octave bands are developed. The receiver averaged pressure response is developed to determine the effect of the source locations on the response. Twelve source locations are examined and the results tabulated for comparison. The effect of a finite sized source is looked at briefly. Finally, the standard deviation of the spatial pressure response is studied. The results for this characteristic show that it not significantly different in any of the rooms. The conclusions of the study are that only the frequency variations of the pressure response are affected by a room's shape. Further, in general, the simplest modification of a rectangular room (i.e., changing the angle of only one of the smallest walls), produces the most pronounced decrease of the pressure response variations in the low frequency region.
Satyam, Shakta Mani; Bairy, Laxminarayana Kurady; Pirasanthan, Rajadurai
2014-12-01
Zincovit tablet is combination of grape seed extract and zinc containing multivitamin-mineral nutritional food supplement. To investigate the influence of single combined formulation of grape seed extract and zinc containing multivitamin-mineral nutritional food supplement tablets (Zincovit) on lipid profile in normal and diet-induced hypercholesterolemic rats. Anti-hyperlipidemic activity of combined formulation of grape seed extract and Zincovit tablets doses ranged from 40 to 160 mg/kg, p.o. was evaluated in normal and diet-induced hypercholesterolemic rats. Hypercholesterolemic animals treated with combined formulation of grape seed extract and Zincovit tablets (nutritional food supplement) at 40, 80 and 160 mg/kg exhibited drastic decrease in serum triglycerides, total cholesterol, LDL-C, VLDL-C and rise of HDL-C in comparison to hypercholesterolemic control group animals. The anti-hyperlipidemic effect of single combined formulation of grape seed extract and Zincovit tablet was comparable with the standard drug atorvastatin treated animals and the variations were statistically non-significant. There was no significant impact of combined formulation of grape seed extract and Zincovit tablets on lipid profile among normal animals in comparison with normal control group. The present study demonstrated that the single combined formulation of grape seed extract and Zincovit tablet is the potential functional nutritional food supplements that could offer a novel therapeutic opportunity against diet-induced hypercholesterolemia in Wistar rats.
Variations in Articulatory Movement with Changes in Speech Task.
ERIC Educational Resources Information Center
Tasko, Stephen M.; McClean, Michael D.
2004-01-01
Studies of normal and disordered articulatory movement often rely on the use of short, simple speech tasks. However, the severity of speech disorders can be observed to vary markedly with task. Understanding task-related variations in articulatory kinematic behavior may allow for an improved understanding of normal and disordered speech motor…
SU-F-J-115: Target Volume and Artifact Evaluation of a New Device-Less 4D CT Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, R; Pan, T
2016-06-15
Purpose: 4DCT is often used in radiation therapy treatment planning to define the extent of motion of the visible tumor (IGTV). Recent available software allows 4DCT images to be created without the use of an external motion surrogate. This study aims to compare this device-less algorithm to a standard device-driven technique (RPM) in regards to artifacts and the creation of treatment volumes. Methods: 34 lung cancer patients who had previously received a cine 4DCT scan on a GE scanner with an RPM determined respiratory signal were selected. Cine images were sorted into 10 phases based on both the RPM signalmore » and the device-less algorithm. Contours were created on standard and device-less maximum intensity projection (MIP) images using a region growing algorithm and manual adjustment to remove other structures. Variations in measurements due to intra-observer differences in contouring were assessed by repeating a subset of 6 patients 2 additional times. Artifacts in each phase image were assessed using normalized cross correlation at each bed position transition. A score between +1 (artifacts “better” in all phases for device-less) and −1 (RPM similarly better) was assigned for each patient based on these results. Results: Device-less IGTV contours were 2.1 ± 1.0% smaller than standard IGTV contours (not significant, p = 0.15). The Dice similarity coefficient (DSC) was 0.950 ± 0.006 indicating good similarity between the contours. Intra-observer variation resulted in standard deviations of 1.2 percentage points in percent volume difference and 0.005 in DSC measurements. Only two patients had improved artifacts with RPM, and the average artifact score (0.40) was significantly greater than zero. Conclusion: Device-less 4DCT can be used in place of the standard method for target definition due to no observed difference between standard and device-less IGTVs. Phase image artifacts were significantly reduced with the device-less method.« less
Development of an Uncertainty Model for the National Transonic Facility
NASA Technical Reports Server (NTRS)
Walter, Joel A.; Lawrence, William R.; Elder, David W.; Treece, Michael D.
2010-01-01
This paper introduces an uncertainty model being developed for the National Transonic Facility (NTF). The model uses a Monte Carlo technique to propagate standard uncertainties of measured values through the NTF data reduction equations to calculate the combined uncertainties of the key aerodynamic force and moment coefficients and freestream properties. The uncertainty propagation approach to assessing data variability is compared with ongoing data quality assessment activities at the NTF, notably check standard testing using statistical process control (SPC) techniques. It is shown that the two approaches are complementary and both are necessary tools for data quality assessment and improvement activities. The SPC approach is the final arbiter of variability in a facility. Its result encompasses variation due to people, processes, test equipment, and test article. The uncertainty propagation approach is limited mainly to the data reduction process. However, it is useful because it helps to assess the causes of variability seen in the data and consequently provides a basis for improvement. For example, it is shown that Mach number random uncertainty is dominated by static pressure variation over most of the dynamic pressure range tested. However, the random uncertainty in the drag coefficient is generally dominated by axial and normal force uncertainty with much less contribution from freestream conditions.
Nyflot, Matthew J.; Yang, Fei; Byrd, Darrin; Bowen, Stephen R.; Sandison, George A.; Kinahan, Paul E.
2015-01-01
Abstract. Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes. PMID:26251842
Nyflot, Matthew J; Yang, Fei; Byrd, Darrin; Bowen, Stephen R; Sandison, George A; Kinahan, Paul E
2015-10-01
Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes.
Nebuya, S; Noshiro, M; Yonemoto, A; Tateno, S; Brown, B H; Smallwood, R H; Milnes, P
2006-05-01
Inter-subject variability has caused the majority of previous electrical impedance tomography (EIT) techniques to focus on the derivation of relative or difference measures of in vivo tissue resistivity. Implicit in these techniques is the requirement for a reference or previously defined data set. This study assesses the accuracy and optimum electrode placement strategy for a recently developed method which estimates an absolute value of organ resistivity without recourse to a reference data set. Since this measurement of tissue resistivity is absolute, in Ohm metres, it should be possible to use EIT measurements for the objective diagnosis of lung diseases such as pulmonary oedema and emphysema. However, the stability and reproducibility of the method have not yet been investigated fully. To investigate these problems, this study used a Sheffield Mk3.5 system which was configured to operate with eight measurement electrodes. As a result of this study, the absolute resistivity measurement was found to be insensitive to the electrode level between 4 and 5 cm above the xiphoid process. The level of the electrode plane was varied between 2 cm and 7 cm above the xiphoid process. Absolute lung resistivity in 18 normal subjects (age 22.6 +/- 4.9, height 169.1 +/- 5.7 cm, weight 60.6 +/- 4.5 kg, body mass index 21.2 +/- 1.6: mean +/- standard deviation) was measured during both normal and deep breathing for 1 min. Three sets of measurements were made over a period of several days on each of nine of the normal male subjects. No significant differences in absolute lung resistivity were found, either during normal tidal breathing between the electrode levels of 4 and 5 cm (9.3 +/- 2.4 Omega m, 9.6 +/- 1.9 Omega m at 4 and 5 cm, respectively: mean +/- standard deviation) or during deep breathing between the electrode levels of 4 and 5 cm (10.9 +/- 2.9 Omega m and 11.1 +/- 2.3 Omega m, respectively: mean +/- standard deviation). However, the differences in absolute lung resistivity between normal and deep tidal breathing at the same electrode level are significant. No significant difference was found in the coefficient of variation between the electrode levels of 4 and 5 cm (9.5 +/- 3.6%, 8.5 +/- 3.2% at 4 and 5 cm, respectively: mean +/- standard deviation in individual subjects). Therefore, the electrode levels of 4 and 5 cm above the xiphoid process showed reasonable reliability in the measurement of absolute lung resistivity both among individuals and over time.
Skoruppa, Katrin; Rosen, Stuart
2014-06-01
In this study, the authors explored phonological processing in connected speech in children with hearing loss. Specifically, the authors investigated these children's sensitivity to English place assimilation, by which alveolar consonants like t and n can adapt to following sounds (e.g., the word ten can be realized as tem in the phrase ten pounds). Twenty-seven 4- to 8-year-old children with moderate to profound hearing impairments, using hearing aids (n = 10) or cochlear implants (n = 17), and 19 children with normal hearing participated. They were asked to choose between pictures of familiar (e.g., pen) and unfamiliar objects (e.g., astrolabe) after hearing t- and n-final words in sentences. Standard pronunciations (Can you find the pen dear?) and assimilated forms in correct (… pem please?) and incorrect contexts (… pem dear?) were presented. As expected, the children with normal hearing chose the familiar object more often for standard forms and correct assimilations than for incorrect assimilations. Thus, they are sensitive to word-final place changes and compensate for assimilation. However, the children with hearing impairment demonstrated reduced sensitivity to word-final place changes, and no compensation for assimilation. Restricted analyses revealed that children with hearing aids who showed good perceptual skills compensated for assimilation in plosives only.
NASA/American Cancer Society High-Resolution Flow Cytometry Project-I
NASA Technical Reports Server (NTRS)
Thomas, R. A.; Krishan, A.; Robinson, D. M.; Sams, C.; Costa, F.
2001-01-01
BACKGROUND: The NASA/American Cancer Society (ACS) flow cytometer can simultaneously analyze the electronic nuclear volume (ENV) and DNA content of cells. This study describes the schematics, resolution, reproducibility, and sensitivity of biological standards analyzed on this unit. METHODS: Calibrated beads and biological standards (lymphocytes, trout erythrocytes [TRBC], calf thymocytes, and tumor cells) were analyzed for ENV versus DNA content. Parallel data (forward scatter versus DNA) from a conventional flow cytometer were obtained. RESULTS: ENV linearity studies yielded an R value of 0.999. TRBC had a coefficient of variation (CV) of 1.18 +/- 0.13. DNA indexes as low as 1.02 were detectable. DNA content of lymphocytes from 42 females was 1.9% greater than that for 60 males, with a noninstrumental variability in total DNA content of 0.5%. The ENV/DNA ratio was constant in 15 normal human tissue samples, but differed in the four animal species tested. The ENV/DNA ratio for a hypodiploid breast carcinoma was 2.3 times greater than that for normal breast tissue. CONCLUSIONS: The high-resolution ENV versus DNA analyses are highly reliable, sensitive, and can be used for the detection of near-diploid tumor cells that are difficult to identify with conventional cytometers. ENV/DNA ratio may be a useful parameter for detection of aneuploid populations.
Average of delta: a new quality control tool for clinical laboratories.
Jones, Graham R D
2016-01-01
Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
Association of auricular pressing and heart rate variability in pre-exam anxiety students.
Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong
2013-03-25
A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale who had been exhibiting an anxious state > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation and in the second half of stimulation. The results revealed that the standard deviation of all normal to normal intervals and the root mean square of standard deviation of normal to normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of standard deviation of normal to normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of standard deviation of normal to normal intervals in students with pre-exam anxiety.
Association of auricular pressing and heart rate variability in pre-exam anxiety students
Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong
2013-01-01
A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale who had been exhibiting an anxious state > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation and in the second half of stimulation. The results revealed that the standard deviation of all normal to normal intervals and the root mean square of standard deviation of normal to normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of standard deviation of normal to normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of standard deviation of normal to normal intervals in students with pre-exam anxiety. PMID:25206734
Eliminating Undesirable Variation in Neonatal Practice: Balancing Standardization and Customization.
Balakrishnan, Maya; Raghavan, Aarti; Suresh, Gautham K
2017-09-01
Consistency of care and elimination of unnecessary and harmful variation are underemphasized aspects of health care quality. This article describes the prevalence and patterns of practice variation in health care and neonatology; discusses the potential role of standardization as a solution to eliminating wasteful and harmful practice variation, particularly when it is founded on principles of evidence-based medicine; and proposes ways to balance standardization and customization of practice to ultimately improve the quality of neonatal care. Copyright © 2017 Elsevier Inc. All rights reserved.
Simplified Approach Charts Improve Data Retrieval Performance
Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.
2016-01-01
The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009
Using the range to calculate the coefficient of variation.
Rhiel, G Steven
2004-12-01
In this research a coefficient of variation (CVhigh-low) is calculated from the highest and lowest values in a set of data. Use of CVhigh-low when the population is normal, leptokurtic, and skewed is discussed. The statistic is the most effective when sampling from the normal distribution. With the leptokurtic distributions, CVhigh-low works well for comparing the relative variability between two or more distributions but does not provide a very "good" point estimate of the population coefficient of variation. With skewed distributions CVhigh-low works well in identifying which data set has the more relative variation but does not specify how much difference there is in the variation. It also does not provide a "good" point estimate.
ERIC Educational Resources Information Center
Chon, HeeCheong; Kraft, Shelly Jo; Zhang, Jingfei; Loucks, Torrey; Ambrose, Nicoline G.
2013-01-01
Purpose: Delayed auditory feedback (DAF) is known to induce stuttering-like disfluencies (SLDs) and cause speech rate reductions in normally fluent adults, but the reason for speech disruptions is not fully known, and individual variation has not been well characterized. Studying individual variation in susceptibility to DAF may identify factors…
40 CFR Appendix III to Part 92 - Smoke Standards for Non-Normalized Measurements
Code of Federal Regulations, 2013 CFR
2013-07-01
...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Pt. 92, App. III Appendix III to Part 92—Smoke Standards for Non-Normalized Measurements Table III-1—Equivalent... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Smoke Standards for Non-Normalized...
40 CFR Appendix III to Part 92 - Smoke Standards for Non-Normalized Measurements
Code of Federal Regulations, 2011 CFR
2011-07-01
...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Pt. 92, App. III Appendix III to Part 92—Smoke Standards for Non-Normalized Measurements Table III-1—Equivalent... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Smoke Standards for Non-Normalized...
40 CFR Appendix III to Part 92 - Smoke Standards for Non-Normalized Measurements
Code of Federal Regulations, 2012 CFR
2012-07-01
...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Pt. 92, App. III Appendix III to Part 92—Smoke Standards for Non-Normalized Measurements Table III-1—Equivalent... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Smoke Standards for Non-Normalized...
40 CFR Appendix III to Part 92 - Smoke Standards for Non-Normalized Measurements
Code of Federal Regulations, 2014 CFR
2014-07-01
...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Pt. 92, App. III Appendix III to Part 92—Smoke Standards for Non-Normalized Measurements Table III-1—Equivalent... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Smoke Standards for Non-Normalized...
40 CFR Appendix III to Part 92 - Smoke Standards for Non-Normalized Measurements
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Pt. 92, App. III Appendix III to Part 92—Smoke Standards for Non-Normalized Measurements Table III-1—Equivalent... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Smoke Standards for Non-Normalized...
On the variation of the Nimbus 7 total solar irradiance
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
1992-01-01
For the interval December 1978 to April 1991, the value of the mean total solar irradiance, as measured by the Nimbus-7 Earth Radiation Budget Experiment channel 10C, was 1,372.02 Wm(exp -2), having a standard deviation of 0.65 Wm(exp -2), a coefficient of variation (mean divided by the standard deviation) of 0.047 percent, and a normal deviate z (a measure of the randomness of the data) of -8.019 (inferring a highly significant non-random variation in the solar irradiance measurements, presumably related to the action of the solar cycle). Comparison of the 12-month moving average (also called the 13-month running mean) of solar irradiance to those of the usual descriptors of the solar cycle (i.e., sunspot number, 10.7-cm solar radio flux, and total corrected sunspot area) suggests possibly significant temporal differences. For example, solar irradiance is found to have been greatest on or before mid 1979 (leading solar maximum for cycle 21), lowest in early 1987 (lagging solar minimum for cycle 22), and was rising again through late 1990 (thus, lagging solar maximum for cycle 22), having last reported values below those that were seen in 1979 (even though cycles 21 and 22 were of comparable strength). Presuming a genuine correlation between solar irradiance and the solar cycle (in particular, sunspot number) one infers that the correlation is weak (having a coefficient of correlation r less than 0.84) and that major excursions (both as 'excesses' and 'deficits') have occurred (about every 2 to 3 years, perhaps suggesting a pulsating Sun).
NASA Astrophysics Data System (ADS)
Li, Haotian; Wei, Meng; Li, Duo; Liu, Yajing; Kim, YoungHee; Zhou, Shiyong
2018-01-01
Recent GPS observations show that slow slip events in south central Alaska are segmented along strike. Here we review several mechanisms that might contribute to this segmentation and focus on two: along-strike variation of slab geometry and effective normal stress. We then test them by running numerical simulations in the framework of rate-and-state friction with a nonplanar fault geometry. Results show that the segmentation is most likely related to the along-strike variation of the effective normal stress on the fault plane caused by the Yakutat Plateau. The Yakutat Plateau could affect the effective normal stress by either lowering the pore pressure in Upper Cook Inlet due to less fluids release or increasing the normal stress due to the extra buoyancy caused by the subducted Yakutat Plateau. We prefer the latter explanation because it is consistent with the relative amplitudes of the effective normal stress in Upper and Lower Cook Inlet and there is very little along-strike variation in Vp/Vs ratio in the fault zone from receiver function analysis. However, we cannot exclude the possibility that the difference in effective normal stress results from along-strike variation of pore pressure due to the uncertainties in the Vp/Vs estimates. Our work implies that a structural anomaly can have a long-lived effect on the subduction zone slip behavior and might be a driving factor on along-strike segmentation of slow slip events.
Missense variations in the cystic fibrosis gene: Heteroduplex formation in the F508C mutation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macek, M. Jr.; Ladanyi, L.; Buerger, J.
1992-11-01
Kobayashi et al. (1990) have described missense variations in the conserved region of exon 10 of the cystic fibrosis (CF) transmembrane conductance regulator gene. In their paper, two [Delta]F508/F508C compound heterozygous individuals were reported. Clinical and epithelial physiological studies in both cases were normal, suggesting that the substitution of cysteine for phenylalanine at position 508, the F508C mutation, is benign. However, Kerem et al. reported a patient with this substitution who had typical symptoms of CF. In routine [Delta]F508 mutation screening by visualization of the 3-bp deletion on a 12% polyacrylamide gel the authors detected an abnormal heteroduplex in themore » father of a CF patient of German origin. Subsequent direct sequencing of the PCR product confirmed that this clinically normal father is a compound heterozygote for the [Delta]F508/F508C mutations. This heteroduplex is slightly different from the usual heteroduplex in [Delta]F508/F508C heteroduplex was not published, it is likely that similar cases can be overseen during the widely performed [Delta]F508 mutation screening by PAGE. Detection of more cases, such as the one presented here, together with careful, standardized clinical examination of the proband, would be valuable to verify the nature of this mutation. 4 refs., 1 fig.« less
NASA Technical Reports Server (NTRS)
Noor, A. K.; Malik, M.
2000-01-01
A study is made of the effects of variation in the lamination and geometric parameters, and boundary conditions of multi-layered composite panels on the accuracy of the detailed response characteristics obtained by five different modeling approaches. The modeling approaches considered include four two-dimensional models, each with five parameters to characterize the deformation in the thickness direction, and a predictor-corrector approach with twelve displacement parameters. The two-dimensional models are first-order shear deformation theory, third-order theory; a theory based on trigonometric variation of the transverse shear stresses through the thickness, and a discrete layer theory. The combination of the following four key elements distinguishes the present study from previous studies reported in the literature: (1) the standard of comparison is taken to be the solutions obtained by using three-dimensional continuum models for each of the individual layers; (2) both mechanical and thermal loadings are considered; (3) boundary conditions other than simply supported edges are considered; and (4) quantities compared include detailed through-the-thickness distributions of transverse shear and transverse normal stresses. Based on the numerical studies conducted, the predictor-corrector approach appears to be the most effective technique for obtaining accurate transverse stresses, and for thermal loading, none of the two-dimensional models is adequate for calculating transverse normal stresses, even when used in conjunction with three-dimensional equilibrium equations.
Juang, K W; Lee, D Y; Ellsworth, T R
2001-01-01
The spatial distribution of a pollutant in contaminated soils is usually highly skewed. As a result, the sample variogram often differs considerably from its regional counterpart and the geostatistical interpolation is hindered. In this study, rank-order geostatistics with standardized rank transformation was used for the spatial interpolation of pollutants with a highly skewed distribution in contaminated soils when commonly used nonlinear methods, such as logarithmic and normal-scored transformations, are not suitable. A real data set of soil Cd concentrations with great variation and high skewness in a contaminated site of Taiwan was used for illustration. The spatial dependence of ranks transformed from Cd concentrations was identified and kriging estimation was readily performed in the standardized-rank space. The estimated standardized rank was back-transformed into the concentration space using the middle point model within a standardized-rank interval of the empirical distribution function (EDF). The spatial distribution of Cd concentrations was then obtained. The probability of Cd concentration being higher than a given cutoff value also can be estimated by using the estimated distribution of standardized ranks. The contour maps of Cd concentrations and the probabilities of Cd concentrations being higher than the cutoff value can be simultaneously used for delineation of hazardous areas of contaminated soils.
Juan, J A; Prat, J; Vera, P; Hoyos, J V; Sánchez-Lacuesta, J; Peris, J L; Dejoz, R; Alepuz, R
1992-09-01
A theoretical analysis by a finite elements model (FEM) of some external fixators (Hoffmann, Wagner, Orthofix and Ilizarov) was carried out. This study considered a logarithmic progress of callus elastic characteristics. A standard configuration of each fixator was defined where design and application characteristics were modified. A comparison among standard configurations and influence of every variation was made with regard to displacement and load transmission at the fracture site. An experimental evaluation of standard configurations was performed with a testing machine. After experimental validation of the theoretical model was achieved, an application of physiological loads which act on a fractured limb during normal gait was analysed. A minimal contribution from an external fixator to the total rigidity of the bone-callus-fixator system was assessed when a callus showing minimum elastic characteristics had just been established. Insufficient rigidity from the fixation devices to assure an adequate immobilization during the early stages of fracture healing was verified. However, regardless of the external fixator, callus development was the overriding element for the rigidity of the fixator-bone system.
Shokouhi, Sepideh; Mckay, John W; Baker, Suzanne L; Kang, Hakmook; Brill, Aaron B; Gwirtsman, Harry E; Riddle, William R; Claassen, Daniel O; Rogers, Baxter P
2016-01-15
Semiquantitative methods such as the standardized uptake value ratio (SUVR) require normalization of the radiotracer activity to a reference tissue to monitor changes in the accumulation of amyloid-β (Aβ) plaques measured with positron emission tomography (PET). The objective of this study was to evaluate the effect of reference tissue normalization in a test-retest (18)F-florbetapir SUVR study using cerebellar gray matter, white matter (two different segmentation masks), brainstem, and corpus callosum as reference regions. We calculated the correlation between (18)F-florbetapir PET and concurrent cerebrospinal fluid (CSF) Aβ1-42 levels in a late mild cognitive impairment cohort with longitudinal PET and CSF data over the course of 2 years. In addition to conventional SUVR analysis using mean and median values of normalized brain radiotracer activity, we investigated a new image analysis technique-the weighted two-point correlation function (wS2)-to capture potentially more subtle changes in Aβ-PET data. Compared with the SUVRs normalized to cerebellar gray matter, all cerebral-to-white matter normalization schemes resulted in a higher inverse correlation between PET and CSF Aβ1-42, while the brainstem normalization gave the best results (high and most stable correlation). Compared with the SUVR mean and median values, the wS2 values were associated with the lowest coefficient of variation and highest inverse correlation to CSF Aβ1-42 levels across all time points and reference regions, including the cerebellar gray matter. The selection of reference tissue for normalization and the choice of image analysis method can affect changes in cortical (18)F-florbetapir uptake in longitudinal studies.
Pelle, Edward; Dong, Kelly; Pernodet, Nadine
2015-01-01
Sirtuins are post-translational modifiers that affect transcriptional signaling, metabolism, and DNA repair. Although originally identified as gene silencers capable of extending cell lifespan, the involvement of sirtuins in many different areas of cell biology has now become widespread. Our approach has been to study the temporal variation and also the effect of environmental stressors, such as ultraviolet B (UVB) and ozone, on sirtuin expression in human epidermal keratinocytes. In this report, we measured the variation in expression of several sirtuins over time and also show how a low dose of UVB can affect this pattern of expression. Moreover, we correlated these changes to variations in hydrogen peroxide (H2O2) and ATP levels. Our data show significant variations in normal sirtuin expression, which may indicate a generalized response by sirtuins to cell cycle kinetics. These results also demonstrate that sirtuins as a family of molecules are sensitive to UVB-induced disruption and may suggest a new paradigm for determining environmental stress on aging and provide direction for the development of new cosmetic products.
40 CFR 190.10 - Standards for normal operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Standards for normal operations. 190.10 Section 190.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards...
Tricarico, Carmela; Pinzani, Pamela; Bianchi, Simonetta; Paglierani, Milena; Distante, Vito; Pazzagli, Mario; Bustin, Stephen A; Orlando, Claudio
2002-10-15
Careful normalization is essential when using quantitative reverse transcription polymerase chain reaction assays to compare mRNA levels between biopsies from different individuals or cells undergoing different treatment. Generally this involves the use of internal controls, such as mRNA specified by a housekeeping gene, ribosomal RNA (rRNA), or accurately quantitated total RNA. The aim of this study was to compare these methods and determine which one can provide the most accurate and biologically relevant quantitative results. Our results show significant variation in the expression levels of 10 commonly used housekeeping genes and 18S rRNA, both between individuals and between biopsies taken from the same patient. Furthermore, in 23 breast cancers samples mRNA and protein levels of a regulated gene, vascular endothelial growth factor (VEGF), correlated only when normalized to total RNA, as did microvessel density. Finally, mRNA levels of VEGF and the most popular housekeeping gene, glyceraldehyde-3-phosphate dehydrogenase (GAPDH), were significantly correlated in the colon. Our results suggest that the use of internal standards comprising single housekeeping genes or rRNA is inappropriate for studies involving tissue biopsies.
Determinants of physiological uptake of 18F-fluorodeoxyglucose in palatine tonsils.
Birkin, Emily; Moore, Katherine S; Huang, Chao; Christopher, Marshall; Rees, John I; Jayaprakasam, Vetrisudar; Fielding, Patrick A
2018-06-01
To determine the extent of physiological variation of uptake of F-flurodeoxyglucose (FDG) within palatine tonsils. To define normal limits for side-to-side variation and characterize factors affecting tonsillar uptake of FDG.Over a period of 16 weeks 299 adult patients at low risk for head and neck pathology, attending our center for FDG positron emission tomography/computed tomography (PET/CT) scans were identified. The maximum standardized uptake value (SUVmax) was recorded for each palatine tonsil. For each patient age, gender, smoking status, scan indication and prior tonsillectomy status as well as weather conditions were noted.There was a wide variation in palatine tonsil FDG uptake with SUVmax values between 1.3 and 11.4 recorded. There was a strong left to right correlation for tonsillar FDG uptake within each patient (P < .01). The right palatine tonsil showed increased FDG uptake (4.63) compared to the left (4.47) (P < .01). In multivariate analysis, gender, scan indication, and prevailing weather had no significant impact of tonsillar FDG uptake. Lower tonsillar uptake was seen in patients with a prior history of tonsillectomy (4.13) than those without this history (4.64) (P < .01). Decreasing tonsillar FDG uptake was seen with advancing age (P < .01). Significantly lower uptake was seen in current smokers (SUVmax 4.2) than nonsmokers (SUV 4.9) (P = .03).Uptake of FDG in palatine tonsils is variable but shows a strong side-to-side correlation. We suggest the left/ right SUVmax ratio as a guide to normality with a first to 99th percentiles of (0.70-1.36) for use in patients not suspected to have tonsillar pathology.
A long-term validation of the modernised DC-ARC-OES solid-sample method.
Flórián, K; Hassler, J; Förster, O
2001-12-01
The validation procedure based on ISO 17025 standard has been used to study and illustrate both the longterm stability of the calibration process of the DC-ARC solid sample spectrometric method and the main validation criteria of the method. In the calculation of the validation characteristics depending on the linearity(calibration), also the fulfilment of predetermining criteria such as normality and homoscedasticity was checked. In order to decide whether there are any trends in the time-variation of the analytical signal or not, also the Neumann test of trend was applied and evaluated. Finally, a comparison with similar validation data of the ETV-ICP-OES method was carried out.
Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock
NASA Technical Reports Server (NTRS)
Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.
2001-01-01
Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, it is observed that the probability distribution is power-law P(bar)(log E) in the wave field E with the bar denoting this averaging over position, In this paper it is shown that stochastic growth theory (SGT) can explain a power-law spatially-averaged distributions P(bar)(log E), when the observed power-law variations of the mean and standard deviation of log E with position are combined with the log normal statistics predicted by SGT at each location.
Oku, Yoshifumi; Arimura, Hidetaka; Nguyen, Tran Thi Thao; Hiraki, Yoshiyuki; Toyota, Masahiko; Saigo, Yasumasa; Yoshiura, Takashi; Hirata, Hideki
2016-01-01
This study investigates whether in-room computed tomography (CT)-based adaptive treatment planning (ATP) is robust against interfractional location variations, namely, interfractional organ motions and/or applicator displacements, in 3D intracavitary brachytherapy (ICBT) for uterine cervical cancer. In ATP, the radiation treatment plans, which have been designed based on planning CT images (and/or MR images) acquired just before the treatments, are adaptively applied for each fraction, taking into account the interfractional location variations. 2D and 3D plans with ATP for 14 patients were simulated for 56 fractions at a prescribed dose of 600 cGy per fraction. The standard deviations (SDs) of location displacements (interfractional location variations) of the target and organs at risk (OARs) with 3D ATP were significantly smaller than those with 2D ATP (P < 0.05). The homogeneity index (HI), conformity index (CI) and tumor control probability (TCP) in 3D ATP were significantly higher for high-risk clinical target volumes than those in 2D ATP. The SDs of the HI, CI, TCP, bladder and rectum D2cc, and the bladder and rectum normal tissue complication probability (NTCP) in 3D ATP were significantly smaller than those in 2D ATP. The results of this study suggest that the interfractional location variations give smaller impacts on the planning evaluation indices in 3D ATP than in 2D ATP. Therefore, the 3D plans with ATP are expected to be robust against interfractional location variations in each treatment fraction. PMID:27296250
NASA Astrophysics Data System (ADS)
Masey, Nicola; Gillespie, Jonathan; Heal, Mathew R.; Hamilton, Scott; Beverland, Iain J.
2017-07-01
We assessed the precision and accuracy of nitrogen dioxide (NO2) concentrations over 2-day, 3-day and 7-day exposure periods measured with the following types of passive diffusion samplers: standard (open) Palmes tubes; standard Ogawa samplers with commercially-prepared Ogawa absorbent pads (Ogawa[S]); and modified Ogawa samplers with absorbent-impregnated stainless steel meshes normally used in Palmes tubes (Ogawa[P]). We deployed these passive samplers close to the inlet of a chemiluminescence NO2 analyser at an urban background site in Glasgow, UK over 32 discrete measurement periods. Duplicate relative standard deviation was <7% for all passive samplers. The Ogawa[P], Ogawa[S] and Palmes samplers explained 93%, 87% and 58% of temporal variation in analyser concentrations respectively. Uptake rates for Palmes and Ogawa[S] samplers were positively and linearly associated with wind-speed (P < 0.01 and P < 0.05 respectively). Computation of adjusted uptake rates using average wind-speed observed during each sampling period increased the variation in analyser concentrations explained by Palmes and Ogawa[S] estimates to 90% and 92% respectively, suggesting that measurements can be corrected for shortening of diffusion path lengths due to wind-speed to improve the accuracy of estimates of short-term NO2 exposure. Monitoring situations where it is difficult to reliably estimate wind-speed variations, e.g. across multiple sites with different unknown exposures to local winds, and personal exposure monitoring, are likely to benefit from protection of these sampling devices from the effects of wind, for example by use of a mesh or membrane across the open end. The uptake rate of Ogawa[P] samplers was not associated with wind-speed resulting in a high correlation between estimated concentrations and observed analyser concentrations. The use of Palmes meshes in Ogawa[P] samplers reduced the cost of sampler preparation and removed uncertainty associated with the unknown manufacturing process for the commercially-prepared collection pads.
Martens, Jürgen
2005-01-01
The hygienic performance of biowaste composting plants to ensure the quality of compost is of high importance. Existing compost quality assurance systems reflect this importance through intensive testing of hygienic parameters. In many countries, compost quality assurance systems are under construction and it is necessary to check and to optimize the methods to state the hygienic performance of composting plants. A set of indicator methods to evaluate the hygienic performance of normal operating biowaste composting plants was developed. The indicator methods were developed by investigating temperature measurements from indirect process tests from 23 composting plants belonging to 11 design types of the Hygiene Design Type Testing System of the German Compost Quality Association (BGK e.V.). The presented indicator methods are the grade of hygienization, the basic curve shape, and the hygienic risk area. The temperature courses of single plants are not distributed normally, but they were grouped by cluster analysis in normal distributed subgroups. That was a precondition to develop the mentioned indicator methods. For each plant the grade of hygienization was calculated through transformation into the standard normal distribution. It shows the part in percent of the entire data set which meet the legal temperature requirements. The hygienization grade differs widely within the design types and falls below 50% for about one fourth of the plants. The subgroups are divided visually into basic curve shapes which stand for different process courses. For each plant the composition of the entire data set out of the various basic curve shapes can be used as an indicator for the basic process conditions. Some basic curve shapes indicate abnormal process courses which can be emended through process optimization. A hygienic risk area concept using the 90% range of variation of the normal temperature courses was introduced. Comparing the design type range of variation with the legal temperature defaults showed hygienic risk areas over the temperature courses which could be minimized through process optimization. The hygienic risk area of four design types shows a suboptimal hygienic performance.
Wang, Anxin; Li, Zhifang; Yang, Yuling; Chen, Guojuan; Wang, Chunxue; Wu, Yuntao; Ruan, Chunyu; Liu, Yan; Wang, Yilong; Wu, Shouling
2016-01-01
To investigate the relationship between baseline systolic blood pressure (SBP) and visit-to-visit blood pressure variability in a general population. This is a prospective longitudinal cohort study on cardiovascular risk factors and cardiovascular or cerebrovascular events. Study participants attended a face-to-face interview every 2 years. Blood pressure variability was defined using the standard deviation and coefficient of variation of all SBP values at baseline and follow-up visits. The coefficient of variation is the ratio of the standard deviation to the mean SBP. We used multivariate linear regression models to test the relationships between SBP and standard deviation, and between SBP and coefficient of variation. Approximately 43,360 participants (mean age: 48.2±11.5 years) were selected. In multivariate analysis, after adjustment for potential confounders, baseline SBPs <120 mmHg were inversely related to standard deviation (P<0.001) and coefficient of variation (P<0.001). In contrast, baseline SBPs ≥140 mmHg were significantly positively associated with standard deviation (P<0.001) and coefficient of variation (P<0.001). Baseline SBPs of 120-140 mmHg were associated with the lowest standard deviation and coefficient of variation. The associations between baseline SBP and standard deviation, and between SBP and coefficient of variation during follow-ups showed a U curve. Both lower and higher baseline SBPs were associated with increased blood pressure variability. To control blood pressure variability, a good target SBP range for a general population might be 120-139 mmHg.
Yip, G W; Ho, P P; Woo, K S; Sanderson, J E
1999-09-01
There is a wide variation (13% to 74%) in the reported prevalence of heart failure associated with normal left ventricular (LV) systolic function (diastolic heart failure). There is no published information on this condition in China. To ascertain the prevalence of diastolic heart failure in this community, 200 consecutive patients with the typical features of congestive heart failure were studied with standard 2-dimensional Doppler echocardiography. A LV ejection fraction (LVEF) >45% was considered normal. The results showed that 12.5% had significant valvular heart disease. Of the remaining 175 patients, 132 had a LVEF >45% (75%). Therefore, 66% of patients with a clinical diagnosis of heart failure had a normal LVEF. Heart failure with normal LV systolic function was more common than systolic heart failure in those >70 years old (65% vs 47%; p = 0.015). Most (57%) had an abnormal relaxation pattern in diastole and 14% had a restrictive filling pattern. In the systolic heart failure group, a restrictive filling pattern was more common (46%). There were no significant differences in the sex distribution, etiology, or prevalence of LV hypertrophy between these 2 heart failure groups. In conclusion, heart failure with a normal LVEF or diastolic heart failure is more common than systolic heart failure in Chinese patients with the symptoms of heart failure. This may be related to older age at presentation and the high prevalence of hypertension in this community.
Huang, Jinhai; Liao, Na; Savini, Giacomo; Li, Yuanguang; Bao, Fangjun; Yu, Ye; Yu, Ayong; Wang, Qinmei
2015-02-01
To determine the repeatability and reproducibility of measurements of central corneal thickness (CCT) using optical low-coherence reflectometry (Lenstar LS900; Haag Streit) in normal eyes and post-femtosecond laser in situ keratomileusis (post-FS-LASIK) eyes and evaluate their agreement with ultrasound (US) pachymetry. CCT was measured using Lenstar and US pachymetry sequentially in normal and post-FS-LASIK eyes by 2 experienced observers. Intraoperator repeatability and interoperator reproducibility were assessed by within-subject standard deviation, test-retest repeatability, coefficient of variation (CoV), and intraclass correlation coefficient. Paired t-tests and Bland-Altman plots were used for analyzing agreement between the 2 devices. In this study, 55 healthy subjects and 50 post-FS-LASIK patients were recruited. Test-retest repeatability of Lenstar was within 10 μm, CoV was less than 1.0%, and intraclass correlation coefficient was more than 0.9 in both normal and post-FS-LASIK groups. Mean difference between these methods was 1.4 ± 4.2 μm and -1.7 ± 5.4 μm, respectively. Moreover, measurements of CCT showed narrow 95% limits of agreement (range, normal group: -6.8 and 9.6 μm; post-FS-LASIK group: -12.4 and 8.9 μm), which implied good agreement. Measurements of CCT using Lenstar showed excellent intraoperator repeatability and interoperator reproducibility both in normal eyes and post-FS-LASIK eyes. Measurements of CCT using Lenstar and US pachymetry showed good agreement and both can be used interchangeably.
Variation of fluorescence spectroscopy during the menstrual cycle
NASA Astrophysics Data System (ADS)
Macaulay, Calum; Richards-Kortum, Rebecca; Utzinger, Urs; Fedyk, Amanda; Neely Atkinson, E.; Cox, Dennis; Follen, Michele
2002-06-01
Cervical autofluorescence has been demonstrated to have potential for real-time diagnosis. Inter-patient and intra-patient variations in fluorescence intensity have been measured. Inter-patient measurements may vary by a factor of ten, while intra-patient measurements may vary by a factor of two. Age and menopausal status have been demonstrated to account for some of the variations, while race and smoking have not. In order to explore in detail the role of the menstrual cycle in intra-patient variation, a study was designed to measure fluorescence excitation emission matrices (EEMs) in patients daily throughout one cycle. Ten patients with a history of normal menstrual cycles and normal Papanicolaou smears underwent daily measurements of fluorescence EEMs from three colposcopically normal sites throughout one menstrual cycle. Changes in signals from porphyrin, NADH, and FAD fluorescence and blood absorption were noted when the data was viewed in a graphical format. Visually interpreted features of the EEMs in this graphical format did not appear to correlate with the day of the menstrual cycle with the exception that blood absorption features were more prominent during the menstrual phase (during which bleeding occurs), suggesting that measurements during the menstrual phase should be avoided. Variations in cycle date likely do not account for inter- or intra-patient variations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Yi, B; Prado, K
2015-06-15
Purpose: This work is to investigate the feasibility of a standardized monthly quality check (QC) of LINAC output determination in a multi-site, multi-LINAC institution. The QC was developed to determine individual LINAC output using the same optimized measurement setup and a constant calibration factor for all machines across the institution. Methods: The QA data over 4 years of 7 Varian machines over four sites, were analyzed. The monthly output constancy checks were performed using a fixed source-to-chamber-distance (SCD), with no couch position adjustment throughout the measurement cycle for all the photon energies: 6 and 18MV, and electron energies: 6, 9,more » 12, 16 and 20 MeV. The constant monthly output calibration factor (Nconst) was determined by averaging the machines’ output data, acquired with the same monthly ion chamber. If a different monthly ion chamber was used, Nconst was then re-normalized to consider its different NDW,Co-60. Here, the possible changes of Nconst over 4 years have been tracked, and the precision of output results based on this standardized monthly QA program relative to the TG-51 calibration for each machine was calculated. Any outlier of the group was investigated. Results: The possible changes of Nconst varied between 0–0.9% over 4 years. The normalization of absorbed-dose-to-water calibration factors corrects for up to 3.3% variations of different monthly QA chambers. The LINAC output precision based on this standardized monthly QC relative to the TG-51 output calibration is within 1% for 6MV photon energy and 2% for 18MV and all the electron energies. A human error in one TG-51 report was found through a close scrutiny of outlier data. Conclusion: This standardized QC allows for a reasonably simplified, precise and robust monthly LINAC output constancy check, with the increased sensitivity needed to detect possible human errors and machine problems.« less
45 CFR 156.255 - Rating variations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Qualified Health Plan Minimum Certification Standards § 156.255 Rating variations. (a) Rating areas. A QHP issuer...
45 CFR 156.255 - Rating variations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Qualified Health Plan Minimum Certification Standards § 156.255 Rating variations. (a) Rating areas. A QHP issuer...
45 CFR 156.255 - Rating variations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Welfare Department of Health and Human Services REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Qualified Health Plan Minimum Certification Standards § 156.255 Rating variations. (a) Rating areas. A QHP issuer...
Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo
2017-05-16
Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics has made the following workflow of data analysis more lucid. However, the normalization process still perplexes researchers, and its effects are always ignored. In order to reveal the influences of normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared in three real data sets with different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiencies of normalization process. Furthermore, the potential biomarkers which were screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and feature selection algorithm Boruta in different normalized data sets were compared. The results indicated the determination of the normalization method was difficult because the commonly accepted rules were easy to fulfill but different normalization methods had unforeseen influences on both the kind and number of potential biomarkers. Lastly, an integrated strategy for normalization method selection was recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Tokihiro, E-mail: toyamamoto@ucdavis.edu
Purpose: Radiotherapy (RT) that selectively avoids irradiating highly functional lung regions may reduce pulmonary toxicity, which is substantial in lung cancer RT. Single-energy computed tomography (CT) pulmonary perfusion imaging has several advantages (e.g., higher resolution) over other modalities and has great potential for widespread clinical implementation, particularly in RT. The purpose of this study was to establish proof-of-principle for single-energy CT perfusion imaging. Methods: Single-energy CT perfusion imaging is based on the following: (1) acquisition of end-inspiratory breath-hold CT scans before and after intravenous injection of iodinated contrast agents, (2) deformable image registration (DIR) for spatial mapping of those twomore » CT image data sets, and (3) subtraction of the precontrast image data set from the postcontrast image data set, yielding a map of regional Hounsfield unit (HU) enhancement, a surrogate for regional perfusion. In a protocol approved by the institutional animal care and use committee, the authors acquired CT scans in the prone position for a total of 14 anesthetized canines (seven canines with normal lungs and seven canines with diseased lungs). The elastix algorithm was used for DIR. The accuracy of DIR was evaluated based on the target registration error (TRE) of 50 anatomic pulmonary landmarks per subject for 10 randomly selected subjects as well as on singularities (i.e., regions where the displacement vector field is not bijective). Prior to perfusion computation, HUs of the precontrast end-inspiratory image were corrected for variation in the lung inflation level between the precontrast and postcontrast end-inspiratory CT scans, using a model built from two additional precontrast CT scans at end-expiration and midinspiration. The authors also assessed spatial heterogeneity and gravitationally directed gradients of regional perfusion for normal lung subjects and diseased lung subjects using a two-sample two-tailed t-test. Results: The mean TRE (and standard deviation) was 0.6 ± 0.7 mm (smaller than the voxel dimension) for DIR between pre contrast and postcontrast end-inspiratory CT image data sets. No singularities were observed in the displacement vector fields. The mean HU enhancement (and standard deviation) was 37.3 ± 10.5 HU for normal lung subjects and 30.7 ± 13.5 HU for diseased lung subjects. Spatial heterogeneity of regional perfusion was found to be higher for diseased lung subjects than for normal lung subjects, i.e., a mean coefficient of variation of 2.06 vs 1.59 (p = 0.07). The average gravitationally directed gradient was strong and significant (R{sup 2} = 0.99, p < 0.01) for normal lung dogs, whereas it was moderate and nonsignificant (R{sup 2} = 0.61, p = 0.12) for diseased lung dogs. Conclusions: This canine study demonstrated the accuracy of DIR with subvoxel TREs on average, higher spatial heterogeneity of regional perfusion for diseased lung subjects than for normal lung subjects, and a strong gravitationally directed gradient for normal lung subjects, providing proof-of-principle for single-energy CT pulmonary perfusion imaging. Further studies such as comparison with other perfusion imaging modalities will be necessary to validate the physiological significance.« less
Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Chen, Bailiang; De Verbizier, Jacques; Beaumont, Marine; Badr, Sammy; Cotten, Anne; Blum, Alain
2017-12-01
To evaluate intra-tumour and striated muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal based and gadolinium concentration based on fixed and variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1% respectively). Area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different and more heterogeneous than normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. • T1 value variation in musculoskeletal tumours is considerable. • T1 values in muscle and tumours are significantly different. • Patient-specific T1 estimation is needed for comparison of inter-patient perfusion parameters. • Technical variation is higher in permeability than semiquantitative perfusion parameters.
Comparison of CCD astrolabe multi-site solar diameter observations
NASA Astrophysics Data System (ADS)
Andrei, A. H.; Boscardin, S. C.; Chollet, F.; Delmas, C.; Golbasi, O.; Jilinski, E. G.; Kiliç, H.; Laclare, F.; Morand, F.; Penna, J. L.; Reis Neto, E.
2004-11-01
Results are presented of measured variations of the photospheric solar diameter, as concurrently observed at three sites of the R2S3 (Réseau de Suivi au Sol du Rayon Solaire) consortium in 2001. Important solar flux variations appeared in that year, just after the maximum of solar activity cycle 23, make that time stretch particularly promising for a comparison of the multi-site results. The sites are those in Turkey, France and Brasil. All observations are made with similar CCD solar astrolabes, and at nearby effective wavelengths. The data reductions share algorithms, that are alike, the outcomes of which are here treated after applying a normalization correction using the Fried parameter. Since the sites are geographically quite far, atmospheric conditions are dismissed as possible causes of the large common trend found. Owing to particularities of each site, the common continuous observational period extends from April to September. The standard deviation for the daily averages is close to 0.47 arcsec for the three sites. Accordingly, the three series are smoothed by a low-pass-band Fourier filter of 150 observations (typically one month). The main common features found are a declining linear trend, of the order of 0.7 mas/day, and a relative maximum, around MJD 2120, of the order of 100 mas. Standard statistical tests endorse the correlation of the three series.
Dynamic analysis of elastic rubber tired car wheel breaking under variable normal load
NASA Astrophysics Data System (ADS)
Fedotov, A. I.; Zedgenizov, V. G.; Ovchinnikova, N. I.
2017-10-01
The purpose of the paper is to analyze the dynamics of the braking of the wheel under normal load variations. The paper uses a mathematical simulation method according to which the calculation model of an object as a mechanical system is associated with a dynamically equivalent schematic structure of the automatic control. Transfer function tool analyzing structural and technical characteristics of an object as well as force disturbances were used. It was proved that the analysis of dynamic characteristics of the wheel subjected to external force disturbances has to take into account amplitude and phase-frequency characteristics. Normal load variations impact car wheel braking subjected to disturbances. The closer slip to the critical point is, the higher the impact is. In the super-critical area, load variations cause fast wheel blocking.
Single-cell copy number variation detection
2011-01-01
Detection of chromosomal aberrations from a single cell by array comparative genomic hybridization (single-cell array CGH), instead of from a population of cells, is an emerging technique. However, such detection is challenging because of the genome artifacts and the DNA amplification process inherent to the single cell approach. Current normalization algorithms result in inaccurate aberration detection for single-cell data. We propose a normalization method based on channel, genome composition and recurrent genome artifact corrections. We demonstrate that the proposed channel clone normalization significantly improves the copy number variation detection in both simulated and real single-cell array CGH data. PMID:21854607
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Y; Wang, J; Wang, C
Purpose: To investigate the sensitivity of classic texture features to variations of MRI acquisition parameters. Methods: This study was performed on American College of Radiology (ACR) MRI Accreditation Program Phantom. MR imaging was acquired on a GE 750 3T scanner with XRM explain gradient, employing a T1-weighted images (TR/TE=500/20ms) with the following parameters as the reference standard: number of signal average (NEX) = 1, matrix size = 256×256, flip angle = 90°, slice thickness = 5mm. The effect of the acquisition parameters on texture features with and without non-uniformity correction were investigated respectively, while all the other parameters were keptmore » as reference standard. Protocol parameters were set as follows: (a). NEX = 0.5, 2 and 4; (b).Phase encoding steps = 128, 160 and 192; (c). Matrix size = 128×128, 192×192 and 512×512. 32 classic texture features were generated using the classic gray level run length matrix (GLRLM) and gray level co-occurrence matrix (GLCOM) from each image data set. Normalized range ((maximum-minimum)/mean) was calculated to determine variation among the scans with different protocol parameters. Results: For different NEX, 31 out of 32 texture features’ range are within 10%. For different phase encoding steps, 31 out of 32 texture features’ range are within 10%. For different acquisition matrix size without non-uniformity correction, 14 out of 32 texture features’ range are within 10%; for different acquisition matrix size with non-uniformity correction, 16 out of 32 texture features’ range are within 10%. Conclusion: Initial results indicated that those texture features that range within 10% are less sensitive to variations in T1-weighted MRI acquisition parameters. This might suggest that certain texture features might be more reliable to be used as potential biomarkers in MR quantitative image analysis.« less
NASA Astrophysics Data System (ADS)
Guo, Fang; Li, Xingli; Kuang, Hua; Bai, Yang; Zhou, Huaguo
2016-11-01
The original cost potential field cellular automata describing normal pedestrian evacuation is extended to study more general evacuation scenarios. Based on the cost potential field function, through considering the psychological characteristics of crowd under emergencies, the quantitative formula of behavior variation is introduced to reflect behavioral changes caused by psychology tension. The numerical simulations are performed to investigate the effects of the magnitude of behavior variation, the different pedestrian proportions with different behavior variation and other factors on the evacuation efficiency and process in a room. The spatiotemporal dynamic characteristic during the evacuation process is also discussed. The results show that compared with the normal evacuation, the behavior variation under an emergency does not necessarily lead to the decrease of the evacuation efficiency. At low density, the increase of the behavior variation can improve the evacuation efficiency, while at high density, the evacuation efficiency drops significantly with the increasing amplitude of the behavior variation. In addition, the larger proportion of pedestrian affected by the behavior variation will prolong the evacuation time.
Chou, C P; Bentler, P M; Satorra, A
1991-11-01
Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.
Zhao, Li-Ting; Xiang, Yu-Hong; Dai, Yin-Mei; Zhang, Zhuo-Yong
2010-04-01
Near infrared spectroscopy was applied to measure the tissue slice of endometrial tissues for collecting the spectra. A total of 154 spectra were obtained from 154 samples. The number of normal, hyperplasia, and malignant samples was 36, 60, and 58, respectively. Original near infrared spectra are composed of many variables, for example, interference information including instrument errors and physical effects such as particle size and light scatter. In order to reduce these influences, original spectra data should be performed with different spectral preprocessing methods to compress variables and extract useful information. So the methods of spectral preprocessing and wavelength selection have played an important role in near infrared spectroscopy technique. In the present paper the raw spectra were processed using various preprocessing methods including first derivative, multiplication scatter correction, Savitzky-Golay first derivative algorithm, standard normal variate, smoothing, and moving-window median. Standard deviation was used to select the optimal spectral region of 4 000-6 000 cm(-1). Then principal component analysis was used for classification. Principal component analysis results showed that three types of samples could be discriminated completely and the accuracy almost achieved 100%. This study demonstrated that near infrared spectroscopy technology and chemometrics method could be a fast, efficient, and novel means to diagnose cancer. The proposed methods would be a promising and significant diagnosis technique of early stage cancer.
Du, Baoqiang; Dong, Shaofeng; Wang, Yanfeng; Guo, Shuting; Cao, Lingzhi; Zhou, Wei; Zuo, Yandi; Liu, Dan
2013-11-01
A wide-frequency and high-resolution frequency measurement method based on the quantized phase step law is presented in this paper. Utilizing a variation law of the phase differences, the direct different frequency phase processing, and the phase group synchronization phenomenon, combining an A/D converter and the adaptive phase shifting principle, a counter gate is established in the phase coincidences at one-group intervals, which eliminates the ±1 counter error in the traditional frequency measurement method. More importantly, the direct phase comparison, the measurement, and the control between any periodic signals have been realized without frequency normalization in this method. Experimental results show that sub-picosecond resolution can be easily obtained in the frequency measurement, the frequency standard comparison, and the phase-locked control based on the phase quantization processing technique. The method may be widely used in navigation positioning, space techniques, communication, radar, astronomy, atomic frequency standards, and other high-tech fields.
Fasel, J H; Gingins, P; Kalra, P; Magnenat-Thalmann, N; Baur, C; Cuttat, J F; Muster, M; Gailloud, P
1997-01-01
Endoscopic surgery, also called minimally invasive surgery, is presumed drastically to reduce postoperative morbidity and thus to offer both human and economic benefits. For the surgeon, however, this approach leads to a number of gestural challenges that require extensive training to be mastered. In order to replace experimentation on animals and patients, we developed a simulator for endoscopic surgery. To achieve this goal, a first step was to develop a working prototype, a "standard patient," on which the informatic and microengineering tools could be validated. We used the visible man dataset for this purpose. The external shape of the visible man's liver, his biliary passages, and his extrahepatic portal system turned out to be fully within the standard pattern of normal anatomy. Anatomic variations were observed in the intrahepatic right portal vein, the hepatic veins, and the arterial blood supply to the liver. Thus, the visible man dataset reveals itself to be well suited for the simulation of minimally invasive surgical operation such as endoscopic cholecystectomy.
Rajjoub, Raneem D; Trimboli-Heidler, Carmelina; Packer, Roger J; Avery, Robert A
2015-01-01
To determine the intra- and intervisit reproducibility of circumpapillary retinal nerve fiber layer (RNFL) thickness measures using eye tracking-assisted spectral-domain optical coherence tomography (SD OCT) in children with nonglaucomatous optic neuropathy. Prospective longitudinal study. Circumpapillary RNFL thickness measures were acquired with SD OCT using the eye-tracking feature at 2 separate study visits. Children with normal and abnormal vision (visual acuity ≥ 0.2 logMAR above normal and/or visual field loss) who demonstrated clinical and radiographic stability were enrolled. Intra- and intervisit reproducibility was calculated for the global average and 9 anatomic sectors by calculating the coefficient of variation and intraclass correlation coefficient. Forty-two subjects (median age 8.6 years, range 3.9-18.2 years) met inclusion criteria and contributed 62 study eyes. Both the abnormal and normal vision cohort demonstrated the lowest intravisit coefficient of variation for the global RNFL thickness. Intervisit reproducibility remained good for those with normal and abnormal vision, although small but statistically significant increases in the coefficient of variation were observed for multiple anatomic sectors in both cohorts. The magnitude of visual acuity loss was significantly associated with the global (ß = 0.026, P < .01) and temporal sector coefficient of variation (ß = 0.099, P < .01). SD OCT with eye tracking demonstrates highly reproducible RNFL thickness measures. Subjects with vision loss demonstrate greater intra- and intervisit variability than those with normal vision. Copyright © 2015 Elsevier Inc. All rights reserved.
Group normalization for genomic data.
Ghandi, Mahmoud; Beer, Michael A
2012-01-01
Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as Quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets.
Group Normalization for Genomic Data
Ghandi, Mahmoud; Beer, Michael A.
2012-01-01
Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as Quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets. PMID:22912661
40 CFR 190.10 - Standards for normal operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Standards for normal operations. 190.10 Section 190.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental...
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
Lee, Ji-Hyun; Yang, Seungman; Park, Jonghyun; Kim, Hee Chan; Kim, Eun-Hee; Jang, Young-Eun; Kim, Jin-Tae; Kim, Hee-Soo
2018-06-19
Respiratory variations in photoplethysmography amplitude enable volume status assessment. However, the contact force between the measurement site and sensor can affect photoplethysmography waveforms. We aimed to evaluate contact force effects on respiratory variations in photoplethysmography waveforms in children under general anesthesia. Children aged 3-5 years were enrolled. After anesthetic induction, mechanical ventilation commenced at a tidal volume of 10 mL/kg. Photoplethysmographic signals were obtained in the supine position from the index finger using a force sensor-integrated clip-type photoplethysmography sensor that increased the contact force from 0-1.4 N for 20 respiratory cycles at each force. The AC amplitude (pulsatile component), DC amplitude (nonpulsatile component), AC/DC ratio, and respiratory variations in photoplethysmography amplitude were calculated. Data from 34 children were analyzed. Seven contact forces at 0.2-N increments were evaluated for each patient. The normalized AC amplitude increased maximally at a contact force of 0.4-0.6 N and decreased with increasing contact force. However, the normalized DC amplitude increased with a contact force exceeding 0.4 N. ΔPOP decreased slightly and increased from the point when the AC amplitude started to decrease as contact force increased. In a 0.2-1.2 N contact force range, significant changes in the normalized AC amplitude, normalized DC amplitude, AC/DC ratio, and respiratory variations in photoplethysmography amplitude were observed. Respiratory variations in photoplethysmography amplitude changed according to variable contact forces; therefore, these measurements may not reflect respiration-induced stroke volume variations. Clinicians should consider contact force bias when interpreting morphological data from photoplethysmography signals. © 2018 John Wiley & Sons Ltd.
Deveau, Michael A; Gutiérrez, Alonso N; Mackie, Thomas R; Tomé, Wolfgang A; Forrest, Lisa J
2010-01-01
Intensity-modulated radiation therapy (IMRT) can be employed to yield precise dose distributions that tightly conform to targets and reduce high doses to normal structures by generating steep dose gradients. Because of these sharp gradients, daily setup variations may have an adverse effect on clinical outcome such that an adjacent normal structure may be overdosed and/or the target may be underdosed. This study provides a detailed analysis of the impact of daily setup variations on optimized IMRT canine nasal tumor treatment plans when variations are not accounted for due to the lack of image guidance. Setup histories of ten patients with nasal tumors previously treated using helical tomotherapy were replanned retrospectively to study the impact of daily setup variations on IMRT dose distributions. Daily setup shifts were applied to IMRT plans on a fraction-by-fraction basis. Using mattress immobilization and laser alignment, mean setup error magnitude in any single dimension was at least 2.5 mm (0-10.0 mm). With inclusions of all three translational coordinates, mean composite offset vector was 5.9 +/- 3.3 mm. Due to variations, a loss of equivalent uniform dose for target volumes of up to 5.6% was noted which corresponded to a potential loss in tumor control probability of 39.5%. Overdosing of eyes and brain was noted by increases in mean normalized total dose and highest normalized dose given to 2% of the volume. Findings suggest that successful implementation of canine nasal IMRT requires daily image guidance to ensure accurate delivery of precise IMRT distributions when non-rigid immobilization techniques are utilized. Unrecognized geographical misses may result in tumor recurrence and/or radiation toxicities to the eyes and brain.
Deveau, Michael A.; Gutiérrez, Alonso N.; Mackie, Thomas R.; Tomé, Wolfgang A.; Forrest, Lisa J.
2009-01-01
Intensity-modulated radiation therapy (IMRT) can be employed to yield precise dose distributions that tightly conform to targets and reduce high doses to normal structures by generating steep dose gradients. Because of these sharp gradients, daily setup variations may have an adverse effect on clinical outcome such that an adjacent normal structure may be overdosed and/or the target may be underdosed. This study provides a detailed analysis of the impact of daily setup variations on optimized IMRT canine nasal tumor treatment plans when variations are not accounted for due to the lack of image guidance. Setup histories of ten patients with nasal tumors previously treated using helical tomotherapy were replanned retrospectively to study the impact of daily setup variations on IMRT dose distributions. Daily setup shifts were applied to IMRT plans on a fraction-by-fraction basis. Using mattress immobilization and laser alignment, mean setup error magnitude in any single dimension was at least 2.5mm (0-10.0mm). With inclusions of all three translational coordinates, mean composite offset vector was 5.9±3.3mm. Due to variations, a loss of equivalent uniform dose (EUD) for target volumes of up to 5.6% was noted which corresponded to a potential loss in TCP of 39.5%. Overdosing of eyes and brain was noted by increases in mean normalized total dose (NTDmean) and highest normalized dose given to 2% of the volume (NTD2%). Findings suggest that successful implementation of canine nasal IMRT requires daily image guidance to ensure accurate delivery of precise IMRT distributions when non-rigid immobilization techniques are utilized. Unrecognized geographical misses may result in tumor recurrence and/or radiation toxicities to the eyes and brain. PMID:20166402
Exploring Students' Conceptions of the Standard Deviation
ERIC Educational Resources Information Center
delMas, Robert; Liu, Yan
2005-01-01
This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…
NASA Astrophysics Data System (ADS)
Jumelet, Julien; David, Christine; Bekki, Slimane; Keckhut, Philippe
2009-01-01
The determination of stratospheric particle microphysical properties from multiwavelength lidar, including Rayleigh and/or Raman detection, has been widely investigated. However, most lidar systems are uniwavelength operating at 532 nm. Although the information content of such lidar data is too limited to allow the retrieval of the full size distribution, the coupling of two or more uniwavelength lidar measurements probing the same moving air parcel may provide some meaningful size information. Within the ORACLE-O3 IPY project, the coordination of several ground-based lidars and the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) space-borne lidar is planned during measurement campaigns called MATCH-PSC (Polar Stratospheric Clouds). While probing the same moving air masses, the evolution of the measured backscatter coefficient (BC) should reflect the variation of particles microphysical properties. A sensitivity study of 532 nm lidar particle backscatter to variations of particles size distribution parameters is carried out. For simplicity, the particles are assumed to be spherical (liquid) particles and the size distribution is represented with a unimodal log-normal distribution. Each of the four microphysical parameters (i.e. log-normal size distribution parameters, refractive index) are analysed separately, while the three others are remained set to constant reference values. Overall, the BC behaviour is not affected by the initial values taken as references. The total concentration (N0) is the parameter to which BC is least sensitive, whereas it is most sensitive to the refractive index (m). A 2% variation of m induces a 15% variation of the lidar BC, while the uncertainty on the BC retrieval can also reach 15%. This result underlines the importance of having both an accurate lidar inversion method and a good knowledge of the temperature for size distribution retrieval techniques. The standard deviation ([sigma]) is the second parameter to which BC is most sensitive to. Yet, the impact of m and [sigma] on BC variations is limited by the realistic range of their variations. The mean radius (rm) of the size distribution is thus the key parameter for BC, as it can vary several-fold. BC is most sensitive to the presence of large particles. The sensitivity of BC to rm and [sigma] variations increases when the initial size distributions are characterized by low rm and large [sigma]. This makes lidar more suitable to detect particles growing on background aerosols than on volcanic aerosols.
NASA Astrophysics Data System (ADS)
Solanki, Raman; Singh, Narendra; Kiran Kumar, N. V. P.; Rajeev, K.; Dhaka, S. K.
2016-03-01
We present the diurnal variations of surface-layer characteristics during spring (March-May 2013) observed near a mountain ridge at Nainital (29.4°N, 79.5°E, 1926 m above mean sea level), a hill station located in the southern part of the central Himalayas. During spring, this region generally witnesses fair-weather conditions and significant solar heating of the surface, providing favourable conditions for the systematic diurnal evolution of the atmospheric boundary layer. We mainly utilize the three-dimensional wind components and virtual temperature observed with sonic anemometers (sampling at 25 Hz) mounted at 12- and 27-m heights on a meteorological tower. Tilt corrections using the planar-fit method have been applied to convert the measurements to streamline-following coordinate system before estimating turbulence parameters. The airflow at this ridge site is quite different from slope flows. Notwithstanding the prevalence of strong large-scale north-westerly winds, the diurnal variation of the mountain circulation is clearly discernible with the increase of wind speed and a small but distinct change in wind direction during the afternoon period. Such an effect further modulates the surface-layer water vapour content, which increases during the daytime and results in the development of boundary-layer clouds in the evening. The sensible heat flux ( H) shows peak values around noon, with its magnitude increasing from March (222± 46 W m^{-2}) to May (353± 147 W m^{-2}). The diurnal variation of turbulent kinetic energy ( e) is insignificant during March while its mean value is enhanced by 30-50 % of the post-midnight value during the afternoon (1400-1600 IST), delayed by {≈ }2 h compared to the peak in H. This difference between the phase variations of incoming shortwave flux, H and e primarily arise due to the competing effects of turbulent eddies produced by thermals and wind shear, the latter increase significantly with time until nighttime during April-May. Variations of the standard deviations of vertical wind normalized with friction velocity (σ _w/u_{*}) and temperature normalized with scaling temperature (σ _{θ }/T_{*}) as functions of stability parameter ( z / L) indicate that they follow a power-law variation during unstable conditions, with an index of 1/3 for the former and -1/3 for the latter. The coefficients defining the above variations are found in agreement with those derived over flat as well as complex terrain.
A standardised protocol for texture feature analysis of endoscopic images in gynaecological cancer.
Neofytou, Marios S; Tanos, Vasilis; Pattichis, Marios S; Pattichis, Constantinos S; Kyriacou, Efthyvoulos C; Koutsouris, Dimitris D
2007-11-29
In the development of tissue classification methods, classifiers rely on significant differences between texture features extracted from normal and abnormal regions. Yet, significant differences can arise due to variations in the image acquisition method. For endoscopic imaging of the endometrium, we propose a standardized image acquisition protocol to eliminate significant statistical differences due to variations in: (i) the distance from the tissue (panoramic vs close up), (ii) difference in viewing angles and (iii) color correction. We investigate texture feature variability for a variety of targets encountered in clinical endoscopy. All images were captured at clinically optimum illumination and focus using 720 x 576 pixels and 24 bits color for: (i) a variety of testing targets from a color palette with a known color distribution, (ii) different viewing angles, (iv) two different distances from a calf endometrial and from a chicken cavity. Also, human images from the endometrium were captured and analysed. For texture feature analysis, three different sets were considered: (i) Statistical Features (SF), (ii) Spatial Gray Level Dependence Matrices (SGLDM), and (iii) Gray Level Difference Statistics (GLDS). All images were gamma corrected and the extracted texture feature values were compared against the texture feature values extracted from the uncorrected images. Statistical tests were applied to compare images from different viewing conditions so as to determine any significant differences. For the proposed acquisition procedure, results indicate that there is no significant difference in texture features between the panoramic and close up views and between angles. For a calibrated target image, gamma correction provided an acquired image that was a significantly better approximation to the original target image. In turn, this implies that the texture features extracted from the corrected images provided for better approximations to the original images. Within the proposed protocol, for human ROIs, we have found that there is a large number of texture features that showed significant differences between normal and abnormal endometrium. This study provides a standardized protocol for avoiding any significant texture feature differences that may arise due to variability in the acquisition procedure or the lack of color correction. After applying the protocol, we have found that significant differences in texture features will only be due to the fact that the features were extracted from different types of tissue (normal vs abnormal).
Sensitive detection of KIT D816V in patients with mastocytosis.
Tan, Angela; Westerman, David; McArthur, Grant A; Lynch, Kevin; Waring, Paul; Dobrovic, Alexander
2006-12-01
The 2447 A > T pathogenic variation at codon 816 of exon 17 (D816V) in the KIT gene, occurring in systemic mastocytosis (SM), leads to constitutive activation of tyrosine kinase activity and confers resistance to the tyrosine kinase inhibitor imatinib mesylate. Thus detection of this variation in SM patients is important for determining treatment strategy, but because the population of malignant cells carrying this variation is often small relative to the normal cell population, standard molecular detection methods can be unsuccessful. We developed 2 methods for detection of KIT D816V in SM patients. The first uses enriched sequencing of mutant alleles (ESMA) after BsmAI restriction enzyme digestion, and the second uses an allele-specific competitive blocker PCR (ACB-PCR) assay. We used these methods to assess 26 patients undergoing evaluation for SM, 13 of whom had SM meeting WHO classification criteria (before variation testing), and we compared the results with those obtained by direct sequencing. The sensitivities of the ESMA and the ACB-PCR assays were 1% and 0.1%, respectively. According to the ACB-PCR assay results, 65% (17/26) of patients were positive for D816V. Of the 17 positive cases, only 23.5% (4/17) were detected by direct sequencing. ESMA detected 2 additional exon 17 pathogenic variations, D816Y and D816N, but detected only 12 (70.5%) of the 17 D816V-positive cases. Overall, 100% (15/15) of the WHO-classified SM cases were codon 816 pathogenic variation positive. These findings demonstrate that the ACB-PCR assay combined with ESMA is a rapid and highly sensitive approach for detection of KIT D816V in SM patients.
Oku, Yoshifumi; Arimura, Hidetaka; Nguyen, Tran Thi Thao; Hiraki, Yoshiyuki; Toyota, Masahiko; Saigo, Yasumasa; Yoshiura, Takashi; Hirata, Hideki
2016-11-01
This study investigates whether in-room computed tomography (CT)-based adaptive treatment planning (ATP) is robust against interfractional location variations, namely, interfractional organ motions and/or applicator displacements, in 3D intracavitary brachytherapy (ICBT) for uterine cervical cancer. In ATP, the radiation treatment plans, which have been designed based on planning CT images (and/or MR images) acquired just before the treatments, are adaptively applied for each fraction, taking into account the interfractional location variations. 2D and 3D plans with ATP for 14 patients were simulated for 56 fractions at a prescribed dose of 600 cGy per fraction. The standard deviations (SDs) of location displacements (interfractional location variations) of the target and organs at risk (OARs) with 3D ATP were significantly smaller than those with 2D ATP (P < 0.05). The homogeneity index (HI), conformity index (CI) and tumor control probability (TCP) in 3D ATP were significantly higher for high-risk clinical target volumes than those in 2D ATP. The SDs of the HI, CI, TCP, bladder and rectum D 2cc , and the bladder and rectum normal tissue complication probability (NTCP) in 3D ATP were significantly smaller than those in 2D ATP. The results of this study suggest that the interfractional location variations give smaller impacts on the planning evaluation indices in 3D ATP than in 2D ATP. Therefore, the 3D plans with ATP are expected to be robust against interfractional location variations in each treatment fraction. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
Plaquing procedure for infectious hematopoietic necrosis virus
Burke, J.A.; Mulcahy, D.
1980-01-01
A single overlay plaque assay was designed and evaluated for infectious hematopoietic necrosis virus. Epithelioma papillosum carpio cells were grown in normal atmosphere with tris(hydroxymethyl)aminomethane- or HEPES (N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid)-buffered media. Plaques were larger and formed more quickly on 1- to 3-day-old cell monolayers than on older monolayers. Cell culture medium with a 10% addition of fetal calf serum (MEM 10) or without serum (MEM 0) were the most efficient virus diluents. Dilution with phosphate-buffered saline, saline, normal broth, or deionized water reduced plaque numbers. Variations in the pH (7.0 to 8.0) of a MEM 0 diluent did not affect plaque numbers. Increasing the volume of viral inoculum above 0.15 ml (15- by 60-mm plate) decreased plaquing efficiency. Significantly more plaques occurred under gum tragacanth and methylcellulose than under agar or agarose overlays. Varying the pH (6.8 to 7.4) of methylcellulose overlays did not significantly change plaque numbers. More plaques formed under the thicker overlays of both methylcellulose and gum tragacanth. Tris(hydroxymethyl)aminomethane and HEPES performed equally well, buffering either medium or overlay. Plaque numbers were reduced when cells were rinsed after virus adsorption or less than 1 h was allowed for adsorption. Variation in adsorption time between 60 and 180 min did not change plaque numbers. The mean plaque formation time was 7 days at 16 degrees C. The viral dose response was linear when the standardized assay was used.
Smooth quantile normalization.
Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada
2018-04-01
Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
[Dandy-walker syndrome and microdeletions on chromosome 7].
Liao, Can; Fu, Fang; Li, Ru; Pan, Min; Yang, Xin; Yi, Cui-xing; Li, Jian; Li, Dong-zhi
2012-02-01
To investigate genetic etiology of Dandy-Walker syndrome with array-based comparative genomic hybridization (array-CGH). Eight fetuses with Dandy-Walker malformations but normal karyotypes by conventional cytogenetic technique were selected. DNA samples were extracted and hybridized with Affymetrix cytogenetic 2.7 M arrays by following the manufacturer's standard protocol. The data were analyzed by special software packages. By using array-CGH technique, common deletions and duplication on chromosome 7p21.3 were identified in three cases, within which were central nervous system disease associated genes NDUFA4 and PHF14. Copy number variations (CNVs) of chromosome 7p21.3 region are associated with Dandy-Walker malformations which may be due to haploinsufficiency or overexpression of NDUFA4 and PHF14 genes.
Variations in Sexual Behavior.
ERIC Educational Resources Information Center
Juhasz, Anne McCreary
1983-01-01
Questions are raised about the difficulty of defining normal and atypical sexual behavior. Variations from normalcy that students, parents, and educators are most likely to encounter are discussed. The importance of dealing with variations in ways that are best for the individual and the group is emphasized. (PP)
Common Genetic Variant in VIT Is Associated with Human Brain Asymmetry.
Tadayon, Sayed H; Vaziri-Pashkam, Maryam; Kahali, Pegah; Ansari Dezfouli, Mitra; Abbassian, Abdolhossein
2016-01-01
Brain asymmetry varies across individuals. However, genetic factors contributing to this normal variation are largely unknown. Here we studied variation of cortical surface area asymmetry in a large sample of subjects. We performed principal component analysis (PCA) to capture correlated asymmetry variation across cortical regions. We found that caudal and rostral anterior cingulate together account for a substantial part of asymmetry variation among individuals. To find SNPs associated with this subset of brain asymmetry variation we performed a genome-wide association study followed by replication in an independent cohort. We identified one SNP (rs11691187) that had genome-wide significant association (P Combined = 2.40e-08). The rs11691187 is in the first intron of VIT. In a follow-up analysis, we found that VIT gene expression is associated with brain asymmetry in six donors of the Allen Human Brain Atlas. Based on these findings we suggest that VIT contributes to normal brain asymmetry variation. Our results can shed light on disorders associated with altered brain asymmetry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neubauer, J.; Wood, E.
2013-05-01
This presentation discusses a method of accounting for realistic levels of driver aggression to higher-level vehicle studies, including the impact of variation in real-world driving characteristics (acceleration and speed) on vehicle energy consumption and different powertrains (e.g., conventionally powered vehicles versus electrified drive vehicles [xEVs]). Aggression variation between drivers can increase fuel consumption by more than 50% or decrease it by more than 20% from average. The normalized fuel consumption deviation from average as a function of population percentile was found to be largely insensitive to powertrain. However, the traits of ideal driving behavior are a function of powertrain. Inmore » conventional vehicles, kinetic losses dominate rolling resistance and aerodynamic losses. In xEVs with regenerative braking, rolling resistance and aerodynamic losses dominate. The relation of fuel consumption predicted from real-world drive data to that predicted by the industry-standard HWFET, UDDS, LA92, and US06 drive cycles was not consistent across powertrains, and varied broadly from the mean, median, and mode of real-world driving. A drive cycle synthesized by NREL's DRIVE tool accurately and consistently reproduces average real-world for multiple powertrains within 1%, and can be used to calculate the fuel consumption effects of varying levels of driver aggression.« less
Multidisciplinary characterization of the long-bone cortex growth patterns through sheep's ontogeny.
Cambra-Moo, Oscar; Nacarino-Meneses, Carmen; Díaz-Güemes, Idoia; Enciso, Silvia; García Gil, Orosia; Llorente Rodríguez, Laura; Rodríguez Barbero, Miguel Ángel; de Aza, Antonio H; González Martín, Armando
2015-07-01
Bone researches have studied extant and extinct taxa extensively trying to disclose a complete view of the complex structural and chemical transformations that model and remodel the macro and microstructure of bone during growth. However, to approach bone growth variations is not an easy task, and many aspects related with histological transformations during ontogeny remain unresolved. In the present study, we conduct a holistic approach using different techniques (polarized microscopy, Raman spectroscopy and X-ray diffraction) to examine the histomorphological and histochemical variations in the cortical bone of sheep specimens from intrauterine to adult stages, using environmentally controlled specimens from the same species. Our results suggest that during sheep bone development, the most important morphological (shape and size) and chemical transformations in the cortical bone occur during the first weeks of life; synchronized but dissimilar variations are established in the forelimb and hind limb cortical bone; and the patterns of bone tissue maturation in both extremities are differentiated in the adult stage. All of these results indicate that standardized histological models are useful not only for evaluating many aspects of normal bone growth but also to understand other important influences on the bones, such as pathologies that remain unknown. Copyright © 2015 Elsevier Inc. All rights reserved.
An activity index for geomagnetic paleosecular variation, excursions, and reversals
NASA Astrophysics Data System (ADS)
Panovska, S.; Constable, C. G.
2017-04-01
Magnetic indices provide quantitative measures of space weather phenomena that are widely used by researchers in geomagnetism. We introduce an index focused on the internally generated field that can be used to evaluate long term variations or climatology of modern and paleomagnetic secular variation, including geomagnetic excursions, polarity reversals, and changes in reversal rate. The paleosecular variation index, Pi, represents instantaneous or average deviation from a geocentric axial dipole field using normalized ratios of virtual geomagnetic pole colatitude and virtual dipole moment. The activity level of the index, σPi, provides a measure of field stability through the temporal standard deviation of Pi. Pi can be calculated on a global grid from geomagnetic field models to reveal large scale geographic variations in field structure. It can be determined for individual time series, or averaged at local, regional, and global scales to detect long term changes in geomagnetic activity, identify excursions, and transitional field behavior. For recent field models, Pi ranges from less than 0.05 to 0.30. Conventional definitions for geomagnetic excursions are characterized by Pi exceeding 0.5. Strong field intensities are associated with low Pi unless they are accompanied by large deviations from axial dipole field directions. σPi provides a measure of geomagnetic stability that is modulated by the level of PSV or frequency of excursional activity and reversal rate. We demonstrate uses of Pi for paleomagnetic observations and field models and show how it could be used to assess whether numerical simulations of the geodynamo exhibit Earth-like properties.
Influence of gestational age and time of day in baseline and heart rate variation of fetuses.
Li, Guangfei; Zhang, Song; Yang, Lin; Li, Shufang; Wang, Yan; Hao, Dongmei; Yang, Yimin; Li, Xuwen; Zhang, Lei; Xu, Mingzhou
2016-04-29
Fetal electrocardiography (FECG) places electrodes on the maternal abdomen to convert the fetal electrocardiosignals into fetal heart rate (FHR), improving the accuracy and comfort of pregnant woman. At the same time, FECG simplifies the procedure of long term monitoring in the perinatal period. Investigating the influence of gestational age and time of day on FHR features to distinguish between non-stress test (NST) normal fetuses and NST suspicious fetuses. A novel method of FHR baseline estimation was presented; then baseline value and fetal heart rate variation (FHRV) were analyzed in the time domain using FHR signals recorded from 52 fetuses. Baseline values in 1:00, 2:00, 4:00, 5:00 and heart rate variation (HRV) distribution showed a significant difference (p< 0.05) between NST normal fetuses and NST suspicious fetuses. The results suggest that NST normal and suspicious fetuses had same outcome and different FHR features. Accurately distinguishing normal fetuses and suspicious fetuses is important for lowering the false positive rate and reducing unnecessary clinical intervention.
Generalized Hurst exponent estimates differentiate EEG signals of healthy and epileptic patients
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2018-01-01
The aim of our current study is to check whether multifractal patterns of the electroencephalographic (EEG) signals of normal and epileptic patients are statistically similar or different. In this regard, the generalized Hurst exponent (GHE) method is used for robust estimation of the multifractals in each type of EEG signals, and three powerful statistical tests are performed to check existence of differences between estimated GHEs from healthy control subjects and epileptic patients. The obtained results show that multifractals exist in both types of EEG signals. Particularly, it was found that the degree of fractal is more pronounced in short variations of normal EEG signals than in short variations of EEG signals with seizure free intervals. In contrary, it is more pronounced in long variations of EEG signals with seizure free intervals than in normal EEG signals. Importantly, both parametric and nonparametric statistical tests show strong evidence that estimated GHEs of normal EEG signals are statistically and significantly different from those with seizure free intervals. Therefore, GHEs can be efficiently used to distinguish between healthy and patients suffering from epilepsy.
The Uniform Pattern of Growth and Skeletal Maturation during the Human Adolescent Growth Spurt.
Sanders, James O; Qiu, Xing; Lu, Xiang; Duren, Dana L; Liu, Raymond W; Dang, Debbie; Menendez, Mariano E; Hans, Sarah D; Weber, David R; Cooperman, Daniel R
2017-12-01
Humans are one of the few species undergoing an adolescent growth spurt. Because children enter the spurt at different ages making age a poor maturity measure, longitudinal studies are necessary to identify the growth patterns and identify commonalities in adolescent growth. The standard maturity determinant, peak height velocity (PHV) timing, is difficult to estimate in individuals due to diurnal, postural, and measurement variation. Using prospective longitudinal populations of healthy children from two North American populations, we compared the timing of the adolescent growth spurt's peak height velocity to normalized heights and hand skeletal maturity radiographs. We found that in healthy children, the adolescent growth spurt is standardized at 90% of final height with similar patterns for children of both sexes beginning at the initiation of the growth spurt. Once children enter the growth spurt, their growth pattern is consistent between children with peak growth at 90% of final height and skeletal maturity closely reflecting growth remaining. This ability to use 90% of final height as easily identified important maturity standard with its close relationship to skeletal maturity represents a significant advance allowing accurate prediction of future growth for individual children and accurate maturity comparisons for future studies of children's growth.
Agner, T
1992-01-01
The aim of the study was to assess the susceptibility of clinically normal skin to a standard irritant trauma under varying physiological and patophysiological conditions. Evaluation of skin responses to patch tests with sodium lauryl sulphate (SLS) was used for assessment of skin susceptibility. The following noninvasive measuring methods were used for evaluation of the skin before and after exposure to irritants: measurement of transepidermal water loss by an evaporimeter, measurement of electrical conductance by a hydrometer, measurement of skin blood flow by laser Doppler flowmetry, measurement of skin colour by a colorimeter and measurement of skin thickness by ultrasound A-scan. The studies were carried out on healthy volunteers and patients with eczema. In the first studies the standard irritant patch test for assessment of skin susceptibility was characterized and validated. SLS was chosen among other irritants because of its ability to penetrate and impair the skin barrier. The implications of use of different qualities of SLS was investigated. The applied noninvasive measuring methods were evaluated, and for quantification of SLS-induced skin damage measurement of TEWL was found to be the most sensitive method. Application of the standard test on clinically normal skin under varying physiological and patophysiological conditions lead to the following main results: Seasonal variation in skin susceptibility to SLS was found, with increased susceptibility in winter, when the hydration state of the stratum corneum was also found to be decreased. A variation in skin reactivity to SLS during the menstrual cycle was demonstrated, with an increased skin response at day 1 as compared to days 9-11 in the menstrual cycle. The presence of active eczema distant from the test site increased skin susceptibility to SLS, indicating a generalized hyperreactivity of the skin. Taking these sources of variation into account healthy volunteers and patients with hand eczema and atopic dermatits were studied and compared. In healthy volunteers increased baseline TEWL and increased light reflection from the skin, interpreted as "fair" skin, was found to be associated with increased susceptibility to SLS. Hand eczema patients were found to have fairer and thinner skin than matched controls. Increased susceptibility to SLS was found only in patients with acute eczema. Patients with atopic dermatitis had increased baseline TEWL as well as increased skin susceptibility as compared to controls. Skin susceptibility is thus influenced by individual- as well as environment-related factors. Knowledge of determinants of skin susceptibility may be useful for the identification of high-risk subjects for development of irritant contact dermatitis, and may help to prevent the formation of the disease.
Marine, Patrick M; Stabin, Michael G; Fernald, Michael J; Brill, Aaron B
2010-05-01
A systematic evaluation has been performed to study how specific absorbed fractions (SAFs) vary with changes in adult body size, for persons of different size but normal body stature. A review of the literature was performed to evaluate how individual organ sizes vary with changes in total body weight of normal-stature individuals. On the basis of this literature review, changes were made to our easily deformable reference adult male and female total-body models. Monte Carlo simulations of radiation transport were performed; SAFs for photons were generated for 10th, 25th, 75th, and 90th percentile adults; and comparisons were made to the reference (50th) percentile SAF values. Differences in SAFs for organs irradiating themselves were between 0.5% and 1.0%/kg difference in body weight, from 15% to 30% overall, for organs within the trunk. Differences in SAFs for organs outside the trunk were not greater than the uncertainties in the data and will not be important enough to change calculated doses. For organs irradiating other organs within the trunk, differences were significant, between 0.3% and 1.1%/kg, or about 8%-33% overall. The differences are interesting and can be used to estimate how different patients' dosimetry might vary from values reported in standard dose tables.
Novel microfluidic device for the continuous separation of cancer cells using dielectrophoresis.
Alazzam, Anas; Mathew, Bobby; Alhammadi, Falah
2017-03-01
We describe the design, microfabrication, and testing of a microfluidic device for the separation of cancer cells based on dielectrophoresis. Cancer cells, specifically green fluorescent protein-labeled MDA-MB-231, are successfully separated from a heterogeneous mixture of the same and normal blood cells. MDA-MB-231 cancer cells are separated with an accuracy that enables precise detection and counting of circulating tumor cells present among normal blood cells. The separation is performed using a set of planar interdigitated transducer electrodes that are deposited on the surface of a glass wafer and slightly protrude into the separation microchannel at one side. The device includes two parts, namely, a glass wafer and polydimethylsiloxane element. The device is fabricated using standard microfabrication techniques. All experiments are conducted with low conductivity sucrose-dextrose isotonic medium. The variation in response between MDA-MB-231 cancer cells and normal cells to a certain band of alternating-current frequencies is used for continuous separation of cells. The fabrication of the microfluidic device, preparation of cells and medium, and flow conditions are detailed. The proposed microdevice can be used to detect and separate malignant cells from heterogeneous mixture of cells for the purpose of early screening for cancer. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Khai Tiu, Ervin Shan; Huang, Yuk Feng; Ling, Lloyd
2018-03-01
An accurate streamflow forecasting model is important for the development of flood mitigation plan as to ensure sustainable development for a river basin. This study adopted Variational Mode Decomposition (VMD) data-preprocessing technique to process and denoise the rainfall data before putting into the Support Vector Machine (SVM) streamflow forecasting model in order to improve the performance of the selected model. Rainfall data and river water level data for the period of 1996-2016 were used for this purpose. Homogeneity tests (Standard Normal Homogeneity Test, the Buishand Range Test, the Pettitt Test and the Von Neumann Ratio Test) and normality tests (Shapiro-Wilk Test, Anderson-Darling Test, Lilliefors Test and Jarque-Bera Test) had been carried out on the rainfall series. Homogenous and non-normally distributed data were found in all the stations, respectively. From the recorded rainfall data, it was observed that Dungun River Basin possessed higher monthly rainfall from November to February, which was during the Northeast Monsoon. Thus, the monthly and seasonal rainfall series of this monsoon would be the main focus for this research as floods usually happen during the Northeast Monsoon period. The predicted water levels from SVM model were assessed with the observed water level using non-parametric statistical tests (Biased Method, Kendall's Tau B Test and Spearman's Rho Test).
Pouch, Alison M; Vergnat, Mathieu; McGarvey, Jeremy R; Ferrari, Giovanni; Jackson, Benjamin M; Sehgal, Chandra M; Yushkevich, Paul A; Gorman, Robert C; Gorman, Joseph H
2014-01-01
The basis of mitral annuloplasty ring design has progressed from qualitative surgical intuition to experimental and theoretical analysis of annular geometry with quantitative imaging techniques. In this work, we present an automated three-dimensional (3D) echocardiographic image analysis method that can be used to statistically assess variability in normal mitral annular geometry to support advancement in annuloplasty ring design. Three-dimensional patient-specific models of the mitral annulus were automatically generated from 3D echocardiographic images acquired from subjects with normal mitral valve structure and function. Geometric annular measurements including annular circumference, annular height, septolateral diameter, intercommissural width, and the annular height to intercommissural width ratio were automatically calculated. A mean 3D annular contour was computed, and principal component analysis was used to evaluate variability in normal annular shape. The following mean ± standard deviations were obtained from 3D echocardiographic image analysis: annular circumference, 107.0 ± 14.6 mm; annular height, 7.6 ± 2.8 mm; septolateral diameter, 28.5 ± 3.7 mm; intercommissural width, 33.0 ± 5.3 mm; and annular height to intercommissural width ratio, 22.7% ± 6.9%. Principal component analysis indicated that shape variability was primarily related to overall annular size, with more subtle variation in the skewness and height of the anterior annular peak, independent of annular diameter. Patient-specific 3D echocardiographic-based modeling of the human mitral valve enables statistical analysis of physiologically normal mitral annular geometry. The tool can potentially lead to the development of a new generation of annuloplasty rings that restore the diseased mitral valve annulus back to a truly normal geometry. Copyright © 2014 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
40 CFR 190.10 - Standards for normal operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Standards for the Uranium Fuel Cycle § 190.10 Standards for normal operations. Operations covered by this... radioactive materials, radon and its daughters excepted, to the general environment from uranium fuel cycle... the general environment from the entire uranium fuel cycle, per gigawatt-year of electrical energy...
Dainat, J; Rebière, A
1978-02-15
In the normal and hypothyroid 6-day-old rat, the specific radioactivity (RSA) and the relative RSA (ratio of the RSA to the [3H] lecine concentration of the acido soluble phase) of the cerebral and cerebellar proteins, changes during the day synchronally. They show a maximum at 15.00 h and a minimum at 0.300 h. At all stages studied, these values are significantly lower in the hyothyroid animals than in normal ones.
A Note on the Estimator of the Alpha Coefficient for Standardized Variables Under Normality
ERIC Educational Resources Information Center
Hayashi, Kentaro; Kamata, Akihito
2005-01-01
The asymptotic standard deviation (SD) of the alpha coefficient with standardized variables is derived under normality. The research shows that the SD of the standardized alpha coefficient becomes smaller as the number of examinees and/or items increase. Furthermore, this research shows that the degree of the dependence of the SD on the number of…
NASA Astrophysics Data System (ADS)
Young, Steven K.
As a planet orbits its parent star, the amount of light that reaches Earth from that system is dependent on the dynamics of that star system. Known as photometric variations, these slight changes in light flux are detectable by the Kepler Space Telescope and must be fully understood in order to properly model the system. There are four main factors that contribute to the photometric flux: reflected light from the planet, thermal emissions from the planet, doppler boosting in the light being emitted by the star, and ellipsoidal variations in the star. The total observed flux from each contribution then determines how much light will be seen from the star system to be used for analysis. Previous studies have normalized the photometric variation fluxes by the observed flux emitted from the star. However, normalizing data inherently and unphysically skews the result which must then be taken into account. Additionally, when the stellar flux is an unknown it is impossible to normalize the photometric variation fluxes with respect to it. This paper will preliminarily attempt to improve upon the existing studies by removing the source of the deviation for the flux results, i.e. the stellar flux. The fluxes found from each photometric variation factor will then be incorporated into EXONEST, an algorithm using Bayesian inference, that will be implemented for characterizing extrasolar systems.
Age, gender, and skeletal variation in bone marrow composition: a preliminary study at 3.0 Tesla.
Liney, Gary P; Bernard, Clare P; Manton, David J; Turnbull, Lindsay W; Langton, Chris M
2007-09-01
To evaluate the efficacy of MR Spectroscopy (MRS) at 3.0 Tesla for the assessment of normal bone marrow composition and assess the variation in terms of age, gender, and skeletal site. A total of 16 normal subjects (aged between eight and 57 years) were investigated on a 3.0 Tesla GE Signa system. To investigate axial and peripheral skeleton differences, non-water-suppressed spectra were acquired from single voxels in the calcaneus and lumbar spine. In addition, spectra were acquired at multiple vertebral bodies to assess variation within the lumbar spine. Data was also correlated with bone mineral density (BMD) measured in six subjects using dual-energy X-ray absorptiometry (DXA). Fat content was an order of magnitude greater in the heel compared to the spine. An age-related increase was demonstrated in the spine with values greater in men compared to female subjects. Significant trends in vertebral bodies within the same subjects were also shown, with fat content increasing L5 > L1. Population coefficient of variation (CV) was greater for fat fraction (FF) compared to BMD. Significant normal variations of marrow composition have been demonstrated, which provide important data for the future interpretation of patient investigations. (c) 2007 Wiley-Liss, Inc.
Contour variations of the body and tail of the pancreas: evaluation with MDCT.
Omeri, Ahmad Khalid; Matsumoto, Shunro; Kiyonaga, Maki; Takaji, Ryo; Yamada, Yasunari; Kosen, Kazuhisa; Mori, Hiromu; Miyake, Hidetoshi
2017-06-01
To analyze morphology/contour variations of the pancreatic body and tail in subjects free of pancreatic disease. We retrospectively reviewed triple-phase, contrast-enhanced multi-detector row computed tomography (3P-CE-MDCT) examinations of 449 patients who had no clinical or CT evidence of pancreatic diseases. These patients were evaluated for morphologic/contour variations of the pancreatic body and tail, which were classified into two types. In Type I, a portion of normal pancreatic parenchyma protrudes >1 cm in maximum diameter from the body or tail (Ia-anteriorly; Ib-posteriorly). Type II was defined as a morphologic anomaly of the pancreatic tail (IIa-globular; IIb-lobulated; IIc-tapered; IId-bifid). Thirty-eight (8.5%) out of 449 patients had body or tail variations. Of those, 23 patients showed Type I variant: Ia in 21 and Ib in two. Type II variant was identified in 15 patients: IIa in eight, IIb in two, IIc in two and IId in three. Protrusion of the anterior surface of the normal pancreas, especially in the tail, was the most frequently occurring variant. Recognizing the types and subtypes of morphology/contour variations of the pancreatic body and tail could help prevent misinterpretation of normal variants as pancreatic tumors on unenhanced MDCT.
Boolean Operations with Prism Algebraic Patches
Bajaj, Chandrajit; Paoluzzi, Alberto; Portuesi, Simone; Lei, Na; Zhao, Wenqi
2009-01-01
In this paper we discuss a symbolic-numeric algorithm for Boolean operations, closed in the algebra of curved polyhedra whose boundary is triangulated with algebraic patches (A-patches). This approach uses a linear polyhedron as a first approximation of both the arguments and the result. On each triangle of a boundary representation of such linear approximation, a piecewise cubic algebraic interpolant is built, using a C1-continuous prism algebraic patch (prism A-patch) that interpolates the three triangle vertices, with given normal vectors. The boundary representation only stores the vertices of the initial triangulation and their external vertex normals. In order to represent also flat and/or sharp local features, the corresponding normal-per-face and/or normal-per-edge may be also given, respectively. The topology is described by storing, for each curved triangle, the two triples of pointers to incident vertices and to adjacent triangles. For each triangle, a scaffolding prism is built, produced by its extreme vertices and normals, which provides a containment volume for the curved interpolating A-patch. When looking for the result of a regularized Boolean operation, the 0-set of a tri-variate polynomial within each such prism is generated, and intersected with the analogous 0-sets of the other curved polyhedron, when two prisms have non-empty intersection. The intersection curves of the boundaries are traced and used to decompose each boundary into the 3 standard classes of subpatches, denoted in, out and on. While tracing the intersection curves, the locally refined triangulation of intersecting patches is produced, and added to the boundary representation. PMID:21516262
Cook, Sarah F; Roberts, Jessica K; Samiee-Zafarghandy, Samira; Stockmann, Chris; King, Amber D; Deutsch, Nina; Williams, Elaine F; Allegaert, Karel; Wilkins, Diana G; Sherwin, Catherine M T; van den Anker, John N
2016-01-01
The aims of this study were to develop a population pharmacokinetic model for intravenous paracetamol in preterm and term neonates and to assess the generalizability of the model by testing its predictive performance in an external dataset. Nonlinear mixed-effects models were constructed from paracetamol concentration-time data in NONMEM 7.2. Potential covariates included body weight, gestational age, postnatal age, postmenstrual age, sex, race, total bilirubin, and estimated glomerular filtration rate. An external dataset was used to test the predictive performance of the model through calculation of bias, precision, and normalized prediction distribution errors. The model-building dataset included 260 observations from 35 neonates with a mean gestational age of 33.6 weeks [standard deviation (SD) 6.6]. Data were well-described by a one-compartment model with first-order elimination. Weight predicted paracetamol clearance and volume of distribution, which were estimated as 0.348 L/h (5.5 % relative standard error; 30.8 % coefficient of variation) and 2.46 L (3.5 % relative standard error; 14.3 % coefficient of variation), respectively, at the mean subject weight of 2.30 kg. An external evaluation was performed on an independent dataset that included 436 observations from 60 neonates with a mean gestational age of 35.6 weeks (SD 4.3). The median prediction error was 10.1 % [95 % confidence interval (CI) 6.1-14.3] and the median absolute prediction error was 25.3 % (95 % CI 23.1-28.1). Weight predicted intravenous paracetamol pharmacokinetics in neonates ranging from extreme preterm to full-term gestational status. External evaluation suggested that these findings should be generalizable to other similar patient populations.
Cook, Sarah F.; Roberts, Jessica K.; Samiee-Zafarghandy, Samira; Stockmann, Chris; King, Amber D.; Deutsch, Nina; Williams, Elaine F.; Allegaert, Karel; Sherwin, Catherine M. T.; van den Anker, John N.
2017-01-01
Objectives The aims of this study were to develop a population pharmacokinetic model for intravenous paracetamol in preterm and term neonates and to assess the generalizability of the model by testing its predictive performance in an external dataset. Methods Nonlinear mixed-effects models were constructed from paracetamol concentration–time data in NONMEM 7.2. Potential covariates included body weight, gestational age, postnatal age, postmenstrual age, sex, race, total bilirubin, and estimated glomerular filtration rate. An external dataset was used to test the predictive performance of the model through calculation of bias, precision, and normalized prediction distribution errors. Results The model-building dataset included 260 observations from 35 neonates with a mean gestational age of 33.6 weeks [standard deviation (SD) 6.6]. Data were well-described by a one-compartment model with first-order elimination. Weight predicted paracetamol clearance and volume of distribution, which were estimated as 0.348 L/h (5.5 % relative standard error; 30.8 % coefficient of variation) and 2.46 L (3.5 % relative standard error; 14.3 % coefficient of variation), respectively, at the mean subject weight of 2.30 kg. An external evaluation was performed on an independent dataset that included 436 observations from 60 neonates with a mean gestational age of 35.6 weeks (SD 4.3). The median prediction error was 10.1 % [95 % confidence interval (CI) 6.1–14.3] and the median absolute prediction error was 25.3 % (95 % CI 23.1–28.1). Conclusions Weight predicted intravenous paracetamol pharmacokinetics in neonates ranging from extreme preterm to full-term gestational status. External evaluation suggested that these findings should be generalizable to other similar patient populations. PMID:26201306
NASA Astrophysics Data System (ADS)
Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole
2017-04-01
With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support the VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance low-power FinFET standard cell library based on employing the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distribution of the device parameters and 10000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reduction of 39.1% and 30.7% at most in worst-case delay and input-dependent leakage respectively while the normalized deviation shrinking in worst-case delay and input-dependent leakage can be up to 98.37% and 24.13%, respectively, which demonstrates that our optimized cells are less sensitive to variability and exhibit more reliability. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).
Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng
2015-07-28
Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters would be used for scanning different subjects or the same subject at a different time, which may result in large intensity variations. This intensity variation will greatly undermine the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we proposed a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN),where the histogram of low-quality image as input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image will also lie between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared this with the existing intensity normalization method. It is then possible to validate that our histogram normalization framework can achieve better results in all the experiments. It is also demonstrated that the brain template with normalization preprocessing is of higher quality than the template with no normalization processing. We have proposed a histogram-based MRI intensity normalization method. The method can normalize scans which were acquired on different MRI units. We have validated that the method can greatly improve the image analysis performance. Furthermore, it is demonstrated that with the help of our normalization method, we can create a higher quality Chinese brain template.
NASA Astrophysics Data System (ADS)
Rezeau, L.; Belmont, G.; Manuzzo, R.; Aunai, N.; Dargent, J.
2018-01-01
We explore the structure of the magnetopause using a crossing observed by the Magnetospheric Multiscale (MMS) spacecraft on 16 October 2015. Several methods (minimum variance analysis, BV method, and constant velocity analysis) are first applied to compute the normal to the magnetopause considered as a whole. The different results obtained are not identical, and we show that the whole boundary is not stationary and not planar, so that basic assumptions of these methods are not well satisfied. We then analyze more finely the internal structure for investigating the departures from planarity. Using the basic mathematical definition of what is a one-dimensional physical problem, we introduce a new single spacecraft method, called LNA (local normal analysis) for determining the varying normal, and we compare the results so obtained with those coming from the multispacecraft minimum directional derivative (MDD) tool developed by Shi et al. (2005). This last method gives the dimensionality of the magnetic variations from multipoint measurements and also allows estimating the direction of the local normal when the variations are locally 1-D. This study shows that the magnetopause does include approximate one-dimensional substructures but also two- and three-dimensional structures. It also shows that the dimensionality of the magnetic variations can differ from the variations of other fields so that, at some places, the magnetic field can have a 1-D structure although all the plasma variations do not verify the properties of a global one-dimensional problem. A generalization of the MDD tool is proposed.
On the chaotic diffusion in multidimensional Hamiltonian systems
NASA Astrophysics Data System (ADS)
Cincotta, P. M.; Giordano, C. M.; Martí, J. G.; Beaugé, C.
2018-01-01
We present numerical evidence that diffusion in the herein studied multidimensional near-integrable Hamiltonian systems departs from a normal process, at least for realistic timescales. Therefore, the derivation of a diffusion coefficient from a linear fit on the variance evolution of the unperturbed integrals fails. We review some topics on diffusion in the Arnold Hamiltonian and yield numerical and theoretical arguments to show that in the examples we considered, a standard coefficient would not provide a good estimation of the speed of diffusion. However, numerical experiments concerning diffusion would provide reliable information about the stability of the motion within chaotic regions of the phase space. In this direction, we present an extension of previous results concerning the dynamical structure of the Laplace resonance in Gliese-876 planetary system considering variations of the orbital parameters accordingly to the error introduced by the radial velocity determination. We found that a slight variation of the eccentricity of planet c would destabilize the inner region of the resonance that, though chaotic, shows stable when adopting the best fit values for the parameters.
Goto, Mikako; Yasuoka, Yumi; Nagahama, Hiroyuki; Muto, Jun; Omori, Yasutaka; Ihara, Hayato; Mukai, Takahiro
2017-04-28
A significant increase in atmospheric radon concentration was observed in the area around the epicentre before and after the occurrence of the shallow inland earthquake in the northern Wakayama Prefecture on 5 July 2011 (Mj 5.5, depth 7 km) in Japan. The seismic activity in the sampling site was evaluated to identify that this earthquake was the largest near the sampling site during the observation period. To determine whether this was an anomalous change, the atmospheric daily minimum radon concentration measured for a 13-year period was analysed. When the residual radon concentration values without the seasonal radon variation and the linear trend was > 3 standard deviations of the residual radon variation corresponding to the normal period, the values were deemed as anomalous. As a result, an anomalous increase in radon concentration was determined before and after the earthquake. In conclusion, anomalous change related to earthquakes with at least Mj 5.5 can be detected by monitoring atmospheric radon near the epicentre. © The Author 2016. Published by Oxford University Press.
Crandall-Bear, Aren; Barbour, Andrew J.; Schoenball, Martin; Schoenball, Martin
2018-01-01
At the Salton Sea Geothermal Field (SSGF), strain accumulation is released through seismic slip and aseismic deformation. Earthquake activity at the SSGF often occurs in swarm-like clusters, some with clear migration patterns. We have identified an earthquake sequence composed entirely of focal mechanisms representing an ambiguous style of faulting, where strikes are similar but deformation occurs due to steeply-dipping normal faults with varied stress states. In order to more accurately determine the style of faulting for these events, we revisit the original waveforms and refine estimates of P and S wave arrival times and displacement amplitudes. We calculate the acceptable focal plane solutions using P-wave polarities and S/P amplitude ratios, and determine the preferred fault plane. Without constraints on local variations in stress, found by inverting the full earthquake catalog, it is difficult to explain the occurrence of such events using standard fault-mechanics and friction. Comparing these variations with the expected poroelastic effects from local production and injection of geothermal fluids suggests that anthropogenic activity could affect the style of faulting.
NASA Astrophysics Data System (ADS)
Yu, Huiling; Liang, Hao; Lin, Xue; Zhang, Yizhuo
2018-04-01
A nondestructive methodology is proposed to determine the modulus of elasticity (MOE) of Fraxinus mandschurica samples by using near-infrared (NIR) spectroscopy. The test data consisted of 150 NIR absorption spectra of the wood samples obtained using an NIR spectrometer, with the wavelength range of 900 to 1900 nm. To eliminate the high-frequency noise and the systematic variations on the baseline, Savitzky-Golay convolution combined with standard normal variate and detrending transformation was applied as data pretreated methods. The uninformative variable elimination (UVE), improved by the evolutionary Monte Carlo (EMC) algorithm and successive projections algorithm (SPA) selected three characteristic variables from full 117 variables. The predictive ability of the models was evaluated concerning the root-mean-square error of prediction (RMSEP) and coefficient of determination (Rp2) in the prediction set. In comparison with the predicted results of all the models established in the experiments, UVE-EMC-SPA-LS-SVM presented the best results with the smallest RMSEP of 0.652 and the highest Rp2 of 0.887. Thus, it is feasible to determine the MOE of F. mandschurica using NIR spectroscopy accurately.
Figueredo, Diego de Siqueira; Barbosa, Mayara Rodrigues; Coimbra, Daniel Gomes; Dos Santos, José Luiz Araújo; Costa, Ellyda Fernanda Lopes; Koike, Bruna Del Vechio; Alexandre Moreira, Magna Suzana; de Andrade, Tiago Gomes
2018-03-01
Recent studies have shown that transcriptomes from different tissues present circadian oscillations. Therefore, the endogenous variation of total RNA should be considered as a potential bias in circadian studies of gene expression. However, normalization strategies generally include the equalization of total RNA concentration between samples prior to cDNA synthesis. Moreover, endogenous housekeeping genes (HKGs) frequently used for data normalization may exhibit circadian variation and distort experimental results if not detected or considered. In this study, we controlled experimental conditions from the amount of initial brain tissue samples through extraction steps, cDNA synthesis, and quantitative real time PCR (qPCR) to demonstrate a circadian oscillation of total RNA concentration. We also identified that the normalization of the RNA's yield affected the rhythmic profiles of different genes, including Per1-2 and Bmal1. Five widely used HKGs (Actb, Eif2a, Gapdh, Hprt1, and B2m) also presented rhythmic variations not detected by geNorm algorithm. In addition, the analysis of exogenous microRNAs (Cel-miR-54 and Cel-miR-39) spiked during RNA extraction suggests that the yield was affected by total RNA concentration, which may impact circadian studies of small RNAs. The results indicate that the approach of tissue normalization without total RNA equalization prior to cDNA synthesis can avoid bias from endogenous broad variations in transcript levels. Also, the circadian analysis of 2 -Cycle threshold (Ct) data, without HKGs, may be an alternative for chronobiological studies under controlled experimental conditions.
Analysing the magnetopause internal structure: new possibilities offered by MMS
NASA Astrophysics Data System (ADS)
Belmont, G.; Rezeau, L.; Manuzzo, R.; Aunai, N.; Dargent, J.
2017-12-01
We explore the structure of the magnetopause using a crossing observed by the MMS spacecraft on October 16th, 2015. Several methods (MVA, BV, CVA) are first applied to compute the normal to the magnetopause considered as a whole. The different results obtained are not identical and we show that the whole boundary is not stationary and not planar, so that basic assumptions of these methods are not well satisfied. We then analyse more finely the internal structure for investigating the departures from planarity. Using the basic mathematical definition of what is a one-dimensional physical problem, we introduce a new method, called LNA (Local Normal Analysis) for determining the varying normal, and we compare the results so obtained with those coming from the MDD tool developed by [Shi et al., 2005]. This method gives the dimensionality of the magnetic variations from multi-point measurements and allows estimating the direction of the local normal using the magnetic field. On the other hand, LNA is a single-spacecraft method which gives the local normal from the magnetic field and particle data. This study shows that the magnetopause does include approximate one-dimensional sub-structures but also two and three dimensional intervals. It also shows that the dimensionality of the magnetic variations can differ from the variations of the other fields so that, at some places, the magnetic field can have a 1D structure although all the plasma variations do not verify the properties of a global one-dimensional problem. Finally a generalisation and a systematic application of the MDD method to the physical quantities of interest is shown.
NASA Astrophysics Data System (ADS)
Duraipandian, Shiyamala; Zheng, Wei; Ng, Joseph; Low, Jeffrey J. H.; Ilancheran, A.; Huang, Zhiwei
2012-03-01
Raman spectroscopy is a unique analytical probe for molecular vibration and is capable of providing specific spectroscopic fingerprints of molecular compositions and structures of biological tissues. The aim of this study is to improve the classification accuracy of cervical precancer by characterizing the variations in the normal high wavenumber (HW - 2800-3700cm-1) Raman spectra arising from the menopausal status of the cervix. A rapidacquisition near-infrared (NIR) Raman spectroscopic system was used for in vivo tissue Raman measurements at 785 nm excitation. Individual HW Raman spectrum was measured with a 5s exposure time from both normal and precancer tissue sites of 15 patients recruited. The acquired Raman spectra were stratified based on the menopausal status of the cervix before the data analysis. Significant differences were noticed in Raman intensities of prominent band at 2924 cm-1 (CH3 stretching of proteins) and the broad water Raman band (in the 3100-3700 cm-1 range) with a peak at 3390 cm-1 in normal and dysplasia cervical tissue sites. Multivariate diagnostic decision algorithm based on principal component analysis (PCA) and linear discriminant analysis (LDA) was utilized to successfully differentiate the normal and precancer cervical tissue sites. By considering the variations in the Raman spectra of normal cervix due to the hormonal or menopausal status of women, the diagnostic accuracy was improved from 71 to 91%. By incorporating these variations prior to tissue classification, we can significantly improve the accuracy of cervical precancer detection using HW Raman spectroscopy.
Qualitative analysis of mycotoxins using micellar electrokinetic capillary chromatography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, R.D.; Sepaniak, M.J.
1993-05-01
Naturally occurring mycotoxins are separated using micellar electrokinetic capillary chromatography. Trends in the retention of these toxins, resulting from changes in mobile-phase composition and pH, are reported and presented as a means of alleviating coelution problems. Two sets of mobile-phase conditions are determined that provide unique separation selectivity. The facile manner by which mobile-phase conditions can be altered, without changes in instrumental configuration, allowed the acquisition of two distinctive, fully resolved chromatograms of 10 mycotoxins in a period of approximately 45 min. By adjusting retention times, using indigenous or added components in mycotoxin samples as normalization standards, it is possiblemore » to obtain coefficients of variation in retention time that average less than 1%. The qualitative capabilities of this methodology are evaluated by separating randomly generated mycotoxin-interferent mixtures. In this study, the utilization of normalized retention times applied to separations obtained with two sets of mobile-phase conditions permitted the identification of all the mycotoxins in five unknown samples without any misidentifications. 24 refs., 3 figs., 2 tabs.« less
A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield
NASA Astrophysics Data System (ADS)
Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan
2018-04-01
In this paper, we propose a hybrid model which is a combination of multiple linear regression model and fuzzy c-means method. This research involved a relationship between 20 variates of the top soil that are analyzed prior to planting of paddy yields at standard fertilizer rates. Data used were from the multi-location trials for rice carried out by MARDI at major paddy granary in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using multiple linear regression model and a combination of multiple linear regression model and fuzzy c-means method. Analysis of normality and multicollinearity indicate that the data is normally scattered without multicollinearity among independent variables. Analysis of fuzzy c-means cluster the yield of paddy into two clusters before the multiple linear regression model can be used. The comparison between two method indicate that the hybrid of multiple linear regression model and fuzzy c-means method outperform the multiple linear regression model with lower value of mean square error.
Magenes, G; Bellazzi, R; Malovini, A; Signorini, M G
2016-08-01
The onset of fetal pathologies can be screened during pregnancy by means of Fetal Heart Rate (FHR) monitoring and analysis. Noticeable advances in understanding FHR variations were obtained in the last twenty years, thanks to the introduction of quantitative indices extracted from the FHR signal. This study searches for discriminating Normal and Intra Uterine Growth Restricted (IUGR) fetuses by applying data mining techniques to FHR parameters, obtained from recordings in a population of 122 fetuses (61 healthy and 61 IUGRs), through standard CTG non-stress test. We computed N=12 indices (N=4 related to time domain FHR analysis, N=4 to frequency domain and N=4 to non-linear analysis) and normalized them with respect to the gestational week. We compared, through a 10-fold crossvalidation procedure, 15 data mining techniques in order to select the more reliable approach for identifying IUGR fetuses. The results of this comparison highlight that two techniques (Random Forest and Logistic Regression) show the best classification accuracy and that both outperform the best single parameter in terms of mean AUROC on the test sets.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the Contract Work Hours and Safety Standards Act. 5.15 Section 5.15 Labor Office of the Secretary of... WORK HOURS AND SAFETY STANDARDS ACT) Davis-Bacon and Related Acts Provisions and Procedures § 5.15 Limitations, variations, tolerances, and exemptions under the Contract Work Hours and Safety Standards Act. (a...
Patient-specific dose estimation for pediatric chest CT
Li, Xiang; Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Frush, Donald P.
2008-01-01
Current methods for organ and effective dose estimations in pediatric CT are largely patient generic. Physical phantoms and computer models have only been developed for standard/limited patient sizes at discrete ages (e.g., 0, 1, 5, 10, 15years old) and do not reflect the variability of patient anatomy and body habitus within the same size/age group. In this investigation, full-body computer models of seven pediatric patients in the same size/protocol group (weight: 11.9–18.2kg) were created based on the patients’ actual multi-detector array CT (MDCT) data. Organs and structures in the scan coverage were individually segmented. Other organs and structures were created by morphing existing adult models (developed from visible human data) to match the framework defined by the segmented organs, referencing the organ volume and anthropometry data in ICRP Publication 89. Organ and effective dose of these patients from a chest MDCT scan protocol (64 slice LightSpeed VCT scanner, 120kVp, 70 or 75mA, 0.4s gantry rotation period, pitch of 1.375, 20mm beam collimation, and small body scan field-of-view) was calculated using a Monte Carlo program previously developed and validated to simulate radiation transport in the same CT system. The seven patients had normalized effective dose of 3.7–5.3mSv∕100mAs (coefficient of variation: 10.8%). Normalized lung dose and heart dose were 10.4–12.6mGy∕100mAs and 11.2–13.3mGy∕100mAs, respectively. Organ dose variations across the patients were generally small for large organs in the scan coverage (<7%), but large for small organs in the scan coverage (9%–18%) and for partially or indirectly exposed organs (11%–77%). Normalized effective dose correlated weakly with body weight (correlation coefficient:r=−0.80). Normalized lung dose and heart dose correlated strongly with mid-chest equivalent diameter (lung: r=−0.99, heart: r=−0.93); these strong correlation relationships can be used to estimate patient-specific organ dose for any other patient in the same size/protocol group who undergoes the chest scan. In summary, this work reported the first assessment of dose variations across pediatric CT patients in the same size/protocol group due to the variability of patient anatomy and body habitus and provided a previously unavailable method for patient-specific organ dose estimation, which will help in assessing patient risk and optimizing dose reduction strategies, including the development of scan protocols. PMID:19175138
The application of the sinusoidal model to lung cancer patient respiratory motion
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, R.; Vedam, S.S.; Chung, T.D.
2005-09-15
Accurate modeling of the respiratory cycle is important to account for the effect of organ motion on dose calculation for lung cancer patients. The aim of this study is to evaluate the accuracy of a respiratory model for lung cancer patients. Lujan et al. [Med. Phys. 26(5), 715-720 (1999)] proposed a model, which became widely used, to describe organ motion due to respiration. This model assumes that the parameters do not vary between and within breathing cycles. In this study, first, the correlation of respiratory motion traces with the model f(t) as a function of the parameter n(n=1,2,3) was undertakenmore » for each breathing cycle from 331 four-minute respiratory traces acquired from 24 lung cancer patients using three breathing types: free breathing, audio instruction, and audio-visual biofeedback. Because cos{sup 2} and cos{sup 4} had similar correlation coefficients, and cos{sup 2} and cos{sup 1} have a trigonometric relationship, for simplicity, the cos{sup 1} value was consequently used for further analysis in which the variations in mean position (z{sub 0}), amplitude of motion (b) and period ({tau}) with and without biofeedback or instructions were investigated. For all breathing types, the parameter values, mean position (z{sub 0}), amplitude of motion (b), and period ({tau}) exhibited significant cycle-to-cycle variations. Audio-visual biofeedback showed the least variations for all three parameters (z{sub 0}, b, and {tau}). It was found that mean position (z{sub 0}) could be approximated with a normal distribution, and the amplitude of motion (b) and period ({tau}) could be approximated with log normal distributions. The overall probability density function (pdf) of f(t) for each of the three breathing types was fitted with three models: normal, bimodal, and the pdf of a simple harmonic oscillator. It was found that the normal and the bimodal models represented the overall respiratory motion pdfs with correlation values from 0.95 to 0.99, whereas the range of the simple harmonic oscillator pdf correlation values was 0.71 to 0.81. This study demonstrates that the pdfs of mean position (z{sub 0}), amplitude of motion (b), and period ({tau}) can be used for sampling to obtain more realistic respiratory traces. The overall standard deviations of respiratory motion were 0.48, 0.57, and 0.55 cm for free breathing, audio instruction, and audio-visual biofeedback, respectively.« less
Patient-specific dose estimation for pediatric chest CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Xiang; Samei, Ehsan; Segars, W. Paul
2008-12-15
Current methods for organ and effective dose estimations in pediatric CT are largely patient generic. Physical phantoms and computer models have only been developed for standard/limited patient sizes at discrete ages (e.g., 0, 1, 5, 10, 15 years old) and do not reflect the variability of patient anatomy and body habitus within the same size/age group. In this investigation, full-body computer models of seven pediatric patients in the same size/protocol group (weight: 11.9-18.2 kg) were created based on the patients' actual multi-detector array CT (MDCT) data. Organs and structures in the scan coverage were individually segmented. Other organs and structuresmore » were created by morphing existing adult models (developed from visible human data) to match the framework defined by the segmented organs, referencing the organ volume and anthropometry data in ICRP Publication 89. Organ and effective dose of these patients from a chest MDCT scan protocol (64 slice LightSpeed VCT scanner, 120 kVp, 70 or 75 mA, 0.4 s gantry rotation period, pitch of 1.375, 20 mm beam collimation, and small body scan field-of-view) was calculated using a Monte Carlo program previously developed and validated to simulate radiation transport in the same CT system. The seven patients had normalized effective dose of 3.7-5.3 mSv/100 mAs (coefficient of variation: 10.8%). Normalized lung dose and heart dose were 10.4-12.6 mGy/100 mAs and 11.2-13.3 mGy/100 mAs, respectively. Organ dose variations across the patients were generally small for large organs in the scan coverage (<7%), but large for small organs in the scan coverage (9%-18%) and for partially or indirectly exposed organs (11%-77%). Normalized effective dose correlated weakly with body weight (correlation coefficient: r=-0.80). Normalized lung dose and heart dose correlated strongly with mid-chest equivalent diameter (lung: r=-0.99, heart: r=-0.93); these strong correlation relationships can be used to estimate patient-specific organ dose for any other patient in the same size/protocol group who undergoes the chest scan. In summary, this work reported the first assessment of dose variations across pediatric CT patients in the same size/protocol group due to the variability of patient anatomy and body habitus and provided a previously unavailable method for patient-specific organ dose estimation, which will help in assessing patient risk and optimizing dose reduction strategies, including the development of scan protocols.« less
Ferrero, Alejandro; Rabal, Ana María; Campos, Joaquín; Pons, Alicia; Hernanz, María Luisa
2012-12-20
A study on the variation of the spectral bidirectional reflectance distribution function (BRDF) of four diffuse reflectance standards (matte ceramic, BaSO(4), Spectralon, and white Russian opal glass) is accomplished through this work. Spectral BRDF measurements were carried out and, using principal components analysis, its spectral and geometrical variation respect to a reference geometry was assessed from the experimental data. Several descriptors were defined in order to compare the spectral BRDF variation of the four materials.
Apparent migration of implantable port devices: normal variations in consideration of BMI.
Wyschkon, Sebastian; Löschmann, Jan-Phillip; Scheurig-Münkler, Christian; Nagel, Sebastian; Hamm, Bernd; Elgeti, Thomas
2016-01-01
To evaluate the extent of normal variation in implantable port devices between supine fluoroscopy and upright chest x-ray in relation to body mass index (BMI) based on three different measurement methods. Retrospectively, 80 patients with implanted central venous access port systems from 2012-01-01 until 2013-12-31 were analyzed. Three parameters (two quantitative and one semi-quantitative) were determined to assess port positions: projection of port capsule to anterior ribs (PCP) and intercostal spaces, ratio of extra- and intravascular catheter portions (EX/IV), normalized distance of catheter tip to carina (nCTCD). Changes were analyzed for males and females and normal-weight and overweight patients using analysis of variance with Bonferroni-corrected pairwise comparison. PCP revealed significantly greater changes in chest x-rays in overweight women than in the other groups (p<0.001, F-test). EX/IV showed a significantly higher increase in overweight women than normal-weight women and men and overweight men (p<0.001). nCTCD showed a significantly greater increase in overweight women than overweight men (p = 0.0130). There were no significant differences between the other groups. Inter- and intra-observer reproducibility was high (Cronbach alpha of 0.923-1.0) and best for EX/IV. Central venous port systems show wide normal variations in the projection of catheter tip and port capsule. In overweight women apparent catheter migration is significantly greater compared with normal-weight women and with men. The measurement of EX/IV and PCP are straightforward methods, quick to perform, and show higher reproducibility than measurement of catheter tip-to-carina distance.
Anatomical Variations of the Circulus Arteriosus in Cadaveric Human Brains
Gunnal, S. A.; Farooqui, M. S.; Wabale, R. N.
2014-01-01
Objective. Circulus arteriosus/circle of Willis (CW) is a polygonal anastomotic channel at the base of the brain which unites the internal carotid and vertebrobasilar system. It maintains the steady and constant supply to the brain. The variations of CW are seen often. The Aim of the present work is to find out the percentage of normal pattern of CW, and the frequency of variations of the CW and to study the morphological and morphometric aspects of all components of CW. Methods. Circulus arteriosus of 150 formalin preserved brains were dissected. Dimensions of all the components forming circles were measured. Variations of all the segments were noted and well photographed. The variations such as aplasia, hypoplasia, duplication, fenestrations, and difference in dimensions with opposite segments were noted. The data collected in the study was analyzed. Results. Twenty-one different types of CW were found in the present study. Normal and complete CW was found in 60%. CW with gross morphological variations was seen in 40%. Maximum variations were seen in the PCoA followed by the ACoA in 50% and 40%, respectively. Conclusion. As it confirms high percentage of variations, all surgical interventions should be preceded by angiography. Awareness of these anatomical variations is important in neurovascular procedures. PMID:24891951
Mirzaei, Hamid; Brusniak, Mi-Youn; Mueller, Lukas N; Letarte, Simon; Watts, Julian D; Aebersold, Ruedi
2009-08-01
As the application for quantitative proteomics in the life sciences has grown in recent years, so has the need for more robust and generally applicable methods for quality control and calibration. The reliability of quantitative proteomics is tightly linked to the reproducibility and stability of the analytical platforms, which are typically multicomponent (e.g. sample preparation, multistep separations, and mass spectrometry) with individual components contributing unequally to the overall system reproducibility. Variations in quantitative accuracy are thus inevitable, and quality control and calibration become essential for the assessment of the quality of the analyses themselves. Toward this end, the use of internal standards cannot only assist in the detection and removal of outlier data acquired by an irreproducible system (quality control) but can also be used for detection of changes in instruments for their subsequent performance and calibration. Here we introduce a set of halogenated peptides as internal standards. The peptides are custom designed to have properties suitable for various quality control assessments, data calibration, and normalization processes. The unique isotope distribution of halogenated peptides makes their mass spectral detection easy and unambiguous when spiked into complex peptide mixtures. In addition, they were designed to elute sequentially over an entire aqueous to organic LC gradient and to have m/z values within the commonly scanned mass range (300-1800 Da). In a series of experiments in which these peptides were spiked into an enriched N-glycosite peptide fraction (i.e. from formerly N-glycosylated intact proteins in their deglycosylated form) isolated from human plasma, we show the utility and performance of these halogenated peptides for sample preparation and LC injection quality control as well as for retention time and mass calibration. Further use of the peptides for signal intensity normalization and retention time synchronization for selected reaction monitoring experiments is also demonstrated.
[Quantification of acetabular coverage in normal adult].
Lin, R M; Yang, C Y; Yu, C Y; Yang, C R; Chang, G L; Chou, Y L
1991-03-01
Quantification of acetabular coverage is important and can be expressed by superimposition of cartilage tracings on the maximum cross-sectional area of the femoral head. A practical Autolisp program on PC AutoCAD has been developed by us to quantify the acetabular coverage through numerical expression of the images of computed tomography. Thirty adults (60 hips) with normal center-edge angle and acetabular index in plain X ray were randomly selected for serial drops. These slices were prepared with a fixed coordination and in continuous sections of 5 mm in thickness. The contours of the cartilage of each section were digitized into a PC computer and processed by AutoCAD programs to quantify and characterize the acetabular coverage of normal and dysplastic adult hips. We found that a total coverage ratio of greater than 80%, an anterior coverage ratio of greater than 75% and a posterior coverage ratio of greater than 80% can be categorized in a normal group. Polar edge distance is a good indicator for the evaluation of preoperative and postoperative coverage conditions. For standardization and evaluation of acetabular coverage, the most suitable parameters are the total coverage ratio, anterior coverage ratio, posterior coverage ratio and polar edge distance. However, medial coverage and lateral coverage ratios are indispensable in cases of dysplastic hip because variations between them are so great that acetabuloplasty may be impossible. This program can also be used to classify precisely the type of dysplastic hip.
Molenaar, Peter C M
2008-01-01
It is argued that general mathematical-statistical theorems imply that standard statistical analysis techniques of inter-individual variation are invalid to investigate developmental processes. Developmental processes have to be analyzed at the level of individual subjects, using time series data characterizing the patterns of intra-individual variation. It is shown that standard statistical techniques based on the analysis of inter-individual variation appear to be insensitive to the presence of arbitrary large degrees of inter-individual heterogeneity in the population. An important class of nonlinear epigenetic models of neural growth is described which can explain the occurrence of such heterogeneity in brain structures and behavior. Links with models of developmental instability are discussed. A simulation study based on a chaotic growth model illustrates the invalidity of standard analysis of inter-individual variation, whereas time series analysis of intra-individual variation is able to recover the true state of affairs. (c) 2007 Wiley Periodicals, Inc.
On Teaching about the Coefficient of Variation in Introductory Statistics Courses
ERIC Educational Resources Information Center
Trafimow, David
2014-01-01
The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
ERIC Educational Resources Information Center
Zu, Jiyun; Yuan, Ke-Hai
2012-01-01
In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…
Cross-sectional imaging of congenital and acquired abnormalities of the portal venous system
Özbayrak, Mustafa; Tatlı, Servet
2016-01-01
Knowing the normal anatomy, variations, congenital and acquired pathologies of the portal venous system are important, especially when planning liver surgery and percutaneous interventional procedures. The portal venous system pathologies can be congenital such as agenesis of portal vein (PV) or can be involved by other hepatic disorders such as cirrhosis and malignancies. In this article, we present normal anatomy, variations, and acquired pathologies involving the portal venous system as seen on computed tomography (CT) and magnetic resonance imaging (MRI). PMID:27731302
NASA Astrophysics Data System (ADS)
Oshtrakh, M. I.; Alenkina, I. V.; Vinogradov, A. V.; Konstantinova, T. S.; Semionkin, V. A.
2015-04-01
Study of human spleen and liver tissues from healthy persons and two patients with mantle cell lymphoma and acute myeloid leukemia was carried out using Mössbauer spectroscopy with a high velocity resolution. Small variations in the 57Fe hyperfine parameters for normal and patient's tissues were detected and related to small variations in the 57Fe local microenvironment in ferrihydrite cores. The differences in the relative parts of more crystalline and more amorphous core regions were also supposed for iron storage proteins in normal and patients' spleen and liver tissues.
Stochastic Geometric Network Models for Groups of Functional and Structural Connectomes
Friedman, Eric J.; Landsberg, Adam S.; Owen, Julia P.; Li, Yi-Ou; Mukherjee, Pratik
2014-01-01
Structural and functional connectomes are emerging as important instruments in the study of normal brain function and in the development of new biomarkers for a variety of brain disorders. In contrast to single-network studies that presently dominate the (non-connectome) network literature, connectome analyses typically examine groups of empirical networks and then compare these against standard (stochastic) network models. Current practice in connectome studies is to employ stochastic network models derived from social science and engineering contexts as the basis for the comparison. However, these are not necessarily best suited for the analysis of connectomes, which often contain groups of very closely related networks, such as occurs with a set of controls or a set of patients with a specific disorder. This paper studies important extensions of standard stochastic models that make them better adapted for analysis of connectomes, and develops new statistical fitting methodologies that account for inter-subject variations. The extensions explicitly incorporate geometric information about a network based on distances and inter/intra hemispherical asymmetries (to supplement ordinary degree-distribution information), and utilize a stochastic choice of networks' density levels (for fixed threshold networks) to better capture the variance in average connectivity among subjects. The new statistical tools introduced here allow one to compare groups of networks by matching both their average characteristics and the variations among them. A notable finding is that connectomes have high “smallworldness” beyond that arising from geometric and degree considerations alone. PMID:25067815
Khan, Jenna; Lieberman, Joshua A; Lockwood, Christina M
2017-05-01
microRNAs (miRNAs) hold promise as biomarkers for a variety of disease processes and for determining cell differentiation. These short RNA species are robust, survive harsh treatment and storage conditions and may be extracted from blood and tissue. Pre-analytical variables are critical confounders in the analysis of miRNAs: we elucidate these and identify best practices for minimizing sample variation in blood and tissue specimens. Pre-analytical variables addressed include patient-intrinsic variation, time and temperature from sample collection to storage or processing, processing methods, contamination by cells and blood components, RNA extraction method, normalization, and storage time/conditions. For circulating miRNAs, hemolysis and blood cell contamination significantly affect profiles; samples should be processed within 2 h of collection; ethylene diamine tetraacetic acid (EDTA) is preferred while heparin should be avoided; samples should be "double spun" or filtered; room temperature or 4 °C storage for up to 24 h is preferred; miRNAs are stable for at least 1 year at -20 °C or -80 °C. For tissue-based analysis, warm ischemic time should be <1 h; cold ischemic time (4 °C) <24 h; common fixative used for all specimens; formalin fix up to 72 h prior to processing; enrich for cells of interest; validate candidate biomarkers with in situ visualization. Most importantly, all specimen types should have standard and common workflows with careful documentation of relevant pre-analytical variables.
Lognormal Kalman filter for assimilating phase space density data in the radiation belts
NASA Astrophysics Data System (ADS)
Kondrashov, D.; Ghil, M.; Shprits, Y.
2011-11-01
Data assimilation combines a physical model with sparse observations and has become an increasingly important tool for scientists and engineers in the design, operation, and use of satellites and other high-technology systems in the near-Earth space environment. Of particular importance is predicting fluxes of high-energy particles in the Van Allen radiation belts, since these fluxes can damage spaceborne platforms and instruments during strong geomagnetic storms. In transiting from a research setting to operational prediction of these fluxes, improved data assimilation is of the essence. The present study is motivated by the fact that phase space densities (PSDs) of high-energy electrons in the outer radiation belt—both simulated and observed—are subject to spatiotemporal variations that span several orders of magnitude. Standard data assimilation methods that are based on least squares minimization of normally distributed errors may not be adequate for handling the range of these variations. We propose herein a modification of Kalman filtering that uses a log-transformed, one-dimensional radial diffusion model for the PSDs and includes parameterized losses. The proposed methodology is first verified on model-simulated, synthetic data and then applied to actual satellite measurements. When the model errors are sufficiently smaller then observational errors, our methodology can significantly improve analysis and prediction skill for the PSDs compared to those of the standard Kalman filter formulation. This improvement is documented by monitoring the variance of the innovation sequence.
Automated lung volumetry from routine thoracic CT scans: how reliable is the result?
Haas, Matthias; Hamm, Bernd; Niehues, Stefan M
2014-05-01
Today, lung volumes can be easily calculated from chest computed tomography (CT) scans. Modern postprocessing workstations allow automated volume measurement of data sets acquired. However, there are challenges in the use of lung volume as an indicator of pulmonary disease when it is obtained from routine CT. Intra-individual variation and methodologic aspects have to be considered. Our goal was to assess the reliability of volumetric measurements in routine CT lung scans. Forty adult cancer patients whose lungs were unaffected by the disease underwent routine chest CT scans in 3-month intervals, resulting in a total number of 302 chest CT scans. Lung volume was calculated by automatic volumetry software. On average of 7.2 CT scans were successfully evaluable per patient (range 2-15). Intra-individual changes were assessed. In the set of patients investigated, lung volume was approximately normally distributed, with a mean of 5283 cm(3) (standard deviation = 947 cm(3), skewness = -0.34, and curtosis = 0.16). Between different scans in one and the same patient the median intra-individual standard deviation in lung volume was 853 cm(3) (16% of the mean lung volume). Automatic lung segmentation of routine chest CT scans allows a technically stable estimation of lung volume. However, substantial intra-individual variations have to be considered. A median intra-individual deviation of 16% in lung volume between different routine scans was found. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Variation of gene expression in Bacillus subtilis samples of fermentation replicates.
Zhou, Ying; Yu, Wen-Bang; Ye, Bang-Ce
2011-06-01
The application of comprehensive gene expression profiling technologies to compare wild and mutated microorganism samples or to assess molecular differences between various treatments has been widely used. However, little is known about the normal variation of gene expression in microorganisms. In this study, an Agilent customized microarray representing 4,106 genes was used to quantify transcript levels of five-repeated flasks to assess normal variation in Bacillus subtilis gene expression. CV analysis and analysis of variance were employed to investigate the normal variance of genes and the components of variance, respectively. The results showed that above 80% of the total variation was caused by biological variance. For the 12 replicates, 451 of 4,106 genes exhibited variance with CV values over 10%. The functional category enrichment analysis demonstrated that these variable genes were mainly involved in cell type differentiation, cell type localization, cell cycle and DNA processing, and spore or cyst coat. Using power analysis, the minimal biological replicate number for a B. subtilis microarray experiment was determined to be six. The results contribute to the definition of the baseline level of variability in B. subtilis gene expression and emphasize the importance of replicate microarray experiments.
A new ionospheric storm scale based on TEC and foF2 statistics
NASA Astrophysics Data System (ADS)
Nishioka, Michi; Tsugawa, Takuya; Jin, Hidekatsu; Ishii, Mamoru
2017-01-01
In this paper, we propose the I-scale, a new ionospheric storm scale for general users in various regions in the world. With the I-scale, ionospheric storms can be classified at any season, local time, and location. Since the ionospheric condition largely depends on many factors such as solar irradiance, energy input from the magnetosphere, and lower atmospheric activity, it had been difficult to scale ionospheric storms, which are mainly caused by solar and geomagnetic activities. In this study, statistical analysis was carried out for total electron content (TEC) and F2 layer critical frequency (foF2) in Japan for 18 years from 1997 to 2014. Seasonal, local time, and latitudinal dependences of TEC and foF2 variabilities are excluded by normalizing each percentage variation using their statistical standard deviations. The I-scale is defined by setting thresholds to the normalized numbers to seven categories: I0, IP1, IP2, IP3, IN1, IN2, and IN3. I0 represents a quiet state, and IP1 (IN1), IP2 (IN2), and IP3 (IN3) represent moderate, strong, and severe positive (negative) storms, respectively. The proposed I-scale can be used for other locations, such as polar and equatorial regions. It is considered that the proposed I-scale can be a standardized scale to help the users to assess the impact of space weather on their systems.
Horsky, Monika; Irrgeher, Johanna; Prohaska, Thomas
2016-01-01
This paper critically reviews the state-of-the-art of isotope amount ratio measurements by solution-based multi-collector inductively coupled plasma mass spectrometry (MC ICP-MS) and presents guidelines for corresponding data reduction strategies and uncertainty assessments based on the example of n((87)Sr)/n((86)Sr) isotope ratios. This ratio shows variation attributable to natural radiogenic processes and mass-dependent fractionation. The applied calibration strategies can display these differences. In addition, a proper statement of uncertainty of measurement, including all relevant influence quantities, is a metrological prerequisite. A detailed instructive procedure for the calculation of combined uncertainties is presented for Sr isotope amount ratios using three different strategies of correction for instrumental isotopic fractionation (IIF): traditional internal correction, standard-sample bracketing, and a combination of both, using Zr as internal standard. Uncertainties are quantified by means of a Kragten spreadsheet approach, including the consideration of correlations between individual input parameters to the model equation. The resulting uncertainties are compared with uncertainties obtained from the partial derivatives approach and Monte Carlo propagation of distributions. We obtain relative expanded uncertainties (U rel; k = 2) of n((87)Sr)/n((86)Sr) of < 0.03 %, when normalization values are not propagated. A comprehensive propagation, including certified values and the internal normalization ratio in nature, increases relative expanded uncertainties by about factor two and the correction for IIF becomes the major contributor.
Extrahepatic arteries of the human liver - anatomical variants and surgical relevancies.
Németh, Károly; Deshpande, Rahul; Máthé, Zoltán; Szuák, András; Kiss, Mátyás; Korom, Csaba; Nemeskéri, Ágnes; Kóbori, László
2015-10-01
The purpose of our study was to investigate the anatomical variations of the extrahepatic arterial structures of the liver with particular attention to rare variations and their potential impact on liver surgery. A total of 50 human abdominal organ complexes were used to prepare corrosion casts. A multicomponent resin mixture was injected into the abdominal aorta. The portal vein was injected with a different colored resin in 16 cases. Digestion of soft tissues was achieved using cc. KOH solution at 60-65 °C. Extrahepatic arterial variations were classified according to Michels. All specimens underwent 3D volumetric CT reconstruction. Normal anatomy was seen in 42% of cases, and variants were seen in the other 58%. No Michels type VI or X variations were present; however, in 18% of cases the extrahepatic arterial anatomy did not fit into Michels' classification. We report four new extrahepatic arterial variations. In contrast to the available data, normal anatomy was found much less frequently, whereas the prevalence of unclassified arterial variations was higher. We detected four previously unknown variations. Our data may contribute to the reduction of complications during surgical and radiological interventions in the upper abdomen. © 2015 Steunstichting ESOT.
The normalization of deviance in healthcare delivery
Banja, John
2009-01-01
Many serious medical errors result from violations of recognized standards of practice. Over time, even egregious violations of standards of practice may become “normalized” in healthcare delivery systems. This article describes what leads to this normalization and explains why flagrant practice deviations can persist for years, despite the importance of the standards at issue. This article also provides recommendations to aid healthcare organizations in identifying and managing unsafe practice deviations before they become normalized and pose genuine risks to patient safety, quality care, and employee morale. PMID:20161685
Determination of total phenolic compounds in compost by infrared spectroscopy.
Cascant, M M; Sisouane, M; Tahiri, S; Krati, M El; Cervera, M L; Garrigues, S; de la Guardia, M
2016-06-01
Middle and near infrared (MIR and NIR) were applied to determine the total phenolic compounds (TPC) content in compost samples based on models built by using partial least squares (PLS) regression. The multiplicative scatter correction, standard normal variate and first derivative were employed as spectra pretreatment, and the number of latent variable were optimized by leave-one-out cross-validation. The performance of PLS-ATR-MIR and PLS-DR-NIR models was evaluated according to root mean square error of cross validation and prediction (RMSECV and RMSEP), the coefficient of determination for prediction (Rpred(2)) and residual predictive deviation (RPD) being obtained for this latter values of 5.83 and 8.26 for MIR and NIR, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
Isotopic Compositions of the Elements, 2001
NASA Astrophysics Data System (ADS)
Böhlke, J. K.; de Laeter, J. R.; De Bièvre, P.; Hidaka, H.; Peiser, H. S.; Rosman, K. J. R.; Taylor, P. D. P.
2005-03-01
The Commission on Atomic Weights and Isotopic Abundances of the International Union of Pure and Applied Chemistry completed its last review of the isotopic compositions of the elements as determined by isotope-ratio mass spectrometry in 2001. That review involved a critical evaluation of the published literature, element by element, and forms the basis of the table of the isotopic compositions of the elements (TICE) presented here. For each element, TICE includes evaluated data from the "best measurement" of the isotope abundances in a single sample, along with a set of representative isotope abundances and uncertainties that accommodate known variations in normal terrestrial materials. The representative isotope abundances and uncertainties generally are consistent with the standard atomic weight of the element Ar(E) and its uncertainty U[Ar(E)] recommended by CAWIA in 2001.
Analysis of communication in the standard versus automated aircraft
NASA Technical Reports Server (NTRS)
Veinott, Elizabeth S.; Irwin, Cheryl M.
1993-01-01
Past research has shown crew communication patterns to be associated with overall crew performance, recent flight experience together, low-and high-error crew performance and personality variables. However, differences in communication patterns as a function of aircraft type and level of aircraft automation have not been fully addressed. Crew communications from ten MD-88 and twelve DC-9 crews were obtained during a full-mission simulation. In addition to large differences in overall amount of communication during the normal and abnormal phases of flight (DC-9 crews generating less speech than MD-88 crews), differences in specific speech categories were also found. Log-linear analyses also generated speaker-response patterns related to each aircraft type, although in future analyses these patterns will need to account for variations due to crew performance.
Emergent dynamics of spiking neurons with fluctuating threshold
NASA Astrophysics Data System (ADS)
Bhattacharjee, Anindita; Das, M. K.
2017-05-01
Role of fluctuating threshold on neuronal dynamics is investigated. The threshold function is assumed to follow a normal probability distribution. Standard deviation of inter-spike interval of the response is computed as an indicator of irregularity in spike emission. It has been observed that, the irregularity in spiking is more if the threshold variation is more. A significant change in modal characteristics of Inter Spike Intervals (ISI) is seen to occur as a function of fluctuation parameter. Investigation is further carried out for coupled system of neurons. Cooperative dynamics of coupled neurons are discussed in view of synchronization. Total and partial synchronization regimes are depicted with the help of contour plots of synchrony measure under various conditions. Results of this investigation may provide a basis for exploring the complexities of neural communication and brain functioning.
Earthquake effects in thermal neutron variations at the high-altitude station of Northern
NASA Astrophysics Data System (ADS)
Antonova, Valentina; Chubenko, Alexandr; Kryukov, Sergey; Lutsenko, Vadim
2016-04-01
Results of study of thermal neutron variations under various space and geophysical conditions on the basis of measurements on stationary installations with high statistical accuracy are presented. Installations are located close to the fault of the earth's crust at the high-altitude station of cosmic rays (3340 m above sea level, 43.02 N, 76.56 E, 20 km from Almaty) in the mountains of Northern Tien-Shan. Responses of the most effective gelio- and geophysical events (variations of atmospheric pressure, coronal mass ejections, earthquakes) has consistently considered in the variations of the thermal neutron flux and compared with variations of high-energy neutrons (standard monitor 18NM64) of galactic origin during these periods. Coefficients of correlation were calculated between data of thermal neutron detectors and data of the neutron monitor, recording the intensity of high-energy particles. High correlation coefficients and similarity of responses to changes of space and geophysical conditions are obtained, that confirms the conclusion of the genetic connection of thermal neutrons with high-energy neutrons of galactic origin and suggests same sources of disturbances in the absence of seismic activity. Observations and analysis of experimental data during the activation of seismic activity in the vicinity of Almaty showed the frequent breakdown of the correlation between the intensity of thermal and high-energy neutrons and the absence of similarity between variations during these periods. We suppose that the additional thermal neutron flux of the lithospheric origin appears under these conditions. Method of separating of thermal neutron flux variations of the lithospheric origin from neutrons variations generated in the atmosphere by subtracting the normalized data is proposed, taking into account the conclusion that variations caused with the atmospheric and interplanetary origins in thermal neutron detectors are similar to variations of high-energy neutrons, and the probability of detecting by 18NM64 monitor of thermal neutrons is extremely low (less than 0, 01). We used it for analysis variations of thermal neutrons during earthquakes 2006-2015. The catalog of earthquakes in the vicinity of Almaty with intensity ≥ 3b, including 25 events, is composed on the basis of observations of the Kazakhstan National Data center. Experimental data of registration of thermal and high-energy neutrons (≥ 200 MeV) with duration not less than 14 days are prepared for an each event. The main statistical characteristics of experimental data are calculated and the normalization is carried out. The increase of thermal neutrons flux of the lithospheric origin during of seismic processes activation is observed for ~ 60% of events. However, before the earthquake the increase of thermal neutron flux is observed only for ~ 30-35% of events. It is shown that the amplitude of the additional thermal neutron flux from the Earth's crust is equal to 5-7% of the background level. Sometimes it reaches values of 10-12%. We propose to employ method of allocating the thermal neutron flux of the lithospheric origin for short-term prediction of earthquakes in seismoactive regions.
An Integrated Approach for RNA-seq Data Normalization.
Yang, Shengping; Mercante, Donald E; Zhang, Kun; Fang, Zhide
2016-01-01
DNA copy number alteration is common in many cancers. Studies have shown that insertion or deletion of DNA sequences can directly alter gene expression, and significant correlation exists between DNA copy number and gene expression. Data normalization is a critical step in the analysis of gene expression generated by RNA-seq technology. Successful normalization reduces/removes unwanted nonbiological variations in the data, while keeping meaningful information intact. However, as far as we know, no attempt has been made to adjust for the variation due to DNA copy number changes in RNA-seq data normalization. In this article, we propose an integrated approach for RNA-seq data normalization. Comparisons show that the proposed normalization can improve power for downstream differentially expressed gene detection and generate more biologically meaningful results in gene profiling. In addition, our findings show that due to the effects of copy number changes, some housekeeping genes are not always suitable internal controls for studying gene expression. Using information from DNA copy number, integrated approach is successful in reducing noises due to both biological and nonbiological causes in RNA-seq data, thus increasing the accuracy of gene profiling.
Sewer, Alain; Gubian, Sylvain; Kogel, Ulrike; Veljkovic, Emilija; Han, Wanjiang; Hengstermann, Arnd; Peitsch, Manuel C; Hoeng, Julia
2014-05-17
High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the "common reference design" and processed as "pseudo-single-channel". They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription-polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data suitable for reliable downstream analysis. The multi-array miRNA raw data normalization method was implemented in an R software package called ExiMiR and deposited in the Bioconductor repository.
2014-01-01
Background High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Results Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the “common reference design” and processed as “pseudo-single-channel”. They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription–polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. Conclusions Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data suitable for reliable downstream analysis. The multi-array miRNA raw data normalization method was implemented in an R software package called ExiMiR and deposited in the Bioconductor repository. PMID:24886675
NASA Astrophysics Data System (ADS)
Maes, Michael; de Meyer, Frans; Peeters, Dirk; Meltzer, Herbert; Cosyns, Paul; Schotte, Chris
1992-12-01
Recently, true seasonal variation with significant periodicities (circannual, semiannual, circatrimensual, circabimensual) and a significant meteotropism have been observed in a number of self-rated characteristics of normal man (arousal, mood, physiology and social behaviour). In order to replicate these findings, two normal controls (a married couple) were asked daily to complete a self-rating scale concerned with the characteristics mentioned above during one calendar year. By means of time series analysis, significant rhythmicities with recurrent cycles in the autorhythmometric data of all of the above characteristics were found. An important part of the variance in these characteristics was found, using multiple regression, to be related to various weather variables, such as mean atmospheric pressure, temperature, relative humidity, wind speed, minutes of sunlight/day and precipitation/day. These results support the hypothesis that temporal variations in human psychological and physiological characteristics may be dictated by the composite effects of past and present atmospheric activity.
Ogawa, Y; Wada, B; Taniguchi, K; Miyasaka, S; Imaizumi, K
2015-12-01
This study clarifies the anthropometric variations of the Japanese face by presenting large-sample population data of photo anthropometric measurements. The measurements can be used as standard reference data for the personal identification of facial images in forensic practices. To this end, three-dimensional (3D) facial images of 1126 Japanese individuals (865 male and 261 female Japanese individuals, aged 19-60 years) were acquired as samples using an already validated 3D capture system, and normative anthropometric analysis was carried out. In this anthropometric analysis, first, anthropological landmarks (22 items, i.e., entocanthion (en), alare (al), cheilion (ch), zygion (zy), gonion (go), sellion (se), gnathion (gn), labrale superius (ls), stomion (sto), labrale inferius (li)) were positioned on each 3D facial image (the direction of which had been adjusted to the Frankfort horizontal plane as the standard position for appropriate anthropometry), and anthropometric absolute measurements (19 items, i.e., bientocanthion breadth (en-en), nose breadth (al-al), mouth breadth (ch-ch), bizygomatic breadth (zy-zy), bigonial breadth (go-go), morphologic face height (se-gn), upper-lip height (ls-sto), lower-lip height (sto-li)) were exported using computer software for the measurement of a 3D digital object. Second, anthropometric indices (21 items, i.e., (se-gn)/(zy-zy), (en-en)/(al-al), (ls-li)/(ch-ch), (ls-sto)/(sto-li)) were calculated from these exported measurements. As a result, basic statistics, such as the mean values, standard deviations, and quartiles, and details of the distributions of these anthropometric results were shown. All of the results except "upper/lower lip ratio (ls-sto)/(sto-li)" were normally distributed. They were acquired as carefully as possible employing a 3D capture system and 3D digital imaging technologies. The sample of images was much larger than any Japanese sample used before for the purpose of personal identification. The measurements will be useful as standard reference data for forensic practices and as material data for future studies in this field. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Löwemark, L.; Chen, H.-F.; Yang, T.-N.; Kylander, M.; Yu, E.-F.; Hsu, Y.-W.; Lee, T.-Q.; Song, S.-R.; Jarvis, S.
2011-04-01
X-ray fluorescence (XRF) scanning of unlithified, untreated sediment cores is becoming an increasingly common method used to obtain paleoproxy data from lake records. XRF-scanning is fast and delivers high-resolution records of relative variations in the elemental composition of the sediment. However, lake sediments display extreme variations in their organic matter content, which can vary from just a few percent to well over 50%. As XRF scanners are largely insensitive to organic material in the sediment, increasing levels of organic material effectively dilute those components that can be measured, such as the lithogenic material (the closed-sum effect). Consequently, in sediments with large variations in organic material, the measured variations in an element will to a large extent mirror the changes in organic material. It is therefore necessary to normalize the elements in the lithogenic component of the sediment against a conservative element to allow changes in the input of the elements to be addressed. In this study we show that Al, which is the lightest element that can be measured using the Itrax XRF-scanner, can be used to effectively normalize the elements of the lithogenic fraction of the sediment against variations in organic content. We also show that care must be taken when choosing resolution and exposure time to ensure optimal output from the measurements.
Three dimensional steady subsonic Euler flows in bounded nozzles
NASA Astrophysics Data System (ADS)
Chen, Chao; Xie, Chunjing
The existence and uniqueness of three dimensional steady subsonic Euler flows in rectangular nozzles were obtained when prescribing normal component of momentum at both the entrance and exit. If, in addition, the normal component of the voriticity and the variation of Bernoulli's function at the entrance are both zero, then there exists a unique subsonic potential flow when the magnitude of the normal component of the momentum is less than a critical number. As the magnitude of the normal component of the momentum approaches the critical number, the associated flows converge to a subsonic-sonic flow. Furthermore, when the normal component of vorticity and the variation of Bernoulli function are both small, the existence and uniqueness of subsonic Euler flows with non-zero vorticity are established. The proof of these results is based on a new formulation for the Euler system, a priori estimate for nonlinear elliptic equations with nonlinear boundary conditions, detailed study for a linear div-curl system, and delicate estimate for the transport equations.
MacDougall, Margaret
2015-10-31
The principal aim of this study is to provide an account of variation in UK undergraduate medical assessment styles and corresponding standard setting approaches with a view to highlighting the importance of a UK national licensing exam in recognizing a common standard. Using a secure online survey system, response data were collected during the period 13 - 30 January 2014 from selected specialists in medical education assessment, who served as representatives for their respective medical schools. Assessment styles and corresponding choices of standard setting methods vary markedly across UK medical schools. While there is considerable consensus on the application of compensatory approaches, individual schools display their own nuances through use of hybrid assessment and standard setting styles, uptake of less popular standard setting techniques and divided views on norm referencing. The extent of variation in assessment and standard setting practices across UK medical schools validates the concern that there is a lack of evidence that UK medical students achieve a common standard on graduation. A national licensing exam is therefore a viable option for benchmarking the performance of all UK undergraduate medical students.
Offshore fatigue design turbulence
NASA Astrophysics Data System (ADS)
Larsen, Gunner C.
2001-07-01
Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
D'Costa, Susan; Blouin, Veronique; Broucque, Frederic; Penaud-Budloo, Magalie; François, Achille; Perez, Irene C; Le Bec, Christine; Moullier, Philippe; Snyder, Richard O; Ayuso, Eduard
2016-01-01
Clinical trials using recombinant adeno-associated virus (rAAV) vectors have demonstrated efficacy and a good safety profile. Although the field is advancing quickly, vector analytics and harmonization of dosage units are still a limitation for commercialization. AAV reference standard materials (RSMs) can help ensure product safety by controlling the consistency of assays used to characterize rAAV stocks. The most widely utilized unit of vector dosing is based on the encapsidated vector genome. Quantitative polymerase chain reaction (qPCR) is now the most common method to titer vector genomes (vg); however, significant inter- and intralaboratory variations have been documented using this technique. Here, RSMs and rAAV stocks were titered on the basis of an inverted terminal repeats (ITRs) sequence-specific qPCR and we found an artificial increase in vg titers using a widely utilized approach. The PCR error was introduced by using single-cut linearized plasmid as the standard curve. This bias was eliminated using plasmid standards linearized just outside the ITR region on each end to facilitate the melting of the palindromic ITR sequences during PCR. This new "Free-ITR" qPCR delivers vg titers that are consistent with titers obtained with transgene-specific qPCR and could be used to normalize in-house product-specific AAV vector standards and controls to the rAAV RSMs. The free-ITR method, including well-characterized controls, will help to calibrate doses to compare preclinical and clinical data in the field.
Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya
2013-01-01
Abstract Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data are model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data-analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods, if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
Morrell, Jane M; Johannisson, Anders; Dalin, Anne-Marie; Hammar, Linda; Sandebert, Thomas; Rodriguez-Martinez, Heriberto
2008-01-01
Background Artificial insemination is not as widely used in horses as in other domestic species, such as dairy cattle and pigs, partly because of the wide variation in sperm quality between stallion ejaculates and partly due to decreased fertility following the use of cooled transported spermatozoa. Furthermore, predictive tests for sperm fertilising ability are lacking. The objective of the present study was to assess sperm morphology and chromatin integrity in ejaculates obtained from 11 warmblood breeding stallions in Sweden, and to evaluate the relationship of these parameters to pregnancy rates to investigate the possibility of using these tests predictively. Methods Aliquots from fortyone ejaculates, obtained as part of the normal semen collection schedule at the Swedish National Stud, were used for morphological analysis by light microscopy, whereas thirtyseven were used for chromatin analysis (SCSA) by flow cytometry. The outcome of inseminations using these ejaculates was made available later in the same year. Results Ranges for the different parameters were as follows; normal morphology, 27–79.5%; DNA-fragmentation index (DFI), 4.8–19.0%; standard deviation of DNA fragmentation index (SD_DFI) 41.5–98.9, and mean of DNA fragmentation index (mean_DFI), 267.7–319.5. There was considerable variation among stallions, which was statistically significant for all these parameters except for mean_DFI (P < 0.001, P < 0.01, P < 0.001 and P < 0.2 respectively). There was a negative relationship between normal morphology and DFI (P < 0.05), between normal morphology and SD_DFI (P < 0.001), and between normal morphology and mean_DFI (P < 0.05). For specific defects, there was a direct relationship between the incidence of pear-shaped sperm heads and DFI (P < 0.05), and also nuclear pouches and DFI (P < 0.001), indicating that either morphological analysis or chromatin analysis was able to identify abnormalities in spermiogenesis that could compromise DNA-integrity. A positive relationship was found between normal morphology and pregnancy rate following insemination (r = 0.789; P < 0.01) and a negative relationship existed between DFI and pregnancy rate (r = -0.63; P < 0.05). Sperm motility, assessed subjectively, was not related to conception rate. Conclusion Either or both of the parameters, sperm morphology and sperm chromatin integrity, seem to be useful in predicting the fertilising ability of stallion ejaculates, particularly in determining cases of sub-fertility. PMID:18179691
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Campos, Edwin; Liu, Yangang
2014-09-17
Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy’s Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness allmore » quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the log normal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.« less
NASA Astrophysics Data System (ADS)
Lee, Hyun Jeong; Yea, Ji Woon; Oh, Se An
2015-07-01
Respiratory-gated radiation therapy (RGRT) has been used to minimize the dose to normal tissue in lung-cancer radiotherapy. The present research aims to improve the regularity of respiration in RGRT by using a video-coached respiration guiding system. In the study, 16 patients with lung cancer were evaluated. The respiration signals of the patients were measured by using a realtime position management (RPM) respiratory gating system (Varian, USA), and the patients were trained using the video-coaching respiration guiding system. The patients performed free breathing and guided breathing, and the respiratory cycles were acquired for ~5 min. Then, Microsoft Excel 2010 software was used to calculate the mean and the standard deviation for each phase. The standard deviation was computed in order to analyze the improvement in the respiratory regularity with respect to the period and the displacement. The standard deviation of the guided breathing decreased to 48.8% in the inhale peak and 24.2% in the exhale peak compared with the values for the free breathing of patient 6. The standard deviation of the respiratory cycle was found to be decreased when using the respiratory guiding system. The respiratory regularity was significantly improved when using the video-coaching respiration guiding system. Therefore, the system is useful for improving the accuracy and the efficiency of RGRT.
Koh, Y-G.; Son, J.; Kwon, S-K.; Kim, H-J.; Kang, K-T.
2017-01-01
Objectives Preservation of both anterior and posterior cruciate ligaments in total knee arthroplasty (TKA) can lead to near-normal post-operative joint mechanics and improved knee function. We hypothesised that a patient-specific bicruciate-retaining prosthesis preserves near-normal kinematics better than standard off-the-shelf posterior cruciate-retaining and bicruciate-retaining prostheses in TKA. Methods We developed the validated models to evaluate the post-operative kinematics in patient-specific bicruciate-retaining, standard off-the-shelf bicruciate-retaining and posterior cruciate-retaining TKA under gait and deep knee bend loading conditions using numerical simulation. Results Tibial posterior translation and internal rotation in patient-specific bicruciate-retaining prostheses preserved near-normal kinematics better than other standard off-the-shelf prostheses under gait loading conditions. Differences from normal kinematics were minimised for femoral rollback and internal-external rotation in patient-specific bicruciate-retaining, followed by standard off-the-shelf bicruciate-retaining and posterior cruciate-retaining TKA under deep knee bend loading conditions. Moreover, the standard off-the-shelf posterior cruciate-retaining TKA in this study showed the most abnormal performance in kinematics under gait and deep knee bend loading conditions, whereas patient-specific bicruciate-retaining TKA led to near-normal kinematics. Conclusion This study showed that restoration of the normal geometry of the knee joint in patient-specific bicruciate-retaining TKA and preservation of the anterior cruciate ligament can lead to improvement in kinematics compared with the standard off-the-shelf posterior cruciate-retaining and bicruciate-retaining TKA. Cite this article: Y-G. Koh, J. Son, S-K. Kwon, H-J. Kim, O-R. Kwon, K-T. Kang. Preservation of kinematics with posterior cruciate-, bicruciate- and patient-specific bicruciate-retaining prostheses in total knee arthroplasty by using computational simulation with normal knee model. Bone Joint Res 2017;6:557–565. DOI: 10.1302/2046-3758.69.BJR-2016-0250.R1. PMID:28947604
Strelka: accurate somatic small-variant calling from sequenced tumor-normal sample pairs.
Saunders, Christopher T; Wong, Wendy S W; Swamy, Sajani; Becq, Jennifer; Murray, Lisa J; Cheetham, R Keira
2012-07-15
Whole genome and exome sequencing of matched tumor-normal sample pairs is becoming routine in cancer research. The consequent increased demand for somatic variant analysis of paired samples requires methods specialized to model this problem so as to sensitively call variants at any practical level of tumor impurity. We describe Strelka, a method for somatic SNV and small indel detection from sequencing data of matched tumor-normal samples. The method uses a novel Bayesian approach which represents continuous allele frequencies for both tumor and normal samples, while leveraging the expected genotype structure of the normal. This is achieved by representing the normal sample as a mixture of germline variation with noise, and representing the tumor sample as a mixture of the normal sample with somatic variation. A natural consequence of the model structure is that sensitivity can be maintained at high tumor impurity without requiring purity estimates. We demonstrate that the method has superior accuracy and sensitivity on impure samples compared with approaches based on either diploid genotype likelihoods or general allele-frequency tests. The Strelka workflow source code is available at ftp://strelka@ftp.illumina.com/. csaunders@illumina.com
Yu, Alan C. L.
2010-01-01
Variation is a ubiquitous feature of speech. Listeners must take into account context-induced variation to recover the interlocutor's intended message. When listeners fail to normalize for context-induced variation properly, deviant percepts become seeds for new perceptual and production norms. In question is how deviant percepts accumulate in a systematic fashion to give rise to sound change (i.e., new pronunciation norms) within a given speech community. The present study investigated subjects' classification of /s/ and // before /a/ or /u/ spoken by a male or a female voice. Building on modern cognitive theories of autism-spectrum condition, which see variation in autism-spectrum condition in terms of individual differences in cognitive processing style, we established a significant correlation between individuals' normalization for phonetic context (i.e., whether the following vowel is /a/ or /u/) and talker voice variation (i.e., whether the talker is male or female) in speech and their “autistic” traits, as measured by the Autism Spectrum Quotient (AQ). In particular, our mixed-effect logistic regression models show that women with low AQ (i.e., the least “autistic”) do not normalize for phonetic coarticulation as much as men and high AQ women. This study provides first direct evidence that variability in human's ability to compensate for context-induced variations in speech perceptually is governed by the individual's sex and cognitive processing style. These findings lend support to the hypothesis that the systematic infusion of new linguistic variants (i.e., the deviant percepts) originate from a sub-segment of the speech community that consistently under-compensates for contextual variation in speech. PMID:20808859
Non-specific filtering of beta-distributed data.
Wang, Xinhui; Laird, Peter W; Hinoue, Toshinori; Groshen, Susan; Siegmund, Kimberly D
2014-06-19
Non-specific feature selection is a dimension reduction procedure performed prior to cluster analysis of high dimensional molecular data. Not all measured features are expected to show biological variation, so only the most varying are selected for analysis. In DNA methylation studies, DNA methylation is measured as a proportion, bounded between 0 and 1, with variance a function of the mean. Filtering on standard deviation biases the selection of probes to those with mean values near 0.5. We explore the effect this has on clustering, and develop alternate filter methods that utilize a variance stabilizing transformation for Beta distributed data and do not share this bias. We compared results for 11 different non-specific filters on eight Infinium HumanMethylation data sets, selected to span a variety of biological conditions. We found that for data sets having a small fraction of samples showing abnormal methylation of a subset of normally unmethylated CpGs, a characteristic of the CpG island methylator phenotype in cancer, a novel filter statistic that utilized a variance-stabilizing transformation for Beta distributed data outperformed the common filter of using standard deviation of the DNA methylation proportion, or its log-transformed M-value, in its ability to detect the cancer subtype in a cluster analysis. However, the standard deviation filter always performed among the best for distinguishing subgroups of normal tissue. The novel filter and standard deviation filter tended to favour features in different genome contexts; for the same data set, the novel filter always selected more features from CpG island promoters and the standard deviation filter always selected more features from non-CpG island intergenic regions. Interestingly, despite selecting largely non-overlapping sets of features, the two filters did find sample subsets that overlapped for some real data sets. We found two different filter statistics that tended to prioritize features with different characteristics, each performed well for identifying clusters of cancer and non-cancer tissue, and identifying a cancer CpG island hypermethylation phenotype. Since cluster analysis is for discovery, we would suggest trying both filters on any new data sets, evaluating the overlap of features selected and clusters discovered.
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examine the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
Hood, Donald C; Anderson, Susan C; Wall, Michael; Raza, Ali S; Kardon, Randy H
2009-09-01
Retinal nerve fiber (RNFL) thickness and visual field loss data from patients with glaucoma were analyzed in the context of a model, to better understand individual variation in structure versus function. Optical coherence tomography (OCT) RNFL thickness and standard automated perimetry (SAP) visual field loss were measured in the arcuate regions of one eye of 140 patients with glaucoma and 82 normal control subjects. An estimate of within-individual (measurement) error was obtained by repeat measures made on different days within a short period in 34 patients and 22 control subjects. A linear model, previously shown to describe the general characteristics of the structure-function data, was extended to predict the variability in the data. For normal control subjects, between-individual error (individual differences) accounted for 87% and 71% of the total variance in OCT and SAP measures, respectively. SAP within-individual error increased and then decreased with increased SAP loss, whereas OCT error remained constant. The linear model with variability (LMV) described much of the variability in the data. However, 12.5% of the patients' points fell outside the 95% boundary. An examination of these points revealed factors that can contribute to the overall variability in the data. These factors include epiretinal membranes, edema, individual variation in field-to-disc mapping, and the location of blood vessels and degree to which they are included by the RNFL algorithm. The model and the partitioning of within- versus between-individual variability helped elucidate the factors contributing to the considerable variability in the structure-versus-function data.
Accuracy of femoral templating in reproducing anatomical femoral offset in total hip replacement.
Davies, H; Foote, J; Spencer, R F
2007-01-01
Restoration of hip biomechanics is a crucial component of successful total hip replacement. Preoperative templating is recommended to ensure that the size and orientation of implants is optimised. We studied how closely natural femoral offset could be reproduced using the manufacturers' templates for 10 femoral stems in common use in the UK. A series of 23 consecutive preoperative radiographs from patients who had undergone unilateral total hip replacement for unilateral osteoarthritis of the hip was employed. The change in offset between the templated position of the best-fitting template and the anatomical centre of the hip was measured. The templates were then ranked according to their ability to reproduce the normal anatomical offset. The most accurate was the CPS-Plus (Root Mean Square Error 2.0 mm) followed in rank order by: C stem (2.16), CPT (2.40), Exeter (3.23), Stanmore (3.28), Charnley (3.65), Corail (3.72), ABG II (4.30), Furlong HAC (5.08) and Furlong modular (7.14). A similar pattern of results was achieved when the standard error of variability of offset was analysed. We observed a wide variation in the ability of the femoral prosthesis templates to reproduce normal femoral offset. This variation was independent of the seniority of the observer. The templates of modern polished tapered stems with high modularity were best able to reproduce femoral offset. The current move towards digitisation of X-rays may offer manufacturers an opportunity to improve template designs in certain instances, and to develop appropriate computer software.
Improving electrofishing catch consistency by standardizing power
Burkhardt, Randy W.; Gutreuter, Steve
1995-01-01
The electrical output of electrofishing equipment is commonly standardized by using either constant voltage or constant amperage, However, simplified circuit and wave theories of electricity suggest that standardization of power (wattage) available for transfer from water to fish may be critical for effective standardization of electrofishing. Electrofishing with standardized power ensures that constant power is transferable to fish regardless of water conditions. The in situ performance of standardized power output is poorly known. We used data collected by the interagency Long Term Resource Monitoring Program (LTRMP) in the upper Mississippi River system to assess the effectiveness of standardizing power output. The data consisted of 278 electrofishing collections, comprising 9,282 fishes in eight species groups, obtained during 1990 from main channel border, backwater, and tailwater aquatic areas in four reaches of the upper Mississippi River and one reach of the Illinois River. Variation in power output explained an average of 14.9% of catch variance for night electrofishing and 12.1 % for day electrofishing. Three patterns in catch per unit effort were observed for different species: increasing catch with increasing power, decreasing catch with increasing power, and no power-related pattern. Therefore, in addition to reducing catch variation, controlling power output may provide some capability to select particular species. The LTRMP adopted standardized power output beginning in 1991; standardized power output is adjusted for variation in water conductivity and water temperature by reference to a simple chart. Our data suggest that by standardizing electrofishing power output, the LTRMP has eliminated substantial amounts of catch variation at virtually no additional cost.
Normal dimensions of the posterior pituitary bright spot on magnetic resonance imaging.
Côté, Martin; Salzman, Karen L; Sorour, Mohammad; Couldwell, William T
2014-02-01
The normal pituitary bright spot seen on unenhanced T1-weighted MRI is thought to result from the T1-shortening effect of the vasopressin stored in the posterior pituitary. Individual variations in its size may be difficult to differentiate from pathological conditions resulting in either absence of the pituitary bright spot or in T1-hyperintense lesions of the sella. The objective of this paper was to define a range of normal dimensions of the pituitary bright spot and to illustrate some of the most commonly encountered pathologies that result in absence or enlargement of the pituitary bright spot. The authors selected normal pituitary MRI studies from 106 patients with no pituitary abnormality. The size of each pituitary bright spot was measured in the longest axis and in the dimension perpendicular to this axis to describe the typical dimensions. The authors also present cases of patients with pituitary abnormalities to highlight the differences and potential overlap between normal and pathological pituitary imaging. All of the studies evaluated were found to have pituitary bright spots, and the mean dimensions were 4.8 mm in the long axis and 2.4 mm in the short axis. The dimension of the pituitary bright spot in the long axis decreased with patient age. The distribution of dimensions of the pituitary bright spot was normal, indicating that 99.7% of patients should have a pituitary bright spot measuring between 1.2 and 8.5 mm in its long axis and between 0.4 and 4.4 mm in its short axis, an interval corresponding to 3 standard deviations below and above the mean. In cases where the dimension of the pituitary bright spot is outside this range, pathological conditions should be considered. The pituitary bright spot should always be demonstrated on T1-weighted MRI, and its dimensions should be within the identified normal range in most patients. Outside of this range, pathological conditions affecting the pituitary bright spot should be considered.
Freeman, Kathleen P; Baral, Randolph M; Dhand, Navneet K; Nielsen, Søren Saxmose; Jensen, Asger L
2017-06-01
The recent creation of a veterinary clinical pathology biologic variation website has highlighted the need to provide recommendations for future studies of biologic variation in animals in order to help standardize and improve the quality of published information and to facilitate review and selection of publications as standard references. The following recommendations are provided in the format and order commonly found in veterinary publications. A checklist is provided to aid in planning, implementing, and evaluating veterinary studies on biologic variation (Appendix S1). These recommendations provide a valuable resource for clinicians, laboratorians, and researchers interested in conducting studies of biologic variation and in determining the quality of studies of biologic variation in veterinary laboratory testing. © 2017 American Society for Veterinary Clinical Pathology.
Control Variates and Optimal Designs in Metamodeling
2013-03-01
27 2.4.5 Selection of Control Variates for Inclusion in Model...meet the normality assumption (Nelson 1990, Nelson and Yang 1992, Anonuevo and Nelson 1988). Jacknifing, splitting, and bootstrapping can be used to...freedom to estimate the variance are lost due to being used for the control variate inclusion . This means the variance reduction achieved must now be
Bone development in laboratory mammals used in developmental toxicity studies.
DeSesso, John M; Scialli, Anthony R
2018-06-19
Evaluation of the skeleton in laboratory animals is a standard component of developmental toxicology testing. Standard methods of performing the evaluation have been established, and modification of the evaluation using imaging technologies is under development. The embryology of the rodent, rabbit, and primate skeleton has been characterized in detail and summarized herein. The rich literature on variations and malformations in skeletal development that can occur in the offspring of normal animals and animals exposed to test articles in toxicology studies is reviewed. These perturbations of skeletal development include ossification delays, alterations in number, shape, and size of ossification centers, and alterations in numbers of ribs and vertebrae. Because the skeleton is undergoing developmental changes at the time fetuses are evaluated in most study designs, transient delays in development can produce apparent findings of abnormal skeletal structure. The determination of whether a finding represents a permanent change in embryo development with adverse consequences for the organism is important in study interpretation. Knowledge of embryological processes and schedules can assist in interpretation of skeletal findings. © 2018 The Authors. Birth Defects Research Published by Wiley Periodicals, Inc.
Wéra, A-C; Barazzuol, L; Jeynes, J C G; Merchant, M J; Suzuki, M; Kirkby, K J
2014-08-07
It is well known that broad beam irradiation with heavy ions leads to variation in the number of hit(s) received by each cell as the distribution of particles follows the Poisson statistics. Although the nucleus area will determine the number of hit(s) received for a given dose, variation amongst its irradiated cell population is generally not considered. In this work, we investigate the effect of the nucleus area's distribution on the survival fraction. More specifically, this work aims to explain the deviation, or tail, which might be observed in the survival fraction at high irradiation doses. For this purpose, the nucleus area distribution was added to the beam Poisson statistics and the Linear-Quadratic model in order to fit the experimental data. As shown in this study, nucleus size variation, and the associated Poisson statistics, can lead to an upward survival trend after broad beam irradiation. The influence of the distribution parameters (mean area and standard deviation) was studied using a normal distribution, along with the Linear-Quadratic model parameters (α and β). Finally, the model proposed here was successfully tested to the survival fraction of LN18 cells irradiated with a 85 keV µm(- 1) carbon ion broad beam for which the distribution in the area of the nucleus had been determined.
Meija, Juris; Chartrand, Michelle M G
2018-01-01
Isotope delta measurements are normalized against international reference standards. Although multi-point normalization is becoming a standard practice, the existing uncertainty evaluation practices are either undocumented or are incomplete. For multi-point normalization, we present errors-in-variables regression models for explicit accounting of the measurement uncertainty of the international standards along with the uncertainty that is attributed to their assigned values. This manuscript presents framework to account for the uncertainty that arises due to a small number of replicate measurements and discusses multi-laboratory data reduction while accounting for inevitable correlations between the laboratories due to the use of identical reference materials for calibration. Both frequentist and Bayesian methods of uncertainty analysis are discussed.
A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.
Rhiel, G Steven
2007-02-01
In this research study is proof that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of micro. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and an values were specified for specific skewed distributions with a fixed mean and standard deviation. In this proof it is shown that the d(n) and an values are applicable for the specific skewed distributions when the mean and standard deviation can take on differing values. This will give the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
The "Second Place" Problem: Assistive Technology in Sports and (Re) Constructing Normal.
Baker, D A
2016-02-01
Objections to the use of assistive technologies (such as prostheses) in elite sports are generally raised when the technology in question is perceived to afford the user a potentially "unfair advantage," when it is perceived as a threat to the purity of the sport, and/or when it is perceived as a precursor to a slippery slope toward undesirable changes in the sport. These objections rely on being able to quantify standards of "normal" within a sport so that changes attributed to the use of assistive technology can be judged as causing a significant deviation from some baseline standard. This holds athletes using assistive technologies accountable to standards that restrict their opportunities to achieve greatness, while athletes who do not use assistive technologies are able to push beyond the boundaries of these standards without moral scrutiny. This paper explores how constructions of fairness and "normality" impact athletes who use assistive technology to compete in a sporting venue traditionally populated with "able-bodied" competitors. It argues that the dynamic and obfuscated construction of "normal" standards in elite sports should move away from using body performance as the measuring stick of "normal," toward alternate forms of constructing norms such as defining, quantifying, and regulating the mechanical actions that constitute the critical components of a sport. Though framed within the context of elite sports, this paper can be interpreted more broadly to consider problems with defining "normal" bodies in a society in which technologies are constantly changing our abilities and expectations of what normal means.
Oude Lansink, I L B; van Kouwenhove, L; Dijkstra, P U; Postema, K; Hijmans, J M
2017-10-01
Step width is increased during dual-belt treadmill walking, in self-paced mode with virtual reality. Generally a familiarization period is thought to be necessary to normalize step width. The aim of this randomised study was to analyze the effects of two interventions on step width, to reduce the familiarization period. We used the GRAIL (Gait Real-time Analysis Interactive Lab), a dual-belt treadmill with virtual reality in the self-paced mode. Thirty healthy young adults were randomly allocated to three groups and asked to walk at their preferred speed for 5min. In the first session, the control-group received no intervention, the 'walk-on-the-line'-group was instructed to walk on a line, projected on the between-belt gap of the treadmill and the feedback-group received feedback about their current step width and were asked to reduce it. Interventions started after 1min and lasted 1min. During the second session, 7-10days later, no interventions were given. Linear mixed modeling showed that interventions did not have an effect on step width after the intervention period in session 1. Initial step width (second 30s) of session 1 was larger than initial step width of session 2. Step width normalized after 2min and variation in step width stabilized after 1min. Interventions do not reduce step width after intervention period. A 2-min familiarization period is sufficient to normalize and stabilize step width, in healthy young adults, regardless of interventions. A standardized intervention to normalize step width is not necessary. Copyright © 2017 Elsevier B.V. All rights reserved.
The austral peregrine falcon: Color variation, productivity, and pesticides
Ellis, D.H.
1985-01-01
The austral peregrine falcon (Falco peregrinus cassini) was studied in the Andean foot- hills and across the Patagonian steppe from November to December 1981. The birds under study (18 pairs) were reproducing at or near normal (pre-DDT) levels for other races. Pesticide residues, while elevated, were well below the values associated with reproductive failure in other populations. With one exception, eggshells were not abnormally thin. The peregrine falcon in Patagonia exhibits extreme color variation. Pallid birds are nearly pure white below (light cream as juveniles), whereas normally pigmented birds are black-crowned and conspicuously barred with black ventrally. Rare individuals of the Normal Phase display black heads, broad black ventral barring, and warm reddish-brown ventral background coloration.
Proposal: A Hybrid Dictionary Modelling Approach for Malay Tweet Normalization
NASA Astrophysics Data System (ADS)
Muhamad, Nor Azlizawati Binti; Idris, Norisma; Arshi Saloot, Mohammad
2017-02-01
Malay Twitter message presents a special deviation from the original language. Malay Tweet widely used currently by Twitter users, especially at Malaya archipelago. Thus, it is important to make a normalization system which can translated Malay Tweet language into the standard Malay language. Some researchers have conducted in natural language processing which mainly focuses on normalizing English Twitter messages, while few studies have been done for normalize Malay Tweets. This paper proposes an approach to normalize Malay Twitter messages based on hybrid dictionary modelling methods. This approach normalizes noisy Malay twitter messages such as colloquially language, novel words, and interjections into standard Malay language. This research will be used Language Model and N-grams model.
Das, B; Shikdar, A A
1999-01-01
The participative standard with feedback condition was superior to the assigned difficult (140% of normal) standard with feedback condition in terms of worker productivity. The percentage increase in worker productivity with the participative standard and feedback condition was 46%, whereas the increase in the assigned difficult standard with feedback was 23%, compared to the control group (no standard, no feedback). Worker productivity also improved significantly as a result of assigning a normal (100%) production standard with feedback, compared to the control group, and the increase was 12%. The participative standard with feedback condition emerges as the optimum strategy for improving worker productivity in a repetitive industrial production task.
Contributions of Optical and Non-Optical Blur to Variation in Visual Acuity
McAnany, J. Jason; Shahidi, Mahnaz; Applegate, Raymond A.; Zelkha, Ruth; Alexander, Kenneth R.
2011-01-01
Purpose To determine the relative contributions of optical and non-optical sources of intrinsic blur to variations in visual acuity (VA) among normally sighted subjects. Methods Best-corrected VA of sixteen normally sighted subjects was measured using briefly presented (59 ms) tumbling E optotypes that were either unblurred or blurred through convolution with Gaussian functions of different widths. A standard model of intrinsic blur was used to estimate each subject’s equivalent intrinsic blur (σint) and VA for the unblurred tumbling E (MAR0). For 14 subjects, a radially averaged optical point spread function due to higher-order aberrations was derived by Shack-Hartmann aberrometry and fit with a Gaussian function. The standard deviation of the best-fit Gaussian function defined optical blur (σopt). An index of non-optical blur (η) was defined as: 1-σopt/σint. A control experiment was conducted on 5 subjects to evaluate the effect of stimulus duration on MAR0 and σint. Results Log MAR0 for the briefly presented E was correlated significantly with log σint (r = 0.95, p < 0.01), consistent with previous work. However, log MAR0 was not correlated significantly with log σopt (r = 0.46, p = 0.11). For subjects with log MAR0 equivalent to approximately 20/20 or better, log MAR0 was independent of log η, whereas for subjects with larger log MAR0 values, log MAR0 was proportional to log η. The control experiment showed a statistically significant effect of stimulus duration on log MAR0 (p < 0.01) but a non-significant effect on σint (p = 0.13). Conclusions The relative contributions of optical and non-optical blur to VA varied among the subjects, and were related to the subject’s VA. Evaluating optical and non-optical blur may be useful for predicting changes in VA following procedures that improve the optics of the eye in patients with both optical and non-optical sources of VA loss. PMID:21460756
Zink, Jean-Vincent; Souteyrand, Philippe; Guis, Sandrine; Chagnaud, Christophe; Fur, Yann Le; Militianu, Daniela; Mattei, Jean-Pierre; Rozenbaum, Michael; Rosner, Itzhak; Guye, Maxime; Bernard, Monique; Bendahan, David
2015-01-01
AIM: To quantify the wrist cartilage cross-sectional area in humans from a 3D magnetic resonance imaging (MRI) dataset and to assess the corresponding reproducibility. METHODS: The study was conducted in 14 healthy volunteers (6 females and 8 males) between 30 and 58 years old and devoid of articular pain. Subjects were asked to lie down in the supine position with the right hand positioned above the pelvic region on top of a home-built rigid platform attached to the scanner bed. The wrist was wrapped with a flexible surface coil. MRI investigations were performed at 3T (Verio-Siemens) using volume interpolated breath hold examination (VIBE) and dual echo steady state (DESS) MRI sequences. Cartilage cross sectional area (CSA) was measured on a slice of interest selected from a 3D dataset of the entire carpus and metacarpal-phalangeal areas on the basis of anatomical criteria using conventional image processing radiology software. Cartilage cross-sectional areas between opposite bones in the carpal region were manually selected and quantified using a thresholding method. RESULTS: Cartilage CSA measurements performed on a selected predefined slice were 292.4 ± 39 mm2 using the VIBE sequence and slightly lower, 270.4 ± 50.6 mm2, with the DESS sequence. The inter (14.1%) and intra (2.4%) subject variability was similar for both MRI methods. The coefficients of variation computed for the repeated measurements were also comparable for the VIBE (2.4%) and the DESS (4.8%) sequences. The carpus length averaged over the group was 37.5 ± 2.8 mm with a 7.45% between-subjects coefficient of variation. Of note, wrist cartilage CSA measured with either the VIBE or the DESS sequences was linearly related to the carpal bone length. The variability between subjects was significantly reduced to 8.4% when the CSA was normalized with respect to the carpal bone length. CONCLUSION: The ratio between wrist cartilage CSA and carpal bone length is a highly reproducible standardized measurement which normalizes the natural diversity between individuals. PMID:26396941
Chu, Cindy S; Bancone, Germana; Moore, Kerryn A; Win, Htun Htun; Thitipanawan, Niramon; Po, Christina; Chowwiwat, Nongnud; Raksapraidee, Rattanaporn; Wilairisak, Pornpimon; Phyo, Aung Pyae; Keereecharoen, Lily; Proux, Stéphane; Charunwatthana, Prakaykaew; Nosten, François; White, Nicholas J
2017-02-01
Radical cure of Plasmodium vivax malaria with 8-aminoquinolines (primaquine or tafenoquine) is complicated by haemolysis in individuals with glucose-6-phosphate dehydrogenase (G6PD) deficiency. G6PD heterozygous females, because of individual variation in the pattern of X-chromosome inactivation (Lyonisation) in erythroid cells, may have low G6PD activity in the majority of their erythrocytes, yet are usually reported as G6PD "normal" by current phenotypic screening tests. Their haemolytic risk when treated with 8-aminoquinolines has not been well characterized. In a cohort study nested within a randomised clinical trial that compared different treatment regimens for P. vivax malaria, patients with a normal standard NADPH fluorescent spot test result (≳30%-40% of normal G6PD activity) were randomised to receive 3 d of chloroquine or dihydroartemisinin-piperaquine in combination with primaquine, either the standard high dose of 0.5 mg base/kg/day for 14 d or a higher dose of 1 mg base/kg/d for 7 d. Patterns of haemolysis were compared between G6PD wild-type and G6PD heterozygous female participants. Between 21 February 2012 and 04 July 2014, 241 female participants were enrolled, of whom 34 were heterozygous for the G6PD Mahidol variant. Haemolysis was substantially greater and a larger proportion of participants reached the threshold of clinically significant haemolysis (fractional haematocrit reduction >25%) in G6PD heterozygotes taking the higher (7 d) primaquine dose (9/17 [53%]) compared with G6PD heterozygotes taking the standard high (14 d) dose (2/16 [13%]; p = 0.022). In heterozygotes, the mean fractional haematocrit reductions were correspondingly greater with the higher primaquine dose (7-d regimen): -20.4% (95% CI -26.0% to -14.8%) (nadir on day 5) compared with the standard high (14 d) dose: -13.1% (95% CI -17.6% to -8.6%) (nadir day 6). Two heterozygotes taking the higher (7 d) primaquine dose required blood transfusion. In wild-type participants, mean haematocrit reductions were clinically insignificant and similar with both doses: -5.8 (95% CI -7.2% to -4.4%) (nadir day 3) compared with -5.5% (95% CI -7.4% to -3.7%) (nadir day 4), respectively. Limitations to this nested cohort study are that the primary objective of the trial was designed to measure efficacy and not haemolysis in relation to G6PD genotype and that the heterozygote groups were small. Higher daily doses of primaquine have the potential to cause clinically significant haemolysis in G6PD heterozygous females who are reported as phenotypically normal with current point of care tests. ClinicalTrials.gov NCT01640574.
Zierler, R E; Phillips, D J; Beach, K W; Primozich, J F; Strandness, D E
1987-08-01
The combination of a B-mode imaging system and a single range-gate pulsed Doppler flow velocity detector (duplex scanner) has become the standard noninvasive method for assessing the extracranial carotid artery. However, a significant limitation of this approach is the small area of vessel lumen that can be evaluated at any one time. This report describes a new duplex instrument that displays blood flow as colors superimposed on a real-time B-mode image. Returning echoes from a linear array of transducers are continuously processed for amplitude and phase. Changes in phase are produced by tissue motion and are used to calculate Doppler shift frequency. This results in a color assignment: red and blue indicate direction of flow with respect to the ultrasound beam, and lighter shades represent higher velocities. The carotid bifurcations of 10 normal subjects were studied. Changes in flow velocities across the arterial lumen were clearly visualized as varying shades of red or blue during the cardiac cycle. A region of flow separation was observed in all proximal internal carotids as a blue area located along the outer wall of the bulb. Thus, it is possible to detect the localized flow patterns that characterize normal carotid arteries. Other advantages of color-flow imaging include the ability to rapidly identify the carotid bifurcation branches and any associated anatomic variations.
Evaluation of CT-based SUV normalization
NASA Astrophysics Data System (ADS)
Devriese, Joke; Beels, Laurence; Maes, Alex; Van de Wiele, Christophe; Pottel, Hans
2016-09-01
The purpose of this study was to determine patients’ lean body mass (LBM) and lean tissue (LT) mass using a computed tomography (CT)-based method, and to compare standardized uptake value (SUV) normalized by these parameters to conventionally normalized SUVs. Head-to-toe positron emission tomography (PET)/CT examinations were retrospectively retrieved and semi-automatically segmented into tissue types based on thresholding of CT Hounsfield units (HU). The following HU ranges were used for determination of CT-estimated LBM and LT (LBMCT and LTCT): -180 to -7 for adipose tissue (AT), -6 to 142 for LT, and 143 to 3010 for bone tissue (BT). Formula-estimated LBMs were calculated using formulas of James (1976 Research on Obesity: a Report of the DHSS/MRC Group (London: HMSO)) and Janmahasatian et al (2005 Clin. Pharmacokinet. 44 1051-65), and body surface area (BSA) was calculated using the DuBois formula (Dubois and Dubois 1989 Nutrition 5 303-11). The CT segmentation method was validated by comparing total patient body weight (BW) to CT-estimated BW (BWCT). LBMCT was compared to formula-based estimates (LBMJames and LBMJanma). SUVs in two healthy reference tissues, liver and mediastinum, were normalized for the aforementioned parameters and compared to each other in terms of variability and dependence on normalization factors and BW. Comparison of actual BW to BWCT shows a non-significant difference of 0.8 kg. LBMJames estimates are significantly higher than LBMJanma with differences of 4.7 kg for female and 1.0 kg for male patients. Formula-based LBM estimates do not significantly differ from LBMCT, neither for men nor for women. The coefficient of variation (CV) of SUV normalized for LBMJames (SUVLBM-James) (12.3%) was significantly reduced in liver compared to SUVBW (15.4%). All SUV variances in mediastinum were significantly reduced (CVs were 11.1-12.2%) compared to SUVBW (15.5%), except SUVBSA (15.2%). Only SUVBW and SUVLBM-James show independence from normalization factors. LBMJames seems to be the only advantageous SUV normalization. No advantage of other SUV normalizations over BW could be demonstrated.
Richter, S. Helene; Garner, Joseph P.; Zipser, Benjamin; Lewejohann, Lars; Sachser, Norbert; Touma, Chadi; Schindler, Britta; Chourbaji, Sabine; Brandwein, Christiane; Gass, Peter; van Stipdonk, Niek; van der Harst, Johanneke; Spruijt, Berry; Võikar, Vootele; Wolfer, David P.; Würbel, Hanno
2011-01-01
In animal experiments, animals, husbandry and test procedures are traditionally standardized to maximize test sensitivity and minimize animal use, assuming that this will also guarantee reproducibility. However, by reducing within-experiment variation, standardization may limit inference to the specific experimental conditions. Indeed, we have recently shown in mice that standardization may generate spurious results in behavioral tests, accounting for poor reproducibility, and that this can be avoided by population heterogenization through systematic variation of experimental conditions. Here, we examined whether a simple form of heterogenization effectively improves reproducibility of test results in a multi-laboratory situation. Each of six laboratories independently ordered 64 female mice of two inbred strains (C57BL/6NCrl, DBA/2NCrl) and examined them for strain differences in five commonly used behavioral tests under two different experimental designs. In the standardized design, experimental conditions were standardized as much as possible in each laboratory, while they were systematically varied with respect to the animals' test age and cage enrichment in the heterogenized design. Although heterogenization tended to improve reproducibility by increasing within-experiment variation relative to between-experiment variation, the effect was too weak to account for the large variation between laboratories. However, our findings confirm the potential of systematic heterogenization for improving reproducibility of animal experiments and highlight the need for effective and practicable heterogenization strategies. PMID:21305027
Tamburini, Elena; Mamolini, Elisabetta; De Bastiani, Morena; Marchetti, Maria Gabriella
2016-07-15
Fusarium proliferatum is considered to be a pathogen of many economically important plants, including garlic. The objective of this research was to apply near-infrared spectroscopy (NIRS) to rapidly determine fungal concentration in intact garlic cloves, avoiding the laborious and time-consuming procedures of traditional assays. Preventive detection of infection before seeding is of great interest for farmers, because it could avoid serious losses of yield during harvesting and storage. Spectra were collected on 95 garlic cloves, divided in five classes of infection (from 1-healthy to 5-very highly infected) in the range of fungal concentration 0.34-7231.15 ppb. Calibration and cross validation models were developed with partial least squares regression (PLSR) on pretreated spectra (standard normal variate, SNV, and derivatives), providing good accuracy in prediction, with a coefficient of determination (R²) of 0.829 and 0.774, respectively, a standard error of calibration (SEC) of 615.17 ppb, and a standard error of cross validation (SECV) of 717.41 ppb. The calibration model was then used to predict fungal concentration in unknown samples, peeled and unpeeled. The results showed that NIRS could be used as a reliable tool to directly detect and quantify F. proliferatum infection in peeled intact garlic cloves, but the presence of the external peel strongly affected the prediction reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spickett, Jeffery, E-mail: J.Spickett@curtin.edu.au; Faculty of Health Sciences, School of Public Health, Curtin University, Perth, Western Australia; Katscherian, Dianne
The approaches used for setting or reviewing air quality standards vary from country to country. The purpose of this research was to consider the potential to improve decision-making through integration of HIA into the processes to review and set air quality standards used in Australia. To assess the value of HIA in this policy process, its strengths and weaknesses were evaluated aligned with review of international processes for setting air quality standards. Air quality standard setting programmes elsewhere have either used HIA or have amalgamated and incorporated factors normally found within HIA frameworks. They clearly demonstrate the value of amore » formalised HIA process for setting air quality standards in Australia. The following elements should be taken into consideration when using HIA in standard setting. (a) The adequacy of a mainly technical approach in current standard setting procedures to consider social determinants of health. (b) The importance of risk assessment criteria and information within the HIA process. The assessment of risk should consider equity, the distribution of variations in air quality in different locations and the potential impacts on health. (c) The uncertainties in extrapolating evidence from one population to another or to subpopulations, especially the more vulnerable, due to differing environmental factors and population variables. (d) The significance of communication with all potential stakeholders on issues associated with the management of air quality. In Australia there is also an opportunity for HIA to be used in conjunction with the NEPM to develop local air quality standard measures. The outcomes of this research indicated that the use of HIA for air quality standard setting at the national and local levels would prove advantageous. -- Highlights: • Health Impact Assessment framework has been applied to a policy development process. • HIA process was evaluated for application in air quality standard setting. • Advantages of HIA in the air quality standard setting process are demonstrated.« less
2010-01-01
Background The ability to objectively differentiate exacerbations of chronic obstructive pulmonary disease (COPD) from day-to-day symptom variations would be an important development in clinical practice and research. We assessed the ability of domiciliary pulse oximetry to achieve this. Methods 40 patients with moderate-severe COPD collected daily data on changes in symptoms, heart-rate (HR), oxygen saturation (SpO2) and peak-expiratory flow (PEF) over a total of 2705 days. 31 patients had data suitable for baseline analysis, and 13 patients experienced an exacerbation. Data were expressed as multiples of the standard deviation (SD) observed from each patient when stable. Results In stable COPD, the SD for HR, SpO2 and PEF were approximately 5 min-1, 1% and 10l min-1. There were detectable changes in all three variables just prior to exacerbation onset, greatest 2-3 days following symptom onset. A composite Oximetry Score (mean magnitude of SpO2 fall and HR rise) distinguished exacerbation onset from symptom variation (area under receiver-operating characteristic curve, AUC = 0.832, 95%CI 0.735-0.929, p = 0.003). In the presence of symptoms, a change in Score of ≥1 (average of ≥1SD change in both HR and SpO2) was 71% sensitive and 74% specific for exacerbation onset. Conclusion We have defined normal variation of pulse oximetry variables in a small sample of patients with COPD. A composite HR and SpO2 score distinguished exacerbation onset from symptom variation, potentially facilitating prompt therapy and providing validation of such events in clinical trials. PMID:20961450
Isotope-abundance variations of selected elements (IUPAC technical report)
Coplen, T.B.; Böhlke, J.K.; De Bievre, P.; Ding, T.; Holden, N.E.; Hopple, J.A.; Krouse, H.R.; Lamberty, A.; Peiser, H.S.; Revesz, K.; Rieder, S.E.; Rosman, K.J.R.; Roth, E.; Taylor, P.D.P.; Vocke, R.D.; Xiao, Y.K.
2002-01-01
Documented variations in the isotopic compositions of some chemical elements are responsible for expanded uncertainties in the standard atomic weights published by the Commission on Atomic Weights and Isotopic Abundances of the International Union of Pure and Applied Chemistry. This report summarizes reported variations in the isotopic compositions of 20 elements that are due to physical and chemical fractionation processes (not due to radioactive decay) and their effects on the standard atomic-weight uncertainties. For 11 of those elements (hydrogen, lithium, boron, carbon, nitrogen, oxygen, silicon, sulfur, chlorine, copper, and selenium), standard atomic-weight uncertainties have been assigned values that are substantially larger than analytical uncertainties because of common isotope-abundance variations in materials of natural terrestrial origin. For 2 elements (chromium and thallium), recently reported isotope-abundance variations potentially are large enough to result in future expansion of their atomic-weight uncertainties. For 7 elements (magnesium, calcium, iron, zinc, molybdenum, palladium, and tellurium), documented isotope variations in materials of natural terrestrial origin are too small to have a significant effect on their standard atomic-weight uncertainties. This compilation indicates the extent to which the atomic weight of an element in a given material may differ from the standard atomic weight of the element. For most elements given above, data are graphically illustrated by a diagram in which the materials are specified in the ordinate and the compositional ranges are plotted along the abscissa in scales of (1) atomic weight, (2) mole fraction of a selected isotope, and (3) delta value of a selected isotope ratio.
Follansbee, Robert
1925-01-01
Records of run-off in the Rocky Mountain States since the nineties and for a few stations since the eighties afford a means of studying the variation in the annual run-off in this region. The data presented in this report show that the variation in annual run-off differs in different areas in the Rocky Mountain region, owing to the differences in the sources of the precipitation in these areas. Except in the drainage basins of streams in northern Montana the year of lowest run-off shown by the records was 1902, when the run-ff at one station was only 36 per cent of the mean run-ff for the periods covered by the several records available. The percentage variation of run-ff for streams in different parts of Colorado is less for any one year than that for streams in the mountain region as a whole, and for streams in the same major drainage basin the annual variation is markedly similar. The influence of topography upon variation in annual run-ff for streams in Colorado is marked, the streams that rise in the central mountain region having a smaller range in variation than the streams that rise on the eastern or western edges of the central mountain mass. The streams that rise on the plains just east of the mountains have a greater variation than those of any of the mountain groups. The ratio of any 10-year mean to the mean for the entire period covered by the records ranges from 72 to 133 per cent. For the South Platte, Arkansas, and Rio Grande the run-off during the nineties was below the normal, but since about 1903 it has been above normal. For the Cache la Poudre low-water periods occurred during the eighties and from 1905 to 1922, but during the nineties the run-off was above the normal.
Improvement of the quality of work in a biochemistry laboratory via measurement system analysis.
Chen, Ming-Shu; Liao, Chen-Mao; Wu, Ming-Hsun; Lin, Chih-Ming
2016-10-31
An adequate and continuous monitoring of operational variations can effectively reduce the uncertainty and enhance the quality of laboratory reports. This study applied the evaluation rule of the measurement system analysis (MSA) method to estimate the quality of work conducted in a biochemistry laboratory. Using the gauge repeatability & reproducibility (GR&R) approach, variations in quality control (QC) data among medical technicians in conducting measurements of five biochemical items, namely, serum glucose (GLU), aspartate aminotransferase (AST), uric acid (UA), sodium (Na) and chloride (Cl), were evaluated. The measurements of the five biochemical items showed different levels of variance among the different technicians, with the variances in GLU measurements being higher than those for the other four items. The ratios of precision-to-tolerance (P/T) for Na, Cl and GLU were all above 0.5, implying inadequate gauge capability. The product variation contribution of Na was large (75.45% and 31.24% in normal and abnormal QC levels, respectively), which showed that the impact of insufficient usage of reagents could not be excluded. With regard to reproducibility, high contributions (of more than 30%) of variation for the selected items were found. These high operator variation levels implied that the possibility of inadequate gauge capacity could not be excluded. The analysis of variance (ANOVA) of GR&R showed that the operator variations in GLU measurements were significant (F=5.296, P=0.001 in the normal level and F=3.399, P=0.015 in the abnormal level, respectively). In addition to operator variations, product variations of Na were also significant for both QC levels. The heterogeneity of variance for the five technicians showed significant differences for the Na and Cl measurements in the normal QC level. The accuracy of QC for five technicians was identified for further operational improvement. This study revealed that MSA can be used to evaluate product and personnel errors and to improve the quality of work in a biochemical laboratory through proper corrective actions.
Unification of height systems in the frame of GGOS
NASA Astrophysics Data System (ADS)
Sánchez, Laura
2015-04-01
Most of the existing vertical reference systems do not fulfil the accuracy requirements of modern Geodesy. They refer to local sea surface levels, are stationary (do not consider variations in time), realize different physical height types (orthometric, normal, normal-orthometric, etc.), and their combination in a global frame presents uncertainties at the metre level. To provide a precise geodetic infrastructure for monitoring the Earth system, the Global Geodetic Observing System (GGOS) of the International Association of Geodesy (IAG), promotes the standardization of the height systems worldwide. The main purpose is to establish a global gravity field-related vertical reference system that (1) supports a highly-precise (at cm-level) combination of physical and geometric heights worldwide, (2) allows the unification of all existing local height datums, and (3) guarantees vertical coordinates with global consistency (the same accuracy everywhere) and long-term stability (the same order of accuracy at any time). Under this umbrella, the present contribution concentrates on the definition and realization of a conventional global vertical reference system; the standardization of the geodetic data referring to the existing height systems; and the formulation of appropriate strategies for the precise transformation of the local height datums into the global vertical reference system. The proposed vertical reference system is based on two components: a geometric component consisting of ellipsoidal heights as coordinates and a level ellipsoid as the reference surface, and a physical component comprising geopotential numbers as coordinates and an equipotential surface defined by a conventional W0 value as the reference surface. The definition of the physical component is based on potential parameters in order to provide reference to any type of physical heights (normal, orthometric, etc.). The conversion of geopotential numbers into metric heights and the modelling of the reference surface (geoid or quasigeoid determination) are considered as steps of the realization. The vertical datum unification strategy is based on (1) the physical connection of height datums to determine their discrepancies, (2) joint analysis of satellite altimetry and tide gauge records to determine time variations of sea level at reference tide gauges, (3) combination of geometrical and physical heights in a well-distributed and high-precise reference frame to estimate the relationship between the individual vertical levels and the global one, and (4) analysis of GNSS time series at reference tide gauges to separate crustal movements from sea level changes. The final vertical transformation parameters are provided by the common adjustment of the observation equations derived from these methods.
Kumari, Anju; Yadav, Sandeep Kumar; Misro, Man Mohan; Ahmad, Jamal; Ali, Sher
2015-12-07
We analyzed 34 azoospermic (AZ), 43 oligospermic (OS), and 40 infertile males with normal spermiogram (INS) together with 55 normal fertile males (NFM) from the Indian population. AZ showed more microdeletions in the AZFa and AZFb regions whereas oligospermic ones showed more microdeletions in the AZFc region. Frequency of the AZF partial deletions was higher in males with spermatogenic impairments than in INS. Significantly, SRY, DAZ and BPY2 genes showed copy number variation across different categories of the patients and much reduced copies of the DYZ1 repeat arrays compared to that in normal fertile males. Likewise, INS showed microdeletions, sequence and copy number variation of several Y linked genes and loci. In the context of infertility, STS deletions and copy number variations both were statistically significant (p = 0.001). Thus, semen samples used during in vitro fertilization (IVF) and assisted reproductive technology (ART) must be assessed for the microdeletions of AZFa, b and c regions in addition to the affected genes reported herein. Present study is envisaged to be useful for DNA based diagnosis of different categories of the infertile males lending support to genetic counseling to the couples aspiring to avail assisted reproductive technologies.
The Effect of Phonological Variation on Adult Learner Comprehension.
ERIC Educational Resources Information Center
Eisenstein, Miriam; Berkowitz, Diana
1981-01-01
Reports on a study of the relationship of English phonological variation to intelligibility for adult second language learners of English. Indicates that learners tested on their ability to understand working-class (New Yorkese), educated (Standard English), and Foreign-accented speakers of English found the standard more intelligible than the…
48 CFR 822.304 - Variations, tolerances, and exemptions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... and Safety Standards Act 822.304 Variations, tolerances, and exemptions. When issuing a contract for nursing home care, a contracting officer may exempt a contractor from certain requirements of the Contract Work Hours and Safety Standards Act (40 U.S.C. 3701-3708) regarding the payment of overtime (see 29 CFR...
48 CFR 822.304 - Variations, tolerances, and exemptions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... and Safety Standards Act 822.304 Variations, tolerances, and exemptions. When issuing a contract for nursing home care, a contracting officer may exempt a contractor from certain requirements of the Contract Work Hours and Safety Standards Act (40 U.S.C. 3701-3708) regarding the payment of overtime (see 29 CFR...
Brasil, Ivelise Regina Canito; de Araujo, Igor Farias; Lima, Adriana Augusta Lopes de Araujo; Melo, Ernesto Lima Araujo; Esmeraldo, Ronaldo de Matos
2018-01-01
To describe the main anatomical variations of the celiac trunk and the hepatic artery at their origins. This was a prospective analysis of 100 consecutive computed tomography angiography studies of the abdomen performed during a one-year period. The findings were stratified according to classification systems devised by Sureka et al. and Michels. The celiac trunk was "normal" (i.e., the hepatogastrosplenic trunk and superior mesenteric artery originating separately from the abdominal aorta) in 43 patients. In our sample, we identified four types of variations of the celiac trunk. Regarding the hepatic artery, a normal anatomical pattern (i.e., the proper hepatic artery being a continuation of the common hepatic artery and bifurcating into the right and left hepatic arteries) was seen in 82 patients. We observed six types of variations of the hepatic artery. We found rates of variations of the hepatic artery that are different from those reported in the literature. Our findings underscore the need for proper knowledge and awareness of these anatomical variations, which can facilitate their recognition and inform decisions regarding the planning of surgical procedures, in order to avoid iatrogenic intraoperative injuries, which could lead to complications.
Intra-individual and inter-individual variations in sperm aneuploidy frequencies in normal men.
Tempest, Helen G; Ko, Evelyn; Rademaker, Alfred; Chan, Peter; Robaire, Bernard; Martin, Renée H
2009-01-01
To investigate whether there are intra-individual and/or inter-individual variations in sperm aneuploidy frequencies within the normal male population, and, if this is the case, whether they are sporadic or time-stable variants. Prospective study. University research laboratory. Ten men aged 18-32 years. None. Fluorescence in situ hybridization was used to investigate sperm aneuploidy frequencies for chromosomes X, Y, 13, and 21 in serial semen samples collected over a period of 12-18 months. Intra-individual and inter-individual variations were investigated by comparing serial samples from the same donor and by comparing the donors with each other, respectively. Intra-individual variations were found in all 10 donors for at least one investigated chromosome; variations tended to be sporadic events affecting only one time point. Inter-individual variations were found for all chromosomes (except XX and YY disomy and disomy 21), with three men identified as stable variants, consistently producing higher levels of aneuploidy for at least one of the following aneuploidies: sex chromosome nullisomy; disomy 13, or diploidy. These results suggest that there are a number of factors and mechanisms that have the potential to sporadically or consistently affect sperm aneuploidy.
NASA Astrophysics Data System (ADS)
Song, Jungki; Heilmann, Ralf K.; Bruccoleri, Alexander R.; Hertz, Edward; Schatternburg, Mark L.
2017-08-01
We report progress toward developing a scanning laser reflection (LR) tool for alignment and period measurement of critical-angle transmission (CAT) gratings. It operates on a similar measurement principle as a tool built in 1994 which characterized period variations of grating facets for the Chandra X-ray Observatory. A specularly reflected beam and a first-order diffracted beam were used to record local period variations, surface slope variations, and grating line orientation. In this work, a normal-incidence beam was added to measure slope variations (instead of the angled-incidence beam). Since normal incidence reflection is not coupled with surface height change, it enables measurement of slope variations more accurately and, along with the angled-incidence beam, helps to reconstruct the surface figure (or tilt) map. The measurement capability of in-grating period variations was demonstrated by measuring test reflection grating (RG) samples that show only intrinsic period variations of the interference lithography process. Experimental demonstration for angular alignment of CAT gratings is also presented along with a custom-designed grating alignment assembly (GAA) testbed. All three angles were aligned to satisfy requirements for the proposed Arcus mission. The final measurement of roll misalignment agrees with the roll measurements performed at the PANTER x-ray test facility.
2015-01-01
Objectives The principal aim of this study is to provide an account of variation in UK undergraduate medical assessment styles and corresponding standard setting approaches with a view to highlighting the importance of a UK national licensing exam in recognizing a common standard. Methods Using a secure online survey system, response data were collected during the period 13 - 30 January 2014 from selected specialists in medical education assessment, who served as representatives for their respective medical schools. Results Assessment styles and corresponding choices of standard setting methods vary markedly across UK medical schools. While there is considerable consensus on the application of compensatory approaches, individual schools display their own nuances through use of hybrid assessment and standard setting styles, uptake of less popular standard setting techniques and divided views on norm referencing. Conclusions The extent of variation in assessment and standard setting practices across UK medical schools validates the concern that there is a lack of evidence that UK medical students achieve a common standard on graduation. A national licensing exam is therefore a viable option for benchmarking the performance of all UK undergraduate medical students. PMID:26520472
Exact Delaunay normalization of the perturbed Keplerian Hamiltonian with tesseral harmonics
NASA Astrophysics Data System (ADS)
Mahajan, Bharat; Vadali, Srinivas R.; Alfriend, Kyle T.
2018-03-01
A novel approach for the exact Delaunay normalization of the perturbed Keplerian Hamiltonian with tesseral and sectorial spherical harmonics is presented in this work. It is shown that the exact solution for the Delaunay normalization can be reduced to quadratures by the application of Deprit's Lie-transform-based perturbation method. Two different series representations of the quadratures, one in powers of the eccentricity and the other in powers of the ratio of the Earth's angular velocity to the satellite's mean motion, are derived. The latter series representation produces expressions for the short-period variations that are similar to those obtained from the conventional method of relegation. Alternatively, the quadratures can be evaluated numerically, resulting in more compact expressions for the short-period variations that are valid for an elliptic orbit with an arbitrary value of the eccentricity. Using the proposed methodology for the Delaunay normalization, generalized expressions for the short-period variations of the equinoctial orbital elements, valid for an arbitrary tesseral or sectorial harmonic, are derived. The result is a compact unified artificial satellite theory for the sub-synchronous and super-synchronous orbit regimes, which is nonsingular for the resonant orbits, and is closed-form in the eccentricity as well. The accuracy of the proposed theory is validated by comparison with numerical orbit propagations.
Bruzzoni-Giovanelli, Heriberto; Fernandez, Plinio; Veiga, Lucía; Podgorniak, Marie-Pierre; Powell, Darren J; Candeias, Marco M; Mourah, Samia; Calvo, Fabien; Marín, Mónica
2010-02-09
SIAH proteins are the human members of an highly conserved family of E3 ubiquitin ligases. Several data suggest that SIAH proteins may have a role in tumor suppression and apoptosis. Previously, we reported that SIAH-1 induces the degradation of Kid (KIF22), a chromokinesin protein implicated in the normal progression of mitosis and meiosis, by the ubiquitin proteasome pathway. In human breast cancer cells stably transfected with SIAH-1, Kid/KIF22 protein level was markedly reduced whereas, the Kid/KIF22 mRNA level was increased. This interaction has been further elucidated through analyzing SIAH and Kid/KIF22 expression in both paired normal and tumor tissues and cell lines. It was observed that SIAH-1 protein is widely expressed in different normal tissues, and in cells lines but showing some differences in western blotting profiles. Immunofluorescence microscopy shows that the intracellular distribution of SIAH-1 and Kid/KIF22 appears to be modified in human tumor tissues compared to normal controls. When mRNA expression of SIAH-1 and Kid/KIF22 was analyzed by real-time PCR in normal and cancer breast tissues from the same patient, a large variation in the number of mRNA copies was detected between the different samples. In most cases, SIAH-1 mRNA is decreased in tumor tissues compared to their normal counterparts. Interestingly, in all breast tumor tissues analyzed, variations in the Kid/KIF22 mRNA levels mirrored those seen with SIAH-1 mRNAs. This concerted variation of SIAH-1 and Kid/KIF22 messengers suggests the existence of an additional level of control than the previously described protein-protein interaction and protein stability regulation. Our observations also underline the need to re-evaluate the results of gene expression obtained by qRT-PCR and relate it to the protein expression and cellular localization when matched normal and tumoral tissues are analyzed.
Matyas, J R; Huang, D; Adams, M E
1999-01-01
Several approaches are commonly used to normalize variations in RNA loading on Northern blots, including: ethidium bromide (EthBr) fluorescence of 18S or 28S rRNA or autoradiograms of radioactive probes hybridized with constitutively expressed RNAs such as elongation factor-1alpha (ELF), glyceraldehyde-3-phosphate dehydrogenase (G3PDH), actin, 18S or 28S rRNA, or others. However, in osteoarthritis (OA) the amount of total RNA changes significantly and none of these RNAs has been clearly demonstrated to be expressed at a constant level, so it is unclear if any of these approaches can be used reliably for normalizing RNA extracted from osteoarthritic cartilage. Total RNA was extracted from normal and osteoarthritic cartilage and assessed by EthBr fluorescence. RNA was then transferred to a nylon membrane hybridized with radioactive probes for ELF, G3PDH, Max, actin, and an oligo-dT probe. The autoradiographic signal across the six lanes of a gel was quantified by scanning densitometry. When compared on the basis of total RNA, the coefficient of variation was lowest for 28S ethidium bromide fluorescence and oligo-dT (approximately 7%), followed by 18S ethidium bromide fluorescence and G3PDH (approximately 13%). When these values were normalized to DNA concentration, the coefficient of variation exceeded 50% for all signals. Total RNA and the signals for 18S, 28S rRNA, and oligo-dT all correlated highly. These data indicate that osteoarthritic chondrocytes express similar ratios of mRNA to rRNA and mRNA to total RNA as do normal chondrocytes. Of all the "housekeeping" probes, G3PDH correlated best with the measurements of RNA. All of these "housekeeping" probes are expressed at greater levels by osteoarthritic chondrocytes when compared with normal chondrocytes. Thus, while G3PDH is satisfactory for evaluating the amount of RNA loaded, its level of expression is not the same in normal and osteoarthritic chondrocytes.
Christensen, Neil I; Forrest, Lisa J; White, Pamela J; Henzler, Margaret; Turek, Michelle M
2016-11-01
Contouring variability is a significant barrier to the accurate delivery and reporting of radiation therapy. The aim of this descriptive study was to determine the variation in contouring radiation targets and organs at risk by participants within our institution. Further, we also aimed to determine if all individuals contoured the same normal tissues. Two canine nasal tumor datasets were selected and contoured by two ACVR-certified radiation oncologists and two radiation oncology residents from the same institution. Eight structures were consistently contoured including the right and left eye, the right and left lens, brain, the gross tumor volume (GTV), clinical target volume (CTV), and planning target volume (PTV). Spinal cord, hard and soft palate, and bulla were contoured on 50% of datasets. Variation in contouring occurred in both targets and normal tissues at risk and was particularly significant for the GTV, CTV, and PTV. The mean metric score and dice similarity coefficient were below the threshold criteria in 37.5-50% and 12.5-50% of structures, respectively, quantitatively indicating contouring variation. This study refutes our hypothesis that minimal variation in target and normal tissue delineation occurs. The variation in contouring may contribute to different tumor response and toxicity for any given patient. Our results also highlight the difficulty associated with replication of published radiation protocols or treatments, as even with replete contouring description the outcome of treatment is still fundamentally influenced by the individual contouring the patient. © 2016 American College of Veterinary Radiology.
Feasibility of novel four degrees of freedom capacitive force sensor for skin interface force
2012-01-01
Background The objective of our study was to develop a novel capacitive force sensor that enables simultaneous measurements of yaw torque around the pressure axis and normal force and shear forces at a single point for the purpose of elucidating pressure ulcer pathogenesis and establishing criteria for selection of cushions and mattresses. Methods Two newly developed sensors (approximately 10 mm×10 mm×5 mm (10) and 20 mm×20 mm×5 mm (20)) were constructed from silicone gel and four upper and lower electrodes. The upper and lower electrodes had sixteen combinations that had the function as capacitors of parallel plate type. The full scale (FS) ranges of force/torque were defined as 0–1.5 N, –0.5-0.5 N and −1.5-1.5 N mm (10) and 0–8.7 N, –2.9-2.9 N and −16.8-16.8 N mm (20) in normal force, shear forces and yaw torque, respectively. The capacitances of sixteen capacitors were measured by an LCR meter (AC1V, 100 kHz) when displacements corresponding to four degrees of freedom (DOF) forces within FS ranges were applied to the sensor. The measurement was repeated three times in each displacement condition (10 only). Force/torque were calculated by corrected capacitance and were evaluated by comparison to theoretical values and standard normal force measured by an universal tester. Results In measurements of capacitance, the coefficient of variation was 3.23% (10). The Maximum FS errors of estimated force/torque were less than or equal to 10.1 (10) and 16.4% (20), respectively. The standard normal forces were approximately 1.5 (10) and 9.4 N (20) when pressure displacements were 3 (10) and 2 mm (20), respectively. The estimated normal forces were approximately 1.5 (10) and 8.6 N (10) in the same condition. Conclusions In this study, we developed a new four DOF force sensor for measurement of force/torque that occur between the skin and a mattress. In measurement of capacitance, the repeatability was good and it was confirmed that the sensor had characteristics that enabled the correction by linear approximation for adjustment of gain and offset. In estimation of forces/torque, we considered accuracy to be within an acceptable range. PMID:23186069
Potiaumpai, Melanie; Martins, Maria Carolina Massoni; Wong, Claudia; Desai, Trusha; Rodriguez, Roberto; Mooney, Kiersten; Signorile, Joseph F
2017-02-01
To compare the difference in muscle activation between high-speed yoga and standard-speed yoga and to compare muscle activation of the transitions between poses and the held phases of a yoga pose. Randomized sequence crossover trial SETTING: A laboratory of neuromuscular research and active aging Interventions: Eight minutes of continuous Sun Salutation B was performed, at a high speed versus a standard-speed, separately. Electromyography was used to quantify normalized muscle activation patterns of eight upper and lower body muscles (pectoralis major, medial deltoids, lateral head of the triceps, middle fibers of the trapezius, vastus medialis, medial gastrocnemius, thoracic extensor spinae, and external obliques) during the high-speed and standard-speed yoga protocols. Difference in normalized muscle activation between high-speed yoga and standard-speed yoga. Normalized muscle activity signals were significantly higher in all eight muscles during the transition phases of poses compared to the held phases (p<0.01). There was no significant interaction between speed×phase; however, greater normalized muscle activity was seen for highspeed yoga across the entire session. Our results show that transitions from one held phase of a pose to another produces higher normalized muscle activity than the held phases of the poses and that overall activity is greater during highspeed yoga than standard-speed yoga. Therefore, the transition speed and associated number of poses should be considered when targeting specific improvements in performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Laughney, Ashley; Krishnaswamy, Venkat; Schwab, Mary; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.
2009-02-01
The purpose of this study was to extract scatter parameters related to tissue ultra-structures from freshly excised breast tissue and to assess whether evident changes in scatter across diagnostic categories is primarily influenced by variation in the composition of each tissues subtypes or by physical remodeling of the extra-cellular environment. Pathologists easily distinguish between epithelium, stroma and adipose tissues, so this classification was adopted for macroscopic subtype classification. Micro-sampling reflectance spectroscopy was used to characterize single-backscattered photons from fresh, excised tumors and normal reduction specimens with sub-millimeter resolution. Phase contrast microscopy (sub-micron resolution) was used to characterize forward-scattered light through frozen tissue from the DHMC Tissue Bank, representing normal, benign and malignant breast tissue, sectioned at 10 microns. The packing density and orientation of collagen fibers in the extracellular matrix (ECM) associated with invasive, normal and benign epithelium was evaluated using transmission electron microscopy (TEM). Regions of interest (ROIs) in the H&E stained tissues were identified for analysis, as outlined by a pathologist as the gold standard. We conclude that the scatter parameters associated with tumor specimens (Npatients=6, Nspecimens=13) significantly differs from that of normal reductions (Npatients=6, Nspecimens=10). Further, tissue subtypes may be identified by their scatter spectra at sub-micron resolution. Stromal tissue scatters significantly more than the epithelial cells embedded in its ECM and adipose tissue scatters much less. However, the scatter signature of the stroma at the sub-micron level is not particularly differentiating in terms of a diagnosis.
Correlation to FVIII:C in Two Thrombin Generation Tests: TGA-CAT and INNOVANCE ETP.
Ljungkvist, Marcus; Berndtsson, Maria; Holmström, Margareta; Mikovic, Danijela; Elezovic, Ivo; Antovic, Jovan P; Zetterberg, Eva; Berntorp, Erik
2017-01-01
Several thrombin-generation tests are available, but few have been directly compared. Our primary aim was to investigate the correlation of two thrombin generation tests, thrombin generation assay-calibrated automated thrombogram (TGA-CAT) and INNOVANCE ETP, to factor VIII levels (FVIII:C) in a group of patients with hemophilia A. The secondary aim was to investigate inter-laboratory variation for the TGA-CAT method. Blood samples were taken from 45 patients with mild, moderate and severe hemophilia A. The TGA-CAT method was performed at both centers while the INNOVANCE ETP was only performed at the Stockholm center. Correlation between parameters was evaluated using Spearman's rank correlation test. For determination of the TGA-CAT inter-laboratory variability, Bland-Altman plots were used. The correlation for the INNOVANCE ETP and TGA-CAT methods with FVIII:C in persons with hemophilia (PWH) was r=0.701 and r=0.734 respectively.The correlation between the two methods was r=0.546.When dividing the study material into disease severity groups (mild, moderate and severe) based on FVIII levels, both methods fail to discriminate between them.The variability of the TGA-CAT results performed at the two centers was reduced after normalization; before normalization, 29% of values showed less than ±10% difference while after normalization the number increased to 41%. Both methods correlate in an equal manner to FVIII:C in PWH but show a poor correlation with each other. The level of agreement for the TGA-CAT method was poor though slightly improved after normalization of data. Further improvement of standardization of these methods is warranted.
NASA Astrophysics Data System (ADS)
Selvadurai, Paul A.; Glaser, Steven D.; Parker, Jessica M.
2017-03-01
Spatial variations in frictional properties on natural faults are believed to be a factor influencing the presence of slow slip events (SSEs). This effect was tested on a laboratory frictional interface between two polymethyl methacrylate (PMMA) bodies. We studied the evolution of slip and slip rates that varied systematically based on the application of both high and low normal stress (σ0=0.8 or 0.4 MPa) and the far-field loading rate (VLP). A spontaneous, frictional rupture expanded from the central, weaker, and more compliant section of the fault that had fewer asperities. Slow rupture propagated at speeds Vslow˜0.8 to 26 mm s-1 with slip rates from 0.01 to 0.2 μm s-1, resulting in stress drops around 100 kPa. During certain nucleation sequences, the fault experienced a partial stress drop, referred to as precursor detachment fronts in tribology. Only at the higher level of normal stress did these fronts exist, and the slip and slip rates mimicked the moment and moment release rates during the 2013-2014 Boso SSE in Japan. The laboratory detachment fronts showed rupture propagation speeds Vslow/VR∈ (5 to 172) × 10-7 and stress drops ˜ 100 kPa, which both scaled to the aforementioned SSE. Distributions of asperities, measured using a pressure sensitive film, increased in complexity with additional normal stress—an increase in normal stress caused added complexity by increasing both the mean size and standard deviation of asperity distributions, and this appeared to control the presence of the detachment front.
Mode instability in one-dimensional anharmonic lattices: Variational equation approach
NASA Astrophysics Data System (ADS)
Yoshimura, K.
1999-03-01
The stability of normal mode oscillations has been studied in detail under the single-mode excitation condition for the Fermi-Pasta-Ulam-β lattice. Numerical experiments indicate that the mode stability depends strongly on k/N, where k is the wave number of the initially excited mode and N is the number of degrees of freedom in the system. It has been found that this feature does not change when N increases. We propose an average variational equation - approximate version of the variational equation - as a theoretical tool to facilitate a linear stability analysis. It is shown that this strong k/N dependence of the mode stability can be explained from the view point of the linear stability of the relevant orbits. We introduce a low-dimensional approximation of the average variational equation, which approximately describes the time evolution of variations in four normal mode amplitudes. The linear stability analysis based on this four-mode approximation demonstrates that the parametric instability mechanism plays a crucial role in the strong k/N dependence of the mode stability.
Marzok, Mohamed A; Badawy, Adel M; El-Khodery, Sabry A
2017-05-01
To determine the normal values and repeatability for Schirmer tear test (STT) in clinically normal dromedary camels and to analyze the influence of the age and gender on these values. Thirty clinically normal dromedary camels of different ages (calves, immature, and mature). Schirmer tear tests I and II were performed using commercial STT strips. Three measurements were obtained from each eye over three consecutive weeks, and the variance of these measurements was determined. Mean values and coefficient of variation of STT I and STT II for the right and left eyes varied significantly among camel groups (P < 0.05). For STT I, the most frequently recorded values were >14-18, > 22-26, and >30-34 mm/min in calves, immature camels, and mature camels, respectively. For STT II, however, the most frequently recorded values were 7-14, >10-18, and >26-30 mm/min, respectively. The interassay coefficients of variation were 1.7-14.4% and were significantly lower in mature camels than in calves and immature camels (P < 0.05). Age was positively correlated with STT I (r = 0.81) and STT II values (r = 0.88). No significant variations were found between genders. This preliminary study reports STT I and II values and repeatability in normal dromedary camels. This information may assist veterinary practitioners in complete ophthalmic examinations and in accurate diagnosis of ocular surface diseases affecting the tear film in this species. © 2016 American College of Veterinary Ophthalmologists.
Haeckel, Rainer; Wosniok, Werner
2010-10-01
The distribution of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of the type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., log-normal- and χ(2) distribution). The degree of skewness depends on the magnitude of the empirical biological variation (CV(e)), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CV(e) (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CV(e). In contrast, a relatively large CV(e) (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed in such cases. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
Variance-reduction normalization technique for a compton camera system
NASA Astrophysics Data System (ADS)
Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.
2011-01-01
For an artifact-free dataset, pre-processing (known as normalization) is needed to correct inherent non-uniformity of detection property in the Compton camera which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform detection efficiency of the scattering and absorbing detectors, different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair which is referred to as the normalization coefficient, is expressed as a product of factors representing the various variations. The variance-reduction technique (VRT) for a Compton camera (a normalization method) was studied. For the VRT, the Compton list-mode data of a planar uniform source of 140 keV was generated from a GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered subset expectation maximization algorithm. The coefficient of variations and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides an enhanced image quality and the increased recovery rate of uniformity in the reconstructed image.
NASA Astrophysics Data System (ADS)
Urata, Yumi; Kuge, Keiko; Kase, Yuko
2015-02-01
Phase transitions of pore water have never been considered in dynamic rupture simulations with thermal pressurization (TP), although they may control TP. From numerical simulations of dynamic rupture propagation including TP, in the absence of any water phase transition process, we predict that frictional heating and TP are likely to change liquid pore water into supercritical water for a strike-slip fault under depth-dependent stress. This phase transition causes changes of a few orders of magnitude in viscosity, compressibility, and thermal expansion among physical properties of water, thus affecting the diffusion of pore pressure. Accordingly, we perform numerical simulations of dynamic ruptures with TP, considering physical properties that vary with the pressure and temperature of pore water on a fault. To observe the effects of the phase transition, we assume uniform initial stress and no fault-normal variations in fluid density and viscosity. The results suggest that the varying physical properties decrease the total slip in cases with high stress at depth and small shear zone thickness. When fault-normal variations in fluid density and viscosity are included in the diffusion equation, they activate TP much earlier than the phase transition. As a consequence, the total slip becomes greater than that in the case with constant physical properties, eradicating the phase transition effect. Varying physical properties do not affect the rupture velocity, irrespective of the fault-normal variations. Thus, the phase transition of pore water has little effect on dynamic ruptures. Fault-normal variations in fluid density and viscosity may play a more significant role.
Uçar, Murat; Altok, Muammer; Umul, Mehmet; Bayram, Dilek; Armağan, İlkay; Güneş, Mustafa; Çapkin, Tahsin; Soyupek, Sedat
2016-01-01
To investigate the effects of thermochemotherapy with mitomycin C (MMC) on normal rabbit bladder urothelium and to compare it with standard intravesical MMC and hyperthermia with normal saline. Twenty-four male New Zealand rabbits, with a mean weight of 2.7 kg (in weight of 2.1–4.3 kg), were divided into three groups, each containing eight rabbits. Thermotherapy with only normal saline was performed in the first group, standard intravesical MMC was performed in the second group, and thermotherapy with MMC was performed in the last group. A week after the primary procedure, total cystectomy was performed and tissue samples were evaluated. The presence of epithelial vacuolar degeneration (p = 0.001), epithelial hyperplasia (p = 0.000), subepithelial fibrosis (p = 0.001) and hemorrhagic areas in the connective tissue (p = 0.002) was observed statistically significantly higher in the standard MMC group than in thermotherapy with normal saline group. There was almost a significant difference among standard MMC and normal saline group in terms of vascular congestion in the connective tissue (p = 0.08). Presence of epithelial vacuolar degeneration (p = 0.002), epithelial hyperplasia (p = 0.002), subepithelial fibrosis (p = 0.030), hemorrhagic areas (p = 0.011) and vascular congestion (p = 0.36) in the connective tissue was observed statistically significantly higher in the thermochemotherapy with MMC group than in standard intravesical MMC group. Polymorphonuclear cell infiltration was not considerable in any of the groups, and there was no significant difference between each groups (p = 0.140). Administration of intravesical MMC causes a toxic effect on the normal urothelium of the bladder rather than an inflammatory reaction. Heating MMC significantly increased this effect.
The Effect of Viewing Eccentricity on Enumeration
Palomares, Melanie; Smith, Paul R.; Pitts, Carole Holley; Carter, Breana M.
2011-01-01
Visual acuity and contrast sensitivity progressively diminish with increasing viewing eccentricity. Here we evaluated how visual enumeration is affected by visual eccentricity, and whether subitizing capacity, the accurate enumeration of a small number (∼3) of items, decreases with more eccentric viewing. Participants enumerated gratings whose (1) stimulus size was constant across eccentricity, and (2) whose stimulus size scaled by a cortical magnification factor across eccentricity. While we found that enumeration accuracy and precision decreased with increasing eccentricity, cortical magnification scaling of size neutralized the deleterious effects of increasing eccentricity. We found that size scaling did not affect subitizing capacities, which were nearly constant across all eccentricities. We also found that size scaling modulated the variation coefficients, a normalized metric of enumeration precision, defined as the standard deviation divided by the mean response. Our results show that the inaccuracy and imprecision associated with increasing viewing eccentricity is due to limitations in spatial resolution. Moreover, our results also support the notion that the precise number system is restricted to small numerosities (represented by the subitizing limit), while the approximate number system extends across both small and large numerosities (indexed by variation coefficients) at large eccentricities. PMID:21695212
The effect of viewing eccentricity on enumeration.
Palomares, Melanie; Smith, Paul R; Pitts, Carole Holley; Carter, Breana M
2011-01-01
Visual acuity and contrast sensitivity progressively diminish with increasing viewing eccentricity. Here we evaluated how visual enumeration is affected by visual eccentricity, and whether subitizing capacity, the accurate enumeration of a small number (∼3) of items, decreases with more eccentric viewing. Participants enumerated gratings whose (1) stimulus size was constant across eccentricity, and (2) whose stimulus size scaled by a cortical magnification factor across eccentricity. While we found that enumeration accuracy and precision decreased with increasing eccentricity, cortical magnification scaling of size neutralized the deleterious effects of increasing eccentricity. We found that size scaling did not affect subitizing capacities, which were nearly constant across all eccentricities. We also found that size scaling modulated the variation coefficients, a normalized metric of enumeration precision, defined as the standard deviation divided by the mean response. Our results show that the inaccuracy and imprecision associated with increasing viewing eccentricity is due to limitations in spatial resolution. Moreover, our results also support the notion that the precise number system is restricted to small numerosities (represented by the subitizing limit), while the approximate number system extends across both small and large numerosities (indexed by variation coefficients) at large eccentricities.
Evaluation of body-wise and organ-wise registrations for abdominal organs
NASA Astrophysics Data System (ADS)
Xu, Zhoubing; Panjwani, Sahil A.; Lee, Christopher P.; Burke, Ryan P.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Abramson, Richard G.; Landman, Bennett A.
2016-03-01
Identifying cross-sectional and longitudinal correspondence in the abdomen on computed tomography (CT) scans is necessary for quantitatively tracking change and understanding population characteristics, yet abdominal image registration is a challenging problem. The key difficulty in solving this problem is huge variations in organ dimensions and shapes across subjects. The current standard registration method uses the global or body-wise registration technique, which is based on the global topology for alignment. This method (although producing decent results) has substantial influence of outliers, thus leaving room for significant improvement. Here, we study a new image registration approach using local (organ-wise registration) by first creating organ-specific bounding boxes and then using these regions of interest (ROIs) for aligning references to target. Based on Dice Similarity Coefficient (DSC), Mean Surface Distance (MSD) and Hausdorff Distance (HD), the organ-wise approach is demonstrated to have significantly better results by minimizing the distorting effects of organ variations. This paper compares exclusively the two registration methods by providing novel quantitative and qualitative comparison data and is a subset of the more comprehensive problem of improving the multi-atlas segmentation by using organ normalization.
A Pipeline for High-Throughput Concentration Response Modeling of Gene Expression for Toxicogenomics
House, John S.; Grimm, Fabian A.; Jima, Dereje D.; Zhou, Yi-Hui; Rusyn, Ivan; Wright, Fred A.
2017-01-01
Cell-based assays are an attractive option to measure gene expression response to exposure, but the cost of whole-transcriptome RNA sequencing has been a barrier to the use of gene expression profiling for in vitro toxicity screening. In addition, standard RNA sequencing adds variability due to variable transcript length and amplification. Targeted probe-sequencing technologies such as TempO-Seq, with transcriptomic representation that can vary from hundreds of genes to the entire transcriptome, may reduce some components of variation. Analyses of high-throughput toxicogenomics data require renewed attention to read-calling algorithms and simplified dose–response modeling for datasets with relatively few samples. Using data from induced pluripotent stem cell-derived cardiomyocytes treated with chemicals at varying concentrations, we describe here and make available a pipeline for handling expression data generated by TempO-Seq to align reads, clean and normalize raw count data, identify differentially expressed genes, and calculate transcriptomic concentration–response points of departure. The methods are extensible to other forms of concentration–response gene-expression data, and we discuss the utility of the methods for assessing variation in susceptibility and the diseased cellular state. PMID:29163636
Stochastic approach to the derivation of emission limits for wastewater treatment plants.
Stransky, D; Kabelkova, I; Bares, V
2009-01-01
Stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation with input data defined by probability density distributions and solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes input variables independency which was proved for the dry-weather situation. Discharges and P(tot) concentrations both in the study creek and WWTP effluent follow log-normal probability distribution. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v)=0.415-0.884). The selected value of the variation coefficient (c(v)=0.420) affects the derived mean value (C(mean)=0.13 mg/l) of the P(tot) EQS (C(90)=0.2 mg/l). Even after supposed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the WWTP emission limits calculated would be lower than the values of the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are discussed.
NASA Astrophysics Data System (ADS)
Rohaeti, Eti; Rafi, Mohamad; Syafitri, Utami Dyah; Heryanto, Rudi
2015-02-01
Turmeric (Curcuma longa), java turmeric (Curcuma xanthorrhiza) and cassumunar ginger (Zingiber cassumunar) are widely used in traditional Indonesian medicines (jamu). They have similar color for their rhizome and possess some similar uses, so it is possible to substitute one for the other. The identification and discrimination of these closely-related plants is a crucial task to ensure the quality of the raw materials. Therefore, an analytical method which is rapid, simple and accurate for discriminating these species using Fourier transform infrared spectroscopy (FTIR) combined with some chemometrics methods was developed. FTIR spectra were acquired in the mid-IR region (4000-400 cm-1). Standard normal variate, first and second order derivative spectra were compared for the spectral data. Principal component analysis (PCA) and canonical variate analysis (CVA) were used for the classification of the three species. Samples could be discriminated by visual analysis of the FTIR spectra by using their marker bands. Discrimination of the three species was also possible through the combination of the pre-processed FTIR spectra with PCA and CVA, in which CVA gave clearer discrimination. Subsequently, the developed method could be used for the identification and discrimination of the three closely-related plant species.
NASA Astrophysics Data System (ADS)
Jochimsen, Thies H.; Schulz, Jessica; Busse, Harald; Werner, Peter; Schaudinn, Alexander; Zeisig, Vilia; Kurch, Lars; Seese, Anita; Barthel, Henryk; Sattler, Bernhard; Sabri, Osama
2015-06-01
This study explores the possibility of using simultaneous positron emission tomography—magnetic resonance imaging (PET-MRI) to estimate the lean body mass (LBM) in order to obtain a standardized uptake value (SUV) which is less dependent on the patients' adiposity. This approach is compared to (1) the commonly-used method based on a predictive equation for LBM, and (2) to using an LBM derived from PET-CT data. It is hypothesized that an MRI-based correction of SUV provides a robust method due to the high soft-tissue contrast of MRI. A straightforward approach to calculate an MRI-derived LBM is presented. It is based on the fat and water images computed from the two-point Dixon MRI primarily used for attenuation correction in PET-MRI. From these images, a water fraction was obtained for each voxel. Averaging over the whole body yielded the weight-normalized LBM. Performance of the new approach in terms of reducing variations of 18F-Fludeoxyglucose SUVs in brain and liver across 19 subjects was compared with results using predictive methods and PET-CT data to estimate the LBM. The MRI-based method reduced the coefficient of variation of SUVs in the brain by 41 ± 10% which is comparable to the reduction by the PET-CT method (35 ± 10%). The reduction of the predictive LBM method was 29 ± 8%. In the liver, the reduction was less clear, presumably due to other sources of variation. In conclusion, employing the Dixon data in simultaneous PET-MRI for calculation of lean body mass provides a brain SUV which is less dependent on patient adiposity. The reduced dependency is comparable to that obtained by CT and predictive equations. Therefore, it is more comparable across patients. The technique does not impose an overhead in measurement time and is straightforward to implement.
Jochimsen, Thies H; Schulz, Jessica; Busse, Harald; Werner, Peter; Schaudinn, Alexander; Zeisig, Vilia; Kurch, Lars; Seese, Anita; Barthel, Henryk; Sattler, Bernhard; Sabri, Osama
2015-06-21
This study explores the possibility of using simultaneous positron emission tomography--magnetic resonance imaging (PET-MRI) to estimate the lean body mass (LBM) in order to obtain a standardized uptake value (SUV) which is less dependent on the patients' adiposity. This approach is compared to (1) the commonly-used method based on a predictive equation for LBM, and (2) to using an LBM derived from PET-CT data. It is hypothesized that an MRI-based correction of SUV provides a robust method due to the high soft-tissue contrast of MRI. A straightforward approach to calculate an MRI-derived LBM is presented. It is based on the fat and water images computed from the two-point Dixon MRI primarily used for attenuation correction in PET-MRI. From these images, a water fraction was obtained for each voxel. Averaging over the whole body yielded the weight-normalized LBM. Performance of the new approach in terms of reducing variations of (18)F-Fludeoxyglucose SUVs in brain and liver across 19 subjects was compared with results using predictive methods and PET-CT data to estimate the LBM. The MRI-based method reduced the coefficient of variation of SUVs in the brain by 41 ± 10% which is comparable to the reduction by the PET-CT method (35 ± 10%). The reduction of the predictive LBM method was 29 ± 8%. In the liver, the reduction was less clear, presumably due to other sources of variation. In conclusion, employing the Dixon data in simultaneous PET-MRI for calculation of lean body mass provides a brain SUV which is less dependent on patient adiposity. The reduced dependency is comparable to that obtained by CT and predictive equations. Therefore, it is more comparable across patients. The technique does not impose an overhead in measurement time and is straightforward to implement.
Fierro-Monti, Ivo; Racle, Julien; Hernandez, Celine; Waridel, Patrice; Hatzimanikatis, Vassily; Quadroni, Manfredo
2013-01-01
Standard proteomics methods allow the relative quantitation of levels of thousands of proteins in two or more samples. While such methods are invaluable for defining the variations in protein concentrations which follow the perturbation of a biological system, they do not offer information on the mechanisms underlying such changes. Expanding on previous work [1], we developed a pulse-chase (pc) variant of SILAC (stable isotope labeling by amino acids in cell culture). pcSILAC can quantitate in one experiment and for two conditions the relative levels of proteins newly synthesized in a given time as well as the relative levels of remaining preexisting proteins. We validated the method studying the drug-mediated inhibition of the Hsp90 molecular chaperone, which is known to lead to increased synthesis of stress response proteins as well as the increased decay of Hsp90 “clients”. We showed that pcSILAC can give information on changes in global cellular proteostasis induced by treatment with the inhibitor, which are normally not captured by standard relative quantitation techniques. Furthermore, we have developed a mathematical model and computational framework that uses pcSILAC data to determine degradation constants kd and synthesis rates Vs for proteins in both control and drug-treated cells. The results show that Hsp90 inhibition induced a generalized slowdown of protein synthesis and an increase in protein decay. Treatment with the inhibitor also resulted in widespread protein-specific changes in relative synthesis rates, together with variations in protein decay rates. The latter were more restricted to individual proteins or protein families than the variations in synthesis. Our results establish pcSILAC as a viable workflow for the mechanistic dissection of changes in the proteome which follow perturbations. Data are available via ProteomeXchange with identifier PXD000538. PMID:24312217
Blood lipid measurements. Variations and practical utility.
Cooper, G R; Myers, G L; Smith, S J; Schlant, R C
1992-03-25
To describe the magnitude and impact of the major biological and analytical sources of variation in serum lipid and lipoprotein levels on risk of coronary heart disease; to present a way to qualitatively estimate the total intraindividual variation; and to demonstrate how to determine the number of specimens required to estimate, with 95% confidence, the "true" underlying total cholesterol value in the serum of a patient. Representative references on each source of variation were selected from more than 300 reviewed publications, most published within the past 5 years, to document current findings and concepts. Most articles reviewed were in English. Studies on biological sources of variation were selected using the following criteria: representative of published findings, clear statement of either significant or insignificant results, and acquisition of clinical and laboratory data under standardized conditions. Representative results for special populations such as women and children are reported when results differ from those of adult men. References were selected based on acceptable experimental design and use of standardized laboratory lipid measurements. The lipid levels considered representative for a selected source of variation arose from quantitative measurements by a suitably standardized laboratory. Statistical analysis of data was examined to assure reliability. The proposed method of estimating the biological coefficient of variation must be considered to give qualitative results, because only two or three serial specimens are collected in most cases for the estimation. Concern has arisen about the magnitude, impact, and interpretation of preanalytical as well as analytical sources of variation on reported results of lipid measurements of an individual. Preanalytical sources of variation from behavioral, clinical, and sampling sources constitute about 60% of the total variation in a reported lipid measurement of an individual. A technique is presented to allow physicians to qualitatively estimate the intraindividual biological variation of a patient from the results of two or more specimens reported from a standardized laboratory and to determine whether additional specimens are needed to meet the National Cholesterol Education Program recommendation that the intraindividual serum total cholesterol coefficient of variation not exceed 5.0. A National Reference Method Network has been established to help solve analytical problems.
Isotopic compositions of the elements, 2001
Böhlke, J.K.; De Laeter, J. R.; De Bievre, P.; Hidaka, H.; Peiser, H.S.; Rosman, K.J.R.; Taylor, P.D.P.
2005-01-01
The Commission on Atomic Weights and Isotopic Abundances of the International Union of Pure and Applied Chemistry completed its last review of the isotopic compositions of the elements as determined by isotope-ratio mass spectrometry in 2001. That review involved a critical evaluation of the published literature, element by element, and forms the basis of the table of the isotopic compositions of the elements (TICE) presented here. For each element, TICE includes evaluated data from the “best measurement” of the isotope abundances in a single sample, along with a set of representative isotope abundances and uncertainties that accommodate known variations in normal terrestrial materials. The representative isotope abundances and uncertainties generally are consistent with the standard atomic weight of the element Ar(E)">Ar(E)Ar(E) and its uncertainty U[Ar(E)]">U[Ar(E)]U[Ar(E)] recommended by CAWIA in 2001.
Brulle, Franck; Bernard, Fabien; Vandenbulcke, Franck; Cuny, Damien; Dumez, Sylvain
2014-04-01
Real-time quantitative PCR is nowadays a standard method to study gene expression variations in various samples and experimental conditions. However, to interpret results accurately, data normalization with appropriate reference genes appears to be crucial. The present study describes the identification and the validation of suitable reference genes in Brassica oleracea leaves. Expression stability of eight candidates was tested following drought and cold abiotic stresses by using three different softwares (BestKeeper, NormFinder and geNorm). Four genes (BolC.TUB6, BolC.SAND1, BolC.UBQ2 and BolC.TBP1) emerged as the most stable across the tested conditions. Further gene expression analysis of a drought- and a cold-responsive gene (BolC.DREB2A and BolC.ELIP, respectively), confirmed the stability and the reliability of the identified reference genes when used for normalization in the leaves of B. oleracea. These four genes were finally tested upon a benzene exposure and all appeared to be useful reference genes along this toxicological condition. These results provide a good starting point for future studies involving gene expression measurement on leaves of B. oleracea exposed to environmental modifications.
A Simple Model of Cirrus Horizontal Inhomogeneity and Cloud Fraction
NASA Technical Reports Server (NTRS)
Smith, Samantha A.; DelGenio, Anthony D.
1998-01-01
A simple model of horizontal inhomogeneity and cloud fraction in cirrus clouds has been formulated on the basis that all internal horizontal inhomogeneity in the ice mixing ratio is due to variations in the cloud depth, which are assumed to be Gaussian. The use of such a model was justified by the observed relationship between the normalized variability of the ice water mixing ratio (and extinction) and the normalized variability of cloud depth. Using radar cloud depth data as input, the model reproduced well the in-cloud ice water mixing ratio histograms obtained from horizontal runs during the FIRE2 cirrus campaign. For totally overcast cases the histograms were almost Gaussian, but changed as cloud fraction decreased to exponential distributions which peaked at the lowest nonzero ice value for cloud fractions below 90%. Cloud fractions predicted by the model were always within 28% of the observed value. The predicted average ice water mixing ratios were within 34% of the observed values. This model could be used in a GCM to produce the ice mixing ratio probability distribution function and to estimate cloud fraction. It only requires basic meteorological parameters, the depth of the saturated layer and the standard deviation of cloud depth as input.
NASA Astrophysics Data System (ADS)
Rachmawati; Rohaeti, E.; Rafi, M.
2017-05-01
Taro flour on the market is usually sold at higher price than wheat and sago flour. This situation could be a cause for adulteration of taro flour from wheat and sago flour. For this reason, we will need an identification and authentication. Combination of near infrared (NIR) spectrum with multivariate analysis was used in this study to identify and authenticate taro flour from wheat and sago flour. The authentication model of taro flour was developed by using a mixture of 5%, 25%, and 50% of adulterated taro flour from wheat and sago flour. Before subjected to multivariate analysis, an initial preprocessing signal was used namely normalization and standard normal variate to the NIR spectrum. We used principal component analysis followed by discriminant analysis to make an identification and authentication model of taro flour. From the result obtained, about 90.48% of the taro flour mixed with wheat flour and 85% of taro flour mixed with sago flour were successfully classified into their groups. So the combination of NIR spectrum with chemometrics could be used for identification and authentication of taro flour from wheat and sago flour.
Packiriswamy, Vasanthakumar; Kumar, Pramod; Rao, Mohandas
2012-12-01
The "golden ratio" is considered as a universal facial aesthetical standard. Researcher's opinion that deviation from golden ratio can result in development of facial abnormalities. This study was designed to study the facial morphology and to identify individuals with normal, short, and long face. We studied 300 Malaysian nationality subjects aged 18-28 years of Chinese, Indian, and Malay extraction. The parameters measured were physiognomical facial height and width of face, and physiognomical facial index was calculated. Face shape was classified based on golden ratio. Independent t test was done to test the difference between sexes and among the races. The mean values of the measurements and index showed significant sexual and interracial differences. Out of 300 subjects, the face shape was normal in 60 subjects, short in 224 subjects, and long in 16 subjects. As anticipated, the measurements showed variations according to gender and race. Only 60 subjects had a regular face shape, and remaining 240 subjects had irregular face shape (short and long). Since the short and long shape individuals may be at risk of developing various disorders, the knowledge of facial shapes in the given population is important for early diagnostic and treatment procedures.
Nakajima, Kenichi; Matsumoto, Naoya; Kasai, Tokuo; Matsuo, Shinro; Kiso, Keisuke; Okuda, Koichi
2016-04-01
As a 2-year project of the Japanese Society of Nuclear Medicine working group activity, normal myocardial imaging databases were accumulated and summarized. Stress-rest with gated and non-gated image sets were accumulated for myocardial perfusion imaging and could be used for perfusion defect scoring and normal left ventricular (LV) function analysis. For single-photon emission computed tomography (SPECT) with multi-focal collimator design, databases of supine and prone positions and computed tomography (CT)-based attenuation correction were created. The CT-based correction provided similar perfusion patterns between genders. In phase analysis of gated myocardial perfusion SPECT, a new approach for analyzing dyssynchrony, normal ranges of parameters for phase bandwidth, standard deviation and entropy were determined in four software programs. Although the results were not interchangeable, dependency on gender, ejection fraction and volumes were common characteristics of these parameters. Standardization of (123)I-MIBG sympathetic imaging was performed regarding heart-to-mediastinum ratio (HMR) using a calibration phantom method. The HMRs from any collimator types could be converted to the value with medium-energy comparable collimators. Appropriate quantification based on common normal databases and standard technology could play a pivotal role for clinical practice and researches.
Mann, Theresa N; Lamberts, Robert P; Lambert, Michael I
2014-08-01
The response to an exercise intervention is often described in general terms, with the assumption that the group average represents a typical response for most individuals. In reality, however, it is more common for individuals to show a wide range of responses to an intervention rather than a similar response. This phenomenon of 'high responders' and 'low responders' following a standardized training intervention may provide helpful insights into mechanisms of training adaptation and methods of training prescription. Therefore, the aim of this review was to discuss factors associated with inter-individual variation in response to standardized, endurance-type training. It is well-known that genetic influences make an important contribution to individual variation in certain training responses. The association between genotype and training response has often been supported using heritability estimates; however, recent studies have been able to link variation in some training responses to specific single nucleotide polymorphisms. It would appear that hereditary influences are often expressed through hereditary influences on the pre-training phenotype, with some parameters showing a hereditary influence in the pre-training phenotype but not in the subsequent training response. In most cases, the pre-training phenotype appears to predict only a small amount of variation in the subsequent training response of that phenotype. However, the relationship between pre-training autonomic activity and subsequent maximal oxygen uptake response appears to show relatively stronger predictive potential. Individual variation in response to standardized training that cannot be explained by genetic influences may be related to the characteristics of the training program or lifestyle factors. Although standardized programs usually involve training prescribed by relative intensity and duration, some methods of relative exercise intensity prescription may be more successful in creating an equivalent homeostatic stress between individuals than other methods. Individual variation in the homeostatic stress associated with each training session would result in individuals experiencing a different exercise 'stimulus' and contribute to individual variation in the adaptive responses incurred over the course of the training program. Furthermore, recovery between the sessions of a standardized training program may vary amongst individuals due to factors such as training status, sleep, psychological stress, and habitual physical activity. If there is an imbalance between overall stress and recovery, some individuals may develop fatigue and even maladaptation, contributing to variation in pre-post training responses. There is some evidence that training response can be modulated by the timing and composition of dietary intake, and hence nutritional factors could also potentially contribute to individual variation in training responses. Finally, a certain amount of individual variation in responses may also be attributed to measurement error, a factor that should be accounted for wherever possible in future studies. In conclusion, there are several factors that could contribute to individual variation in response to standardized training. However, more studies are required to help clarify and quantify the role of these factors. Future studies addressing such topics may aid in the early prediction of high or low training responses and provide further insight into the mechanisms of training adaptation.
Validation of the Filovirus Plaque Assay for Use in Preclinical Studies
Shurtleff, Amy C.; Bloomfield, Holly A.; Mort, Shannon; Orr, Steven A.; Audet, Brian; Whitaker, Thomas; Richards, Michelle J.; Bavari, Sina
2016-01-01
A plaque assay for quantitating filoviruses in virus stocks, prepared viral challenge inocula and samples from research animals has recently been fully characterized and standardized for use across multiple institutions performing Biosafety Level 4 (BSL-4) studies. After standardization studies were completed, Good Laboratory Practices (GLP)-compliant plaque assay method validation studies to demonstrate suitability for reliable and reproducible measurement of the Marburg Virus Angola (MARV) variant and Ebola Virus Kikwit (EBOV) variant commenced at the United States Army Medical Research Institute of Infectious Diseases (USAMRIID). The validation parameters tested included accuracy, precision, linearity, robustness, stability of the virus stocks and system suitability. The MARV and EBOV assays were confirmed to be accurate to ±0.5 log10 PFU/mL. Repeatability precision, intermediate precision and reproducibility precision were sufficient to return viral titers with a coefficient of variation (%CV) of ≤30%, deemed acceptable variation for a cell-based bioassay. Intraclass correlation statistical techniques for the evaluation of the assay’s precision when the same plaques were quantitated by two analysts returned values passing the acceptance criteria, indicating high agreement between analysts. The assay was shown to be accurate and specific when run on Nonhuman Primates (NHP) serum and plasma samples diluted in plaque assay medium, with negligible matrix effects. Virus stocks demonstrated stability for freeze-thaw cycles typical of normal usage during assay retests. The results demonstrated that the EBOV and MARV plaque assays are accurate, precise and robust for filovirus titration in samples associated with the performance of GLP animal model studies. PMID:27110807
Huang, Xiaoyan; Zhou, Yujie; Liu, Cui; Zhang, Ruilong; Zhang, Liying; Du, Shuhu; Liu, Bianhua; Han, Ming-Yong; Zhang, Zhongping
2016-12-15
Fluorescent test papers are promising for the wide applications in the assays of diagnosis, environments and foods, but unlike classical dye-absorption-based pH test paper, they are usually limited in the qualitative yes/no type of detection by fluorescent brightness, and the colorimetry-based quantification remains a challenging task. Here, we report a single dual-emissive nanofluorophore probe to achieve the consecutive color variations from blue to red for the quantification of blood glucose on its as-prepared test papers. Red quantum dots were embedded into silica nanoparticles as a stable internal standard emission, and blue carbon dots (CDs) were further covalently linked onto the surface of silica, in which the ratiometric fluorescence intensity of blue to red is controlled at 5:1. While the oxidation of glucose induced the formation of Fe(3+) ions, the blue emission of CDs was thus quenched by the electron transfer from CDs to Fe(3+), displaying a serial of consecutive color variations from blue to red with the dosage of glucose. The high-quality test papers printed by the probe ink exhibited a dosage-sensitive allochromatic capability with the clear differentiations of ~5, 7, 9, 11mM glucose in human serum (normal: 3-8mM). The blood glucose determined by the test paper was almost in accordance with that measured by a standard glucometer. The method reported here opens a window to the wide applications of fluorescent test paper in biological assays. Copyright © 2016 Elsevier B.V. All rights reserved.
Mohammadi, Shabnam; Hedjazi, Arya; Sajjadian, Maryam; Rahmani, Mahboobeh; Mohammadi, Maryam; Moghadam, Maliheh Dadgar
2017-03-29
The vermiform appendix is a worm like tube containing a large amount of lymphoid follicles. In our knowledge, there is a little standard data about the vermiform appendix in Iranian population. Therefore, the objective of this study was to investigate the normal appendix size in Iranian cadavers. A cross-sectional study was undertaken between June 2014 and July 2015, in the autopsy laboratory, Legal Medicine Organization, Razavi Khorasan province, Iran. A total of 693 cadavers with the mean age of 40.46±20.99 years were divided into 10 groups. After writing down position of the appendix, the length, diameter and weight of appendix were measured. Statistical analysis was performed using SPSS software. The mean values of the demographic characteristics included: age= 40.46 ± 20.99 years; weight = 63.47 ± 17.84 kg; height = 159.95 ± 28.23 cm. The mean values of the appendix length, diameter, weight and index in the cadavers were 8.52 ± 2.99 cm, 12.17 ± 4.53 mm, 6.43 ± 3.26 grams and 0.013 ± 0.01, respectively. The most common position of appendix was retrocecal in 71.7% of cases. A significant correlations were evident between the value of demographic data and appendix size (P<0.05). The diameter (P=0.002) and index of appendix (P=0.003) showed significant difference between males and females. Having standard data on the vermiform appendix is useful for clinicians as well as anthropologists. The findings of the present study can provide information about morphologic variations of the appendix in Iranian population.
Self-Study and Evaluation Guide; Section C-2; Financial Accounting and Service Reporting.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
In determining standards to judge an agency's performance in meeting its responsibilities to the public, two fundamental factors must be taken into consideration: (1) standards for financial accounting and service reporting must be formulated in the knowledge that variations among agencies will necessarily give rise to variations in applicability…
The Variation of Electrochemical Cell Potentials with Temperature
ERIC Educational Resources Information Center
Peckham, Gavin D.; McNaught, Ian J.
2011-01-01
Electrochemical cell potentials have no simple relationship with temperature but depend on the interplay between the sign and magnitude of the isothermal temperature coefficient, dE[degrees]/dT, and on the magnitude of the reaction quotient, Q. The variations in possible responses of standard and non-standard cell potentials to changes in the…
2012-01-01
Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examine the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
A normal mode treatment of semi-diurnal body tides on an aspherical, rotating and anelastic Earth
NASA Astrophysics Data System (ADS)
Lau, Harriet C. P.; Yang, Hsin-Ying; Tromp, Jeroen; Mitrovica, Jerry X.; Latychev, Konstantin; Al-Attar, David
2015-08-01
Normal mode treatments of the Earth's body tide response were developed in the 1980s to account for the effects of Earth rotation, ellipticity, anelasticity and resonant excitation within the diurnal band. Recent space-geodetic measurements of the Earth's crustal displacement in response to luni-solar tidal forcings have revealed geographical variations that are indicative of aspherical deep mantle structure, thus providing a novel data set for constraining deep mantle elastic and density structure. In light of this, we make use of advances in seismic free oscillation literature to develop a new, generalized normal mode theory for the tidal response within the semi-diurnal and long-period tidal band. Our theory involves a perturbation method that permits an efficient calculation of the impact of aspherical structure on the tidal response. In addition, we introduce a normal mode treatment of anelasticity that is distinct from both earlier work in body tides and the approach adopted in free oscillation seismology. We present several simple numerical applications of the new theory. First, we compute the tidal response of a spherically symmetric, non-rotating, elastic and isotropic Earth model and demonstrate that our predictions match those based on standard Love number theory. Second, we compute perturbations to this response associated with mantle anelasticity and demonstrate that the usual set of seismic modes adopted for this purpose must be augmented by a family of relaxation modes to accurately capture the full effect of anelasticity on the body tide response. Finally, we explore aspherical effects including rotation and we benchmark results from several illustrative case studies of aspherical Earth structure against independent finite-volume numerical calculations of the semi-diurnal body tide response. These tests confirm the accuracy of the normal mode methodology to at least the level of numerical error in the finite-volume predictions. They also demonstrate that full coupling of normal modes, rather than group coupling, is necessary for accurate predictions of the body tide response.
Barteselli, G; Gomez, M L; Doede, A L; Chhablani, J; Gutstein, W; Bartsch, D-U; Dustin, L; Azen, S P; Freeman, W R
2014-10-01
To evaluate visual function variations in eyes with age-related macular degeneration (AMD) compared to normal eyes under different light/contrast conditions using a time-dependent visual acuity testing instrument, the Central Vision Analyzer (CVA). Overall, 37 AMD eyes and 35 normal eyes were consecutively tested with the CVA after assessing best-corrected visual acuity (BCVA) using ETDRS charts. The CVA established visual thresholds for three mesopic environments (M1 (high contrast), M2 (medium contrast), and M3 (low contrast)) and three backlight-glare environments (G1 (high contrast, equivalent to ETDRS), G2 (medium contrast), and G3 (low contrast)) under timed conditions. Vision drop across environments was calculated, and repeatability of visual scores was determined. BCVA significantly reduced with decreasing contrast in all eyes. M1 scores for BCVA were greater than M2 and M3 (P<0.001); G1 scores were greater than G2 and G3 (P<0.01). BCVA dropped more in AMD eyes than in normal eyes between M1 and M2 (P=0.002) and between M1 and M3 (P=0.003). In AMD eyes, BCVA was better using ETDRS charts compared to G1 (P<0.001). The drop in visual function between ETDRS and G1 was greater in AMD eyes compared to normal eyes (P=0.004). Standard deviations of test-retest ranged from 0.100 to 0.139 logMAR. The CVA allowed analysis of the visual complaints that AMD patients experience with different lighting/contrast time-dependent conditions. BCVA changed significantly under different lighting/contrast conditions in all eyes, however, AMD eyes were more affected by contrast reduction than normal eyes. In AMD eyes, timed conditions using the CVA led to worse BCVA compared to non-timed ETDRS charts.
Normative Measurements of Grip and Pinch Strengths of 21st Century Korean Population
Shim, Jin Hee; Kim, Jin Soo; Lee, Dong Chul; Ki, Sae Hwi; Yang, Jae Won; Jeon, Man Kyung; Lee, Sang Myung
2013-01-01
Background Measuring grip and pinch strength is an important part of hand injury evaluation. Currently, there are no standardized values of normal grip and pinch strength among the Korean population, and lack of such data prevents objective evaluation of post-surgical recovery in strength. This study was designed to establish the normal values of grip and pinch strength among the healthy Korean population and to identify any dependent variables affecting grip and pinch strength. Methods A cross-sectional study was carried out. The inclusion criterion was being a healthy Korean person without a previous history of hand trauma. The grip strength was measured using a Jamar dynamometer. Pulp and key pinch strength were measured with a hydraulic pinch gauge. Intra-individual and inter-individual variations in these variables were analyzed in a standardized statistical manner. Results There were a total of 336 healthy participants between 13 and 77 years of age. As would be expected in any given population, the mean grip and pinch strength was greater in the right hand than the left. Male participants (137) showed mean strengths greater than female participants (199) when adjusted for age. Among the male participants, anthropometric variables correlated positively with grip strength, but no such correlations were identifiable in female participants in a statistically significant way. Conclusions Objective measurements of hand strength are an important component of hand injury evaluation, and population-specific normative data are essential for clinical and research purposes. This study reports updated normative hand strengths of the South Korean population in the 21st century. PMID:23362480
WE-G-18C-05: Characterization of Cross-Vendor, Cross-Field Strength MR Image Intensity Variations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, E; Prah, D
2014-06-15
Purpose: Variations in MR image intensity and image intensity nonuniformity (IINU) can challenge the accuracy of intensity-based image segmentation and registration algorithms commonly applied in radiotherapy. The goal of this work was to characterize MR image intensity variations across scanner vendors and field strengths commonly used in radiotherapy. Methods: ACR-MRI phantom images were acquired at 1.5T and 3.0T on GE (450w and 750, 23.1), Siemens (Espree and Verio, VB17B), and Philips (Ingenia, 4.1.3) scanners using commercial spin-echo sequences with matched parameters (TE/TR: 20/500 ms, rBW: 62.5 kHz, TH/skip: 5/5mm). Two radiofrequency (RF) coil combinations were used for each scanner: bodymore » coil alone, and combined body and phased-array head coils. Vendorspecific B1- corrections (PURE/Pre-Scan Normalize/CLEAR) were applied in all head coil cases. Images were transferred offline, corrected for IINU using the MNI N3 algorithm, and normalized. Coefficients of variation (CV=σ/μ) and peak image uniformity (PIU = 1−(Smax−Smin)/(Smax+Smin)) estimates were calculated for one homogeneous phantom slice. Kruskal-Wallis and Wilcoxon matched-pairs tests compared mean MR signal intensities and differences between original and N3 image CV and PIU. Results: Wide variations in both MR image intensity and IINU were observed across scanner vendors, field strengths, and RF coil configurations. Applying the MNI N3 correction for IINU resulted in significant improvements in both CV and PIU (p=0.0115, p=0.0235). However, wide variations in overall image intensity persisted, requiring image normalization to improve consistency across vendors, field strengths, and RF coils. These results indicate that B1- correction routines alone may be insufficient in compensating for IINU and image scaling, warranting additional corrections prior to use of MR images in radiotherapy. Conclusions: MR image intensities and IINU vary as a function of scanner vendor, field strength, and RF coil configuration. A two-step strategy consisting of MNI N3 correction followed by normalization was required to improve MR image consistency. Funding provided by Advancing a Healthier Wisconsin.« less
Bimler, David; Kirkland, John; Pichler, Shaun
2004-02-01
The structure of color perception can be examined by collecting judgments about color dissimilarities. In the procedure used here, stimuli are presented three at a time on a computer monitor and the spontaneous grouping of most-similar stimuli into gestalts provides the dissimilarity comparisons. Analysis with multidimensional scaling allows such judgments to be pooled from a number of observers without obscuring the variations among them. The anomalous perceptions of color-deficient observers produce comparisons that are represented well by a geometric model of compressed individual color spaces, with different forms of deficiency distinguished by different directions of compression. The geometrical model is also capable of accommodating the normal spectrum of variation, so that there is greater variation in compression parameters between tests on normal subjects than in those between repeated tests on individual subjects. The method is sufficiently sensitive and the variations sufficiently large that they are not obscured by the use of a range of monitors, even under somewhat loosely controlled conditions.
Normal Genetic Variation, Cognition, and Aging
Greenwood, P. M.; Parasuraman, Raja
2005-01-01
This article reviews the modulation of cognitive function by normal genetic variation. Although the heritability of “g” is well established, the genes that modulate specific cognitive functions are largely unidentified. Application of the allelic association approach to individual differences in cognition has begun to reveal the effects of single nucleotide polymorphisms on specific and general cognitive functions. This article proposes a framework for relating genotype to cognitive phenotype by considering the effect of genetic variation on the protein product of specific genes within the context of the neural basis of particular cognitive domains. Specificity of effects is considered, from genes controlling part of one receptor type to genes controlling agents of neuronal repair, and evidence is reviewed of cognitive modulation by polymorphisms in dopaminergic and cholinergic receptor genes, dopaminergic enzyme genes, and neurotrophic genes. Although allelic variation in certain genes can be reliably linked to cognition—specifically to components of attention, working memory, and executive function in healthy adults—the specificity, generality, and replicability of the effects are not fully known. PMID:15006290
NASA Astrophysics Data System (ADS)
Jin, Seung-Seop; Jung, Hyung-Jo
2014-03-01
It is well known that the dynamic properties of a structure such as natural frequencies depend not only on damage but also on environmental condition (e.g., temperature). The variation in dynamic characteristics of a structure due to environmental condition may mask damage of the structure. Without taking the change of environmental condition into account, false-positive or false-negative damage diagnosis may occur so that structural health monitoring becomes unreliable. In order to address this problem, an approach to construct a regression model based on structural responses considering environmental factors has been usually used by many researchers. The key to success of this approach is the formulation between the input and output variables of the regression model to take into account the environmental variations. However, it is quite challenging to determine proper environmental variables and measurement locations in advance for fully representing the relationship between the structural responses and the environmental variations. One alternative (i.e., novelty detection) is to remove the variations caused by environmental factors from the structural responses by using multivariate statistical analysis (e.g., principal component analysis (PCA), factor analysis, etc.). The success of this method is deeply depending on the accuracy of the description of normal condition. Generally, there is no prior information on normal condition during data acquisition, so that the normal condition is determined by subjective perspective with human-intervention. The proposed method is a novel adaptive multivariate statistical analysis for monitoring of structural damage detection under environmental change. One advantage of this method is the ability of a generative learning to capture the intrinsic characteristics of the normal condition. The proposed method is tested on numerically simulated data for a range of noise in measurement under environmental variation. A comparative study with conventional methods (i.e., fixed reference scheme) demonstrates the superior performance of the proposed method for structural damage detection.
NASA Astrophysics Data System (ADS)
Kauffman, Chad Matthew
The temperature and precipitation that describe the norm of daily, monthly, and seasonal climate conditions are ``climate normals.'' They are usually calculated based on climate data covering a 30-year period, and updated in every 10 years. The next update will take place in year 2001. Because of the advent of the Automated Surface Observations Systems (ASOS) beginning in early 1990s and recognized temperature bias between ASOS and the conventional temperature sensors there is an uncertainty of how the ASOS data should be used to calculate the 1971-2000 temperature normal. This study examined the uncertainty and offered a method to minimize it. It showed that the ASOS bias has a measurable impact on the new 30-year temperature normal. The impact varies among stations and climate regions. Some stations with a cooling trend in ASOS temperature have a cooler normal for their temperature, while others with a warming trend have a warmer normal for temperature. These quantitative evaluations of ASOS effect for stations and regions can be used to reduce ASOS bias in temperature normals. This study also evaluated temperature normals for different length periods and compared them to the 30-year normal. It showed that the difference between the normals, is smaller in maritime climate than in continental temperate climate. In the former, the six- year normal describes a similar temperature variation as the 30-year normal does. In the latter, the 18-year normal starts to resemble the temperature variation that the 30-year normal describes. These results provide a theoretical basis for applying different normals in different regions. The study further compared temperature normal for different periods and identified a seasonal shift in climate change in the southwestern U.S. where the summer maximum temperature has shifted to a late summer month and the winter minimum temperature shifted to an early winter month in the past 30 years.
Ultrasonic investigation of granular materials subjected to compression and crushing.
Gheibi, Amin; Hedayat, Ahmadreza
2018-07-01
Ultrasonic wave propagation measurement has been used as a suitable technique for studying the granular materials and investigating the soil fabric structure, the grain contact stiffness, frictional strength, and inter-particle contact area. Previous studies have focused on the variations of shear and compressional wave velocities with effective stress and void ratio, and lesser effort has been made in understanding the variation of amplitude and dominant frequency of transmitted compressional waves with deformation of soil packing. In this study, continuous compressional wave transmission measurements during compaction of unconsolidated quartz sand are used to investigate the impact of soil layer deformation on ultrasonic wave properties. The test setup consisted of a loading machine to apply constant loading rate to a sand layer (granular quartz) of 6 mm thickness compressed between two forcing blocks, and an ultrasonic wave measurement system to continuously monitor the soil layer during compression up to 48 MPa normal stress. The variations in compressional wave attributes such as wave velocity, transmitted amplitude, and dominant frequency were studied as a function of the applied normal stress and the measured normal strain as well as void ratio and particle size. An increasing trend was observed for P-wave velocity, transmitted amplitude and dominant frequency with normal stress. In specimen with the largest particle size (D 50 = 0.32 mm), the wave velocity, amplitude and dominant frequency were found to increase about 230%, 4700% and 320% as the normal stress reached the value of 48 MPa. The absolute values of transmitted wave amplitude and dominant frequency were greater for specimens with smaller particle sizes while the normalized values indicate an opposite trend. The changes in the transmitted amplitude were linked to the changes in the true contact area between the particles with a transitional point in the slope of normalized amplitude, coinciding with the yield stress of the granular soil layer. The amount of grain crushing as a result of increase in the normal stress was experimentally measured and a linear correlation was found between the degree of grain crushing and the changes in the normalized dominant frequency of compressional waves. Copyright © 2018 Elsevier B.V. All rights reserved.
Normalized difference vegetation index (NDVI) variation among cultivars and environments
USDA-ARS?s Scientific Manuscript database
Although Nitrogen (N) is an essential nutrient for crop production, large preplant applications of fertilizer N can result in off-field loss that causes environmental concerns. Canopy reflectance is being investigated for use in variable rate (VR) N management. Normalized difference vegetation index...
New methods of MR image intensity standardization via generalized scale
NASA Astrophysics Data System (ADS)
Madabhushi, Anant; Udupa, Jayaram K.
2005-04-01
Image intensity standardization is a post-acquisition processing operation designed for correcting acquisition-to-acquisition signal intensity variations (non-standardness) inherent in Magnetic Resonance (MR) images. While existing standardization methods based on histogram landmarks have been shown to produce a significant gain in the similarity of resulting image intensities, their weakness is that, in some instances the same histogram-based landmark may represent one tissue, while in other cases it may represent different tissues. This is often true for diseased or abnormal patient studies in which significant changes in the image intensity characteristics may occur. In an attempt to overcome this problem, in this paper, we present two new intensity standardization methods based on the concept of generalized scale. In reference 1 we introduced the concept of generalized scale (g-scale) to overcome the shape, topological, and anisotropic constraints imposed by other local morphometric scale models. Roughly speaking, the g-scale of a voxel in a scene was defined as the largest set of voxels connected to the voxel that satisfy some homogeneity criterion. We subsequently formulated a variant of the generalized scale notion, referred to as generalized ball scale (gB-scale), which, in addition to having the advantages of g-scale, also has superior noise resistance properties. These scale concepts are utilized in this paper to accurately determine principal tissue regions within MR images, and landmarks derived from these regions are used to perform intensity standardization. The new methods were qualitatively and quantitatively evaluated on a total of 67 clinical 3D MR images corresponding to four different protocols and to normal, Multiple Sclerosis (MS), and brain tumor patient studies. The generalized scale-based methods were found to be better than the existing methods, with a significant improvement observed for severely diseased and abnormal patient studies.
Fish gelatin thin film standards for biological application of PIXE
NASA Astrophysics Data System (ADS)
Manuel, Jack E.; Rout, Bibhudutta; Szilasi, Szabolcs Z.; Bohara, Gyanendra; Deaton, James; Luyombya, Henry; Briski, Karen P.; Glass, Gary A.
2014-08-01
There exists a critical need to understand the flow and accumulation of metallic ions, both naturally occurring and those introduced to biological systems. In this paper the results of fabricating thin film elemental biological standards containing nearly any combination of trace elements in a protein matrix are presented. Because it is capable of high elemental sensitivity, particle induced X-ray emission spectrometry (PIXE) is an excellent candidate for in situ analysis of biological tissues. Additionally, the utilization of microbeam PIXE allows the determination of elemental concentrations in and around biological cells. However, obtaining elemental reference standards with the same matrix constituents as brain tissue is difficult. An excellent choice for simulating brain-like tissue is Norland® photoengraving glue which is derived from fish skin. Fish glue is water soluble, liquid at room temperature, and resistant to dilute acid. It can also be formed into a thin membrane which dries into a durable, self-supporting film. Elements of interest are introduced to the fish glue in precise volumetric additions of well quantified atomic absorption standard solutions. In this study GeoPIXE analysis package is used to quantify elements intrinsic to the fish glue as well as trace amounts of manganese added to the sample. Elastic (non-Rutherford) backscattered spectroscopy (EBS) and the 1.734 MeV proton-on-carbon 12C(p,p)12C resonance is used for a normalization scheme of the PIXE spectra to account for any discrepancies in X-ray production arising from thickness variation of the prepared standards. It is demonstrated that greater additions of the atomic absorption standard cause a viscosity reduction of the liquid fish glue resulting in thinner films but the film thickness can be monitored by using simultaneous PIXE and EBS proton data acquisition.
Paleosecular Variation of Plio-Pleistocene Lavas from the Loiyangalani Region of Kenya
NASA Astrophysics Data System (ADS)
Opdyke, N. D.; Kent, D. V.; Huang, K.; Foster, D.; Patel, J.
2008-12-01
The data reported here is part of a study of Pliocene-Pleistocene lavas in Kenya to document the paleosecular variation and time-averaged geomagnetic field direction near to the Equator. We sampled 32 sites (10 oriented cores each) in lavas to the south and the northeast of Loiyangalani that are mapped and dated as Plio-Pleistocene in age (less than ~5 Ma) and associated with Mt. Kulal and the Longipi eruption centers. The samples from this collection were returned to the US, sliced into samples and progressively demagnetized using alternating field demagnetization. The Loiyangalani sites yielded excellent results and are seemingly unaffected by lightning, which seems to be infrequent at this latitude, in this arid environment; all but one site gave acceptable data with an alpha95 of 10° or less. There are 17 reverse sites (Dec = 183.4°, Inc = 0.9°, alpha95 = 6.7°) and 15 normal sites (Dec = 358.4°, Inc = -1.2°, alpha95 = 4.7°). The reversal test is positive suggesting that the normal and reverse polarity populations both represent a reasonable time average. The site means were combined yielding an overall mean direction of Dec = 1.1°, Inc = -1.1°, alpha95 = 4.1°. The inclination is shallower than expected for a geocentric axial dipole field (delta I = -6°); accordingly, the site VGPs give a mean pole position at Lon = 205.1° E, Lat = 86.8° N, Alpha95 = 3°, which is significantly far-sided with respect to the geographic axis. The angular standard deviation of the VGPs is 9.3°, which is a relatively low angular dispersion compared to most PSVL models such as Model G.
Face landmark point tracking using LK pyramid optical flow
NASA Astrophysics Data System (ADS)
Zhang, Gang; Tang, Sikan; Li, Jiaquan
2018-04-01
LK pyramid optical flow is an effective method to implement object tracking in a video. It is used for face landmark point tracking in a video in the paper. The landmark points, i.e. outer corner of left eye, inner corner of left eye, inner corner of right eye, outer corner of right eye, tip of a nose, left corner of mouth, right corner of mouth, are considered. It is in the first frame that the landmark points are marked by hand. For subsequent frames, performance of tracking is analyzed. Two kinds of conditions are considered, i.e. single factors such as normalized case, pose variation and slowly moving, expression variation, illumination variation, occlusion, front face and rapidly moving, pose face and rapidly moving, and combination of the factors such as pose and illumination variation, pose and expression variation, pose variation and occlusion, illumination and expression variation, expression variation and occlusion. Global measures and local ones are introduced to evaluate performance of tracking under different factors or combination of the factors. The global measures contain the number of images aligned successfully, average alignment error, the number of images aligned before failure, and the local ones contain the number of images aligned successfully for components of a face, average alignment error for the components. To testify performance of tracking for face landmark points under different cases, tests are carried out for image sequences gathered by us. Results show that the LK pyramid optical flow method can implement face landmark point tracking under normalized case, expression variation, illumination variation which does not affect facial details, pose variation, and that different factors or combination of the factors have different effect on performance of alignment for different landmark points.
ERIC Educational Resources Information Center
Hilligoss, Phillip Brian
2011-01-01
This dissertation is motivated by two problems. First, existing literature characterizes patient handoff as an information transfer activity in which safety and quality are compromised by practice variation. This has prompted a movement to standardize practice. However, existing research has not closely examined how practice variations may be…
NYX mutations in four families with high myopia with or without CSNB1
Zhou, Lin; Song, Xiusheng; Li, Yin; Li, Hongyan; Dan, Handong
2015-01-01
Purpose Mutations in the NYX gene are known to cause complete congenital stationary night blindness (CSNB1), which is always accompanied by high myopia. In this study, we aimed to investigate the association between NYX mutations and high myopia with or without CSNB1. Methods Four Chinese families having high myopia with or without CSNB1 and 96 normal controls were recruited. We searched for mutations in the NYX gene using Sanger sequencing. Further analyses of the detected variations in the available family members were performed, and the frequencies of the detected variations in 96 normal controls were determined to verify our deduction. The effect of each variation on the nyctalopin protein was predicted using online tools. Results Four potential pathogenic variations in the NYX gene were found in four families with high myopia with or without CSNB1. Three of the four variants were novel (c.626G>C; c.121delG; c.335T>C). The previously identified variant, c.529_530delGCinsAT, was found in an isolated highly myopic patient and an affected brother, but the other affected brother did not carry the same variation. Further linkage analyses of this family showed a coinheritance of markers at MYP1. These four mutations were not identified in the 96 normal controls. Conclusions Our study expands the mutation spectrum of NYX for cases of high myopia with CSNB1; however, more evidence is needed to elucidate the pathogenic effects of NYX on isolated high myopia. PMID:25802485
Jaremko, Jacob L; Mabee, Myles; Swami, Vimarsha G; Jamieson, Lucy; Chow, Kelvin; Thompson, Richard B
2014-12-01
To use three-dimensional ( 3D three-dimensional ) ultrasonography (US) to quantify the alpha-angle variability due to changing probe orientation during two-dimensional ( 2D two-dimensional ) US of the infant hip and its effect on the diagnostic classification of developmental dysplasia of the hip ( DDH developmental dysplasia of the hip ). In this institutional research ethics board-approved prospective study, with parental written informed consent, 13-MHz 3D three-dimensional US was added to initial 2D two-dimensional US for 56 hips in 35 infants (mean age, 41.7 days; range, 4-112 days), 26 of whom were female (mean age, 38.7 days; range, 6-112 days) and nine of whom were male (mean age, 50.2 days; range, 4-111 days). Findings in 20 hips were normal at the initial visit and were initially inconclusive but normalized spontaneously at follow-up in 23 hips; 13 hips were treated for dysplasia. With the computer algorithm, 3D three-dimensional US data were resectioned in planes tilted in 5° increments away from a central plane, as if slowly rotating a 2D two-dimensional US probe, until resulting images no longer met Graf quality criteria. On each acceptable 2D two-dimensional image, two observers measured alpha angles, and descriptive statistics, including mean, standard deviation, and limits of agreement, were computed. Acceptable 2D two-dimensional images were produced over a range of probe orientations averaging 24° (maximum, 45°) from the central plane. Over this range, alpha-angle variation was 19° (upper limit of agreement), leading to alteration of the diagnostic category of hip dysplasia in 54% of hips scanned. Use of 3D three-dimensional US showed that alpha angles measured at routine 2D two-dimensional US of the hip can vary substantially between 2D two-dimensional scans solely because of changes in probe positioning. Not only could normal hips appear dysplastic, but dysplastic hips also could have normal alpha angles. Three-dimensional US can display the full acetabular shape, which might improve DDH developmental dysplasia of the hip assessment accuracy. © RSNA, 2014.
Variation-preserving normalization unveils blind spots in gene expression profiling
Roca, Carlos P.; Gomes, Susana I. L.; Amorim, Mónica J. B.; Scott-Fordsmand, Janeck J.
2017-01-01
RNA-Seq and gene expression microarrays provide comprehensive profiles of gene activity, but lack of reproducibility has hindered their application. A key challenge in the data analysis is the normalization of gene expression levels, which is currently performed following the implicit assumption that most genes are not differentially expressed. Here, we present a mathematical approach to normalization that makes no assumption of this sort. We have found that variation in gene expression is much larger than currently believed, and that it can be measured with available assays. Our results also explain, at least partially, the reproducibility problems encountered in transcriptomics studies. We expect that this improvement in detection will help efforts to realize the full potential of gene expression profiling, especially in analyses of cellular processes involving complex modulations of gene expression. PMID:28276435
NASA Technical Reports Server (NTRS)
Kogan, M. N.
1994-01-01
Recent progress in both the linear and nonlinear aspects of stability theory has highlighted the importance of the receptivity problem. One of the most unclear aspects of receptivity study is the receptivity of boundary-layer flow normal to vortical disturbances. Some experimental and theoretical results permit the proposition that quasi-steady outer-flow vortical disturbances may trigger by-pass transition. In present work such interaction is investigated for vorticity normal to a leading edge. The interest in these types of vortical disturbances arise from theoretical work, where it was shown that small sinusoidal variations of upstream velocity along the spanwise direction can produce significant variations in the boundary-layer profile. In the experimental part of this work, such non-uniform flow was created and the laminar-turbulent transition in this flow was investigated. The experiment was carried out in a low-turbulence direct-flow wind tunnel T-361 at the Central Aerohydrodynamic Institute (TsAGI). The non-uniform flow was produced by laminar or turbulent wakes behind a wire placed normal to the plate upstream of the leading edge. The theoretical part of the work is devoted to studying the unstable disturbance evolution in a boundary layer with strongly non-uniform velocity profiles similar to that produced by outer-flow vorticity. Specifically, the Tollmien-Schlichting wave development in the boundary layer flow with spanwise variations of velocity is investigated.
Liao, Hong-kai; Long, Jian
2011-09-01
This paper studied the variation characteristics of soil organic carbon (SOC) and different particle sizes soil particulate organic carbon (POC) in normal soil and in micro-habitats under different vegetation types in typical Karst mountain areas of southwest Guizhou. Under different vegetation types, the SOC content in normal soil and in micro-habitats was all in the order of bare land < grass < shrub < forest, with the variation range being 7.18-43.42 g x kg(-1) in normal soil and being 6.62-46.47 g x kg(-1) and 9.01-52.07 g x kg(-1) in earth surface and stone pit, respectively. The POC/MOC (mineral-associated organic carbon) ratio under different vegetation types was in the order of bare land < grass < forest < shrub. Under the same vegetation types, the POC/MOC in stone pit was the highest, as compared to that in normal soil and in earth surface. In the process of bare land-grass-shrub-forest, the contents of different particle sizes soil POC increased, while the SOC mainly existed in the forms of sand- and silt organic carbon, indicating that in Karst region, soil carbon sequestration and SOC stability were weak, soil was easily subjected to outside interference and led to organic carbon running off, and thus, soil quality had the risk of decline or degradation.
Camats, Núria; Fernández-Cancio, Mónica; Carrascosa, Antonio; Andaluz, Pilar; Albisu, M Ángeles; Clemente, María; Gussinyé, Miquel; Yeste, Diego; Audí, Laura
2012-10-01
Molecular causes of isolated severe growth hormone deficiency (ISGHD) in several genes have been established. The aim of this study was to analyse the contribution of growth hormone-releasing hormone receptor (GHRHR) gene sequence variation to GH deficiency in a series of prepubertal ISGHD patients and to normal adult height. A systematic GHRHR gene sequence analysis was performed in 69 ISGHD patients and 60 normal adult height controls (NAHC). Four GHRHR single-nucleotide polymorphisms (SNPs) were genotyped in 248 additional NAHC. An analysis was performed on individual SNPs and combined genotype associations with diagnosis in ISGHD patients and with height-SDS in NAHC. Twenty-one SNPs were found. P3, P13, P15 and P20 had not been previously described. Patients and controls shared 12 SNPs (P1, P2, P4-P11, P16 and P21). Significantly different frequencies of the heterozygous genotype and alternate allele were detected in P9 (exon 4, rs4988498) and P12 (intron 6, rs35609199); P9 heterozygous genotype frequencies were similar in patients and the shortest control group (heights between -2 and -1 SDS) and significantly different in controls (heights between -1 and +2 SDS). GHRHR P9 together with 4 GH1 SNP genotypes contributed to 6·2% of height-SDS variation in the entire 308 NAHC. This study established the GHRHR gene sequence variation map in ISGHD patients and NAHC. No evidence of GHRHR mutation contribution to ISGHD was found in this population, although P9 and P12 SNP frequencies were significantly different between ISGHD and NAHC. Thus, the gene sequence may contribute to normal adult height, as demonstrated in NAHC. © 2012 Blackwell Publishing Ltd.
Holdeman, L V; Good, I J; Moore, W E
1976-01-01
Data are presented on the distribution of 101 bacterial species and subspecies among 1,442 isolates from 25 fecal specimens from three men on: (i) their normal diet and normal living conditions, (ii) normal living conditions but eating the controlled metabolic diet designed for use in the Skylab simulation and missions, and (iii) the Skylab diet in simulated Skylab (isolation) conditions. These bacteria represent the most numerous kinds in the fecal flora. Analyses of the kinds of bacteria from each astronaut during the 5-month period showed more variation in the composition of the flora among the individual astronauts than among the eight or nine samples from each person. This observation indicates that the variations in fecal flora reported previously, but based on the study of only one specimen from each person, more certainly reflect real differences (and not daily variation) in the types of bacteria maintained by individual people. The proportions of the predominant fecal species in the astronauts were similar to those reported earlier from a Japanese-Hawaiian population and were generally insensitive to changes from the normal North American diet to the Skylab diet; only two of the most common species were affected by changes in diet. However, one of the predominant species (Bacteroides fragilis subsp. thetaiotaomicron) appeared to be affected during confinement of the men in the Skylab test chamber. Evidence is presented suggesting that an anger stress situation may have been responsible for the increase of this species simultaneously in all of the subjects studied. Phenotypic characteristics of some of the less common isolates are given. The statistical analyses used in interpretation of the results are discussed. PMID:938032
An enzyme-linked immuno-mass spectrometric assay with the substrate adenosine monophosphate.
Florentinus-Mefailoski, Angelique; Soosaipillai, Antonius; Dufresne, Jaimie; Diamandis, Eleftherios P; Marshall, John G
2015-02-01
An enzyme-linked immuno-mass spectrometric assay (ELIMSA) with the specific detection probe streptavidin conjugated to alkaline phosphatase catalyzed the production of adenosine from the substrate adenosine monophosphate (AMP) for sensitive quantification of prostate-specific antigen (PSA) by mass spectrometry. Adenosine ionized efficiently and was measured to the femtomole range by dilution and direct analysis with micro-liquid chromatography, electrospray ionization, and mass spectrometry (LC-ESI-MS). The LC-ESI-MS assay for adenosine production was shown to be linear and accurate using internal (13)C(15)N adenosine isotope dilution, internal (13)C(15)N adenosine one-point calibration, and external adenosine standard curves with close agreement. The detection limits of LC-ESI-MS for alkaline phosphatase-streptavidin (AP-SA, ∼190,000 Da) was tested by injecting 0.1 μl of a 1 pg/ml solution, i.e., 100 attograms or 526 yoctomole (5.26E-22) of the alkaline-phosphatase labeled probe on column (about 315 AP-SA molecules). The ELIMSA for PSA was linear and showed strong signals across the picogram per milliliter range and could robustly detect PSA from all of the prostatectomy patients and all of the female plasma samples that ranged as low as 70 pg/ml with strong signals well separated from the background and well within the limit of quantification of the AP-SA probe. The results of the ELIMSA assay for PSA are normal and homogenous when independently replicated with a fresh standard over multiple days, and intra and inter diem assay variation was less than 10 % of the mean. In a blind comparison, ELIMSA showed excellent agreement with, but was more sensitive than, the present gold standard commercial fluorescent ELISA, or ECL-based detection, of PSA from normal and prostatectomy samples, respectively.
X-ray clusters from a high-resolution hydrodynamic PPM simulation of the cold dark matter universe
NASA Technical Reports Server (NTRS)
Bryan, Greg L.; Cen, Renyue; Norman, Michael L.; Ostriker, Jermemiah P.; Stone, James M.
1994-01-01
A new three-dimensional hydrodynamic code based on the piecewise parabolic method (PPM) is utilized to compute the distribution of hot gas in the standard Cosmic Background Explorer (COBE)-normalized cold dark matter (CDM) universe. Utilizing periodic boundary conditions, a box with size 85 h(exp-1) Mpc, having cell size 0.31 h(exp-1) Mpc, is followed in a simulation with 270(exp 3)=10(exp 7.3) cells. Adopting standard parameters determined from COBE and light-element nucleosynthesis, Sigma(sub 8)=1.05, Omega(sub b)=0.06, we find the X-ray-emitting clusters, compute the luminosity function at several wavelengths, the temperature distribution, and estimated sizes, as well as the evolution of these quantities with redshift. The results, which are compared with those obtained in the preceding paper (Kang et al. 1994a), may be used in conjuction with ROSAT and other observational data sets. Overall, the results of the two computations are qualitatively very similar with regard to the trends of cluster properties, i.e., how the number density, radius, and temeprature depend on luminosity and redshift. The total luminosity from clusters is approximately a factor of 2 higher using the PPM code (as compared to the 'total variation diminishing' (TVD) code used in the previous paper) with the number of bright clusters higher by a similar factor. The primary conclusions of the prior paper, with regard to the power spectrum of the primeval density perturbations, are strengthened: the standard CDM model, normalized to the COBE microwave detection, predicts too many bright X-ray emitting clusters, by a factor probably in excess of 5. The comparison between observations and theoretical predictions for the evolution of cluster properties, luminosity functions, and size and temperature distributions should provide an important discriminator among competing scenarios for the development of structure in the universe.
Halogenated Peptides as Internal Standards (H-PINS)
Mirzaei, Hamid; Brusniak, Mi-Youn; Mueller, Lukas N.; Letarte, Simon; Watts, Julian D.; Aebersold, Ruedi
2009-01-01
As the application for quantitative proteomics in the life sciences has grown in recent years, so has the need for more robust and generally applicable methods for quality control and calibration. The reliability of quantitative proteomics is tightly linked to the reproducibility and stability of the analytical platforms, which are typically multicomponent (e.g. sample preparation, multistep separations, and mass spectrometry) with individual components contributing unequally to the overall system reproducibility. Variations in quantitative accuracy are thus inevitable, and quality control and calibration become essential for the assessment of the quality of the analyses themselves. Toward this end, the use of internal standards cannot only assist in the detection and removal of outlier data acquired by an irreproducible system (quality control) but can also be used for detection of changes in instruments for their subsequent performance and calibration. Here we introduce a set of halogenated peptides as internal standards. The peptides are custom designed to have properties suitable for various quality control assessments, data calibration, and normalization processes. The unique isotope distribution of halogenated peptides makes their mass spectral detection easy and unambiguous when spiked into complex peptide mixtures. In addition, they were designed to elute sequentially over an entire aqueous to organic LC gradient and to have m/z values within the commonly scanned mass range (300–1800 Da). In a series of experiments in which these peptides were spiked into an enriched N-glycosite peptide fraction (i.e. from formerly N-glycosylated intact proteins in their deglycosylated form) isolated from human plasma, we show the utility and performance of these halogenated peptides for sample preparation and LC injection quality control as well as for retention time and mass calibration. Further use of the peptides for signal intensity normalization and retention time synchronization for selected reaction monitoring experiments is also demonstrated. PMID:19411281
Analysis of Fe V and Ni V Wavelength Standards in the Vacuum Ultraviolet
NASA Astrophysics Data System (ADS)
Ward, Jacob Wolfgang; Nave, Gillian
2015-01-01
The recent publication[1] by J.C. Berengut et al. tests for a potential variation in the fine-structure constant in the presence of high gravitational potentials through spectral analysis of white-dwarf stars.The spectrum of the white-dwarf star studied in the paper, G191-B2B, has prominent Fe V and Ni V lines, which were used to determine any variation in the fine-structure constant via observed shifts in the wavelengths of Fe V and Ni V in the vacuum ultraviolet region. The results of the paper indicate no such variation, but suggest that refined laboratory values for the observed wavelengths could greatly reduce the uncertainty associated with the paper's findings.An investigation of Fe V and Ni V spectra in the vacuum ultraviolet region has been conducted to reduce wavelength uncertainties currently limiting modern astrophysical studies of this nature. The analyzed spectra were produced by a sliding spark light source with electrodes made of invar, an iron nickel alloy, at peak currents of 750-2000 A. The use of invar ensures that systematic errors in the calibration are common to both species. The spectra were recorded with the NIST Normal Incidence Vacuum Spectrograph on phosphor image plate and photographic plate detectors. Calibration was done with a Pt II spectrum produced by a Platinum Neon Hollow Cathode lamp.[1] J. C. Berengut, V. V. Flambaum, A. Ong, et al Phys. Rev. Lett. 111, 010801 (2013)
Struys, E A; Jansen, E E W; Gibson, K M; Jakobs, C
2005-01-01
Succinic semialdehyde (SSA) accumulates in the inborn error of meta- bolism succinic semialdehyde dehydrogenase deficiency owing to impaired enzymatic conversion to succinic acid. We developed a stable-isotope dilution liquid chromato- graphy-tandem mass spectrometry method for the determination of SSA in urine and cerebrospinal fluid samples. Stable-isotope-labelled [13C4]SSA, serving as internal standard, was prepared by reaction of ninhydrin with L-[13C5]glutamic acid. SSA in body fluids was converted to its dinitrophenylhydrazine (DNPH) derivative, without sample purification prior to the derivatization procedure. The DNPH derivative of SSA was injected onto a C18 analytical column and chromatography was performed by isocratic elution. Detection was accomplished by tandem mass spectrometry operating in the negative multiple-reaction monitoring mode. The limit of detection was 10 nmol/L and the calibration curves over the range 0-500 pmol of SSA showed good linearity (r2 > 0.99). The intra-day coefficient of variation (n = 10) for urine was 2.7% and inter-day coefficient of variation (n = 5) for urine was 8.5%. The average recoveries performed on two levels by enriching urine and cerebrospinal fluid samples ranged between 85 and 115%, with coefficients of variation < 8%. The method enabled the first determination of normal values for SSA in urine and pathological values of SSA in urine and cerebrospinal fluid samples derived from patients with succinic semialdehyde dehydrogenase deficiency.
NASA Astrophysics Data System (ADS)
Ren, W. X.; Lin, Y. Q.; Fang, S. E.
2011-11-01
One of the key issues in vibration-based structural health monitoring is to extract the damage-sensitive but environment-insensitive features from sampled dynamic response measurements and to carry out the statistical analysis of these features for structural damage detection. A new damage feature is proposed in this paper by using the system matrices of the forward innovation model based on the covariance-driven stochastic subspace identification of a vibrating system. To overcome the variations of the system matrices, a non-singularity transposition matrix is introduced so that the system matrices are normalized to their standard forms. For reducing the effects of modeling errors, noise and environmental variations on measured structural responses, a statistical pattern recognition paradigm is incorporated into the proposed method. The Mahalanobis and Euclidean distance decision functions of the damage feature vector are adopted by defining a statistics-based damage index. The proposed structural damage detection method is verified against one numerical signal and two numerical beams. It is demonstrated that the proposed statistics-based damage index is sensitive to damage and shows some robustness to the noise and false estimation of the system ranks. The method is capable of locating damage of the beam structures under different types of excitations. The robustness of the proposed damage detection method to the variations in environmental temperature is further validated in a companion paper by a reinforced concrete beam tested in the laboratory and a full-scale arch bridge tested in the field.
On Three-dimensional Structures in Relativistic Hydrodynamic Jets
NASA Astrophysics Data System (ADS)
Hardee, Philip E.
2000-04-01
The appearance of wavelike helical structures on steady relativistic jets is studied using a normal mode analysis of the linearized fluid equations. Helical structures produced by the normal modes scale relative to the resonant (most unstable) wavelength and not with the absolute wavelength. The resonant wavelength of the normal modes can be less than the jet radius even on highly relativistic jets. High-pressure regions helically twisted around the jet beam may be confined close to the jet surface, penetrate deeply into the jet interior, or be confined to the jet interior. The high-pressure regions range from thin and ribbon-like to thick and tubelike depending on the mode and wavelength. The wave speeds can be significantly different at different wavelengths but are less than the flow speed. The highest wave speed for the jets studied has a Lorentz factor somewhat more than half that of the underlying flow speed. A maximum pressure fluctuation criterion found through comparison between theory and a set of relativistic axisymmetric jet simulations is applied to estimate the maximum amplitudes of the helical, elliptical, and triangular normal modes. Transverse velocity fluctuations for these asymmetric modes are up to twice the amplitude of those associated with the axisymmetric pinch mode. The maximum amplitude of jet distortions and the accompanying velocity fluctuations at, for example, the resonant wavelength decreases as the Lorentz factor increases. Long-wavelength helical surface mode and shorter wavelength helical first body mode generated structures should be the most significant. Emission from high-pressure regions as they twist around the jet beam can vary significantly as a result of angular variation in the flow direction associated with normal mode structures if they are viewed at about the beaming angle θ=1/γ. Variation in the Doppler boost factor can lead to brightness asymmetries by factors up to 6 as long-wavelength helical structure produced by the helical surface mode winds around the jet. Higher order surface modes and first body modes produce less variation. Angular variation in the flow direction associated with the helical mode appears consistent with precessing jet models that have been proposed to explain the variability in 3C 273 and BL Lac object AO 0235+164. In particular, cyclic angular variation in the flow direction produced by the normal modes could produce the activity seen in BL Lac object OJ 287. Jet precession provides a mechanism for triggering the helical modes on multiple length scales, e.g., the galactic superluminal GRO J1655-40.
Tarroun, Abdullah; Bonnefoy, Marc; Bouffard-Vercelli, Juliette; Gedeon, Claire; Vallee, Bernard; Cotton, François
2007-02-01
Although mild progressive specific structural brain changes are commonly associated with normal human aging, it is unclear whether automatic or manual measurements of these structures can differentiate normal brain aging in elderly persons from patients suffering from cognitive impairment. The objective of this study was primarily to define, with a standard high resolution MRI, the range of normal linear age-specific values for the hippocampal formation (HF), and secondarily to differentiate hippocampal atrophy in normal aging from that occurring in Alzheimer disease (AD). Two MRI-based linear measurements of the hippocampal formation at the level of the head and of the tail, standardized by the cranial dimensions, were obtained from coronal and sagittal T1-weighted MR images in 25 normal elderly subjects, and 26 patients with AD. In this study, dimensions of the HF have been standardized and they revealed normal distributions for each side and each sex: the width of the hippocampal head at the level of the amygdala was 16.42 +/- 1.9 mm, and its height 7.93 +/- 1.4 mm; the width of the tail at the level of the cerebral aqueduct was 8.54 +/- 1.2 mm, and the height 5.74 +/- 0.4 mm. There were no significant differences in standardized dimensions of the HF between sides, sexes, or in comparison to head dimensions in the two groups. In addition, the median inter-observer agreement index was 93%. In contrast, the dimensions of the hippocampal formation decreased gradually with increasing age, owing to physiological atrophy, but this atrophy is more significant in the group of AD.
Understanding Emotions from Standardized Facial Expressions in Autism and Normal Development
ERIC Educational Resources Information Center
Castelli, Fulvia
2005-01-01
The study investigated the recognition of standardized facial expressions of emotion (anger, fear, disgust, happiness, sadness, surprise) at a perceptual level (experiment 1) and at a semantic level (experiments 2 and 3) in children with autism (N= 20) and normally developing children (N= 20). Results revealed that children with autism were as…
Dimitrova, Irina K.; Richer, Jennifer K.; Rudolph, Michael C.; Spoelstra, Nicole S.; Reno, Elaine M.; Medina, Theresa M.; Bradford, Andrew P.
2009-01-01
Objective To identify differentially expressed genes between fibroid and adjacent normal myometrium in an identical hormonal and genetic background. Design Array analysis of 3 leiomyomata and matched adjacent normal myometrium in a single patient. Setting University of Colorado Hospital. Patient(s) A single female undergoing medically indicated hysterectomy for symptomatic fibroids. Interventions(s) mRNA isolation and microarray analysis, reverse-transcriptase polymerase chain reaction, western blotting and immunohistochemistry. Main Outcome Measure(s) Changes in mRNA and protein levels in leiomyomata and matched normal myometrium. Result(s) Expression of 197 genes was increased and 619 decreased, significantly by at least 2 fold, in leiomyomata relative to normal myometrium. Expression profiles between tumors were similar and normal myometrial samples showed minimal variation. Changes in, and variation of, expression of selected genes were confirmed in additional normal and leiomyoma samples from multiple patients. Conclusion(s) Analysis of multiple tumors from a single patient confirmed changes in expression of genes described in previous, apparently disparate, studies and identified novel targets. Gene expression profiles in leiomyomata are consistent with increased activation of mitogenic pathways and inhibition of apoptosis. Down-regulation of genes implicated in invasion and metastasis, of cancers, was observed in fibroids. This expression pattern may underlie the benign nature of uterine leiomyomata and may aid in the differential diagnosis of leiomyosarcoma. PMID:18672237
The normal-equivalent: a patient-specific assessment of facial harmony.
Claes, P; Walters, M; Gillett, D; Vandermeulen, D; Clement, J G; Suetens, P
2013-09-01
Evidence-based practice in oral and maxillofacial surgery would greatly benefit from an objective assessment of facial harmony or gestalt. Normal reference faces have previously been introduced, but they describe harmony in facial form as an average only and fail to report on harmonic variations found between non-dysmorphic faces. In this work, facial harmony, in all its complexity, is defined using a face-space, which describes all possible variations within a non-dysmorphic population; this was sampled here, based on 400 healthy subjects. Subsequently, dysmorphometrics, which involves the measurement of morphological abnormalities, is employed to construct the normal-equivalent within the given face-space of a presented dysmorphic face. The normal-equivalent can be seen as a synthetic identical but unaffected twin that is a patient-specific and population-based normal. It is used to extract objective scores of facial discordancy. This technique, along with a comparing approach, was used on healthy subjects to establish ranges of discordancy that are accepted to be normal, as well as on two patient examples before and after surgical intervention. The specificity of the presented normal-equivalent approach was confirmed by correctly attributing abnormality and providing regional depictions of the known dysmorphologies. Furthermore, it proved to be superior to the comparing approach. Copyright © 2013 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Realized Volatility Analysis in A Spin Model of Financial Markets
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
We calculate the realized volatility of returns in the spin model of financial markets and examine the returns standardized by the realized volatility. We find that moments of the standardized returns agree with the theoretical values of standard normal variables. This is the first evidence that the return distributions of the spin financial markets are consistent with a finite-variance of mixture of normal distributions that is also observed empirically in real financial markets.
NASA Astrophysics Data System (ADS)
Karolina, Rahmi; Panatap Simanjuntak, Murydrischy
2018-03-01
Self Compacting Concrete (SCC) is a technology which is developing today in which concrete solidifies by itself without using vibrator. Casting conventional concrete which has a lot of reinforcement bars sometimes finds difficulty in achieving optimal solidity. The method used to solve this problem is by using SCC technology. SCC was made by using filler, volcanic ash, and lime ash as the filling materials so that the concrete became more solid and hollow space could be filled up. The variation of using these two materials was 10%, 15%, 20%, and 25% of the cementitious mass and using 1% of superplasticizer from cementitious material. The supporting testing was done by using the test when the concrete was still fluid and when it was solid. Malleable concrete was tested by using EFNARC 2002 standard in slump flow test, v-funnel test, l-shaped box test, and j-ring test to obtain filling ability and passing ability. In this malleable lime concrete test, there was the decrease, compared with normal SCC concrete without adding volcanic ash and lime ash. Testing was also done in solid concrete in compressive strength, tensile strength, and concrete absorption. The result of the testing showed that the optimum tensile strength in Variation 1, without volcanic ash and lime ash – with 1% of superplasticizer was 39.556 MPa, the optimum tensile strength in Variation 1, without volcanic ash and lime ash- with 1% of super-plasticizer was 3.563 MPa, while the value of optimum absorption which occurred in Variation 5 (25% of volcanic ash + 25% of lime ash + 50% of cement + 1% of superplasticizer) was 1.313%. This was caused by the addition of volcanic ash and lime ash which had high water absorption.
Birla, S; Khadgawat, R; Jyotsna, V P; Jain, V; Garg, M K; Bhalla, A S; Sharma, A
2016-12-01
Growth hormone deficiency (GHD) results from variations affecting the production and release of growth hormone (GH) and is of 2 types: isolated growth hormone deficiency (IGHD) and combined pituitary hormone deficiency (CPHD). IGHD results from mutations in GH1 and GHRHR while CPHD is associated with defects in transcription factor genes PROP1 , POU1F1 , and HESX1. The present study reports on screening of POU1F1 , PROP1 , and HESX1 in CPHD patients and the novel variations identified. Fifty-one CPHD patients from 49 unrelated families clinically diagnosed on the basis of biochemical and imaging investigations along with 100 controls were enrolled. Detailed family history was noted from all participants and 5 ml blood samples drawn were processed for DNA isolation followed by direct sequencing of POU1F1 , PROP1 , and HESX1 genes. Of the 51 patients, 8 were females and 43 were males. Mean height standard deviation score (SDS) and weight SDS were -5.50 and -2.76, respectively. Thirty-six of the 51 patients underwent MRI of which 9 (25%) had normal pituitary structure and morphology while 27 (75%) showed abnormalities. Molecular analysis revealed 10 (20%) patients to have POU1F1 and PROP1 mutations/variations of which 5 were novel and 2 previously reported. No mutations were identified in HESX1. The novel variations identified were absent in the 100 healthy individuals screened and the control database Exome Aggregation Consortium (ExAC). Reported POU1F1 and PROP1 mutation hotspots were absent in our patients. Instead, novel POU1F1 changes were identified suggesting existence of a distinct mutation spectrum in our population. © Georg Thieme Verlag KG Stuttgart · New York.
Precision Composite Space Structures
2007-10-15
large structures. 15. SUBJECT TERMS Composite materials, dimensional stability, microcracking, thermal expansion , space structures, degradation...Figure 32. Variation of normalized coefficients of thermal expansion α11(n), α22(n), and α33(n) with normalized crack density of an AS4/3501-6...coefficients of thermal expansion α11(n), α22(n), and α33(n) with normalized crack density of an AS4/3501-6 composite lamina with a fiber volume
Bilateral bifid mandibular canal
Sheikhi, Mahnaz; Badrian, Hamid; Ghorbanizadeh, Sajad
2012-01-01
One of the normal interesting variations that we may encounter in the mandible is bifid mandibular canal. This condition can lead to difficulties when performing mandibular anesthesia or during extraction of lower third molar, placement of implants, and surgery in the mandible. Therefore diagnosis of this variation is sometimes very important and necessary. PMID:23814555
Gasdynamic Inlet Isolation in Rotating Detonation Engine
2010-12-01
2D Total Variation Diminishing (TVD): Continuous Riemann Solver Minimum Dissipation: LHS & RHS Activate pressure switch : Supersonic Activate...Total Variation Diminishing (TVD) limiter: Continuous Riemann Solver Minimum Dissipation: LHS & RHS Activate pressure switch : Supersonic Activate...Continuous 94 Riemann Solver Minimum Dissipation: LHS & RHS Activate pressure switch : Supersonic Activate pressure gradient switch: Normal
INTERINDIVIDUAL VARIATION IN THE METABOLISM OF ARSENIC IN HUMAN HEPATOCYTES
The liver is the major site for the enzymatic methylation of inorganic arsenic (iAs) in humans. Primary cultures of normal human hepatocytes isolated from tissue obtained at surgery or from donor livers have been used to study interindividual variation in the capacity of live...
Townsend, Janice A; Brannon, Robert B; Cheramie, Toby; Hagan, Joseph
2013-01-01
The median maxillary labial frenum (MMLF) is a normal anatomic structure with inherent morphologic variations. This study sought to evaluate the prevalence of those variations in a diverse ethnic population and to educate practitioners about the prevalence of MMLF variations to prevent unnecessary biopsies. This study included adult, adolescent, and child patients at the Louisiana State University Health Science Center School of Dentistry. Among the 284 subjects examined, frenum normale was the most common frenum classification, followed by frenum with nodule and frenum with appendix. Most nodules were found in the intermediate third of the MMLF, while appendices were mainly found in the labial third. The prevalence of an appendix was significantly higher (P < 0.001) in Caucasians compared to African-Americans. The prevalence of nodules was marginally higher (P = 0.096) in Caucasians than in African-Americans. No other statistically significant differences were found with regard to ethnicity. Additionally, nodules and appendices on the MMLF were identified in all age groups, and may become more common with increasing age. The authors determined that variations of the MMLF are inherent and do not represent a pathologic condition, nor do they require biopsy for diagnostic purposes.
Hope, A.S.; Boynton, W.L.; Stow, D.A.; Douglas, David C.
2003-01-01
Interannual above-ground production patterns are characterized for three tundra ecosystems in the Kuparuk River watershed of Alaska using NOAA-AVHRR Normalized Difference Vegetation Index (NDVI) data. NDVI values integrated over each growing season (SINDVI) were used to represent seasonal production patterns between 1989 and 1996. Spatial differences in ecosystem production were expected to follow north-south climatic and soil gradients, while interannual differences in production were expected to vary with variations in seasonal precipitation and temperature. It was hypothesized that the increased vegetation growth in high latitudes between 1981 and 1991 previously reported would continue through the period of investigation for the study watershed. Zonal differences in vegetation production were confirmed but interannual variations did not covary with seasonal precipitation or temperature totals. A sharp reduction in the SINDVI in 1992 followed by a consistent increase up to 1996 led to a further hypothesis that the interannual variations in SINDVI were associated with variations in stratospheric optical depth. Using published stratospheric optical depth values derived from the SAGE and SAGE-II satellites, it is demonstrated that variations in these depths are likely the primary cause of SINDVI interannual variability.
Normal Perceptual Sensitivity Arising From Weakly Reflective Cone Photoreceptors
Bruce, Kady S.; Harmening, Wolf M.; Langston, Bradley R.; Tuten, William S.; Roorda, Austin; Sincich, Lawrence C.
2015-01-01
Purpose To determine the light sensitivity of poorly reflective cones observed in retinas of normal subjects, and to establish a relationship between cone reflectivity and perceptual threshold. Methods Five subjects (four male, one female) with normal vision were imaged longitudinally (7–26 imaging sessions, representing 82–896 days) using adaptive optics scanning laser ophthalmoscopy (AOSLO) to monitor cone reflectance. Ten cones with unusually low reflectivity, as well as 10 normally reflective cones serving as controls, were targeted for perceptual testing. Cone-sized stimuli were delivered to the targeted cones and luminance increment thresholds were quantified. Thresholds were measured three to five times per session for each cone in the 10 pairs, all located 2.2 to 3.3° from the center of gaze. Results Compared with other cones in the same retinal area, three of 10 monitored dark cones were persistently poorly reflective, while seven occasionally manifested normal reflectance. Tested psychophysically, all 10 dark cones had thresholds comparable with those from normally reflecting cones measured concurrently (P = 0.49). The variation observed in dark cone thresholds also matched the wide variation seen in a large population (n = 56 cone pairs, six subjects) of normal cones; in the latter, no correlation was found between cone reflectivity and threshold (P = 0.0502). Conclusions Low cone reflectance cannot be used as a reliable indicator of cone sensitivity to light in normal retinas. To improve assessment of early retinal pathology, other diagnostic criteria should be employed along with imaging and cone-based microperimetry. PMID:26193919
7 CFR 42.109 - Sampling plans for normal condition of container inspection, Tables I and I-A.
Code of Federal Regulations, 2010 CFR
2010-01-01
... AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS STANDARDS FOR CONDITION OF FOOD CONTAINERS Procedures...
40 CFR 406.50 - Applicability; description of the normal rice milling subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... normal rice milling subcategory. 406.50 Section 406.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GRAIN MILLS POINT SOURCE CATEGORY Normal Rice Milling Subcategory § 406.50 Applicability; description of the normal rice milling subcategory. The...
ALCOHOL CONTENT VARIATION OF BAR AND RESTAURANT DRINKS IN NORTHERN CALIFORNIA
Kerr, William C.; Patterson, Deidre; Koenen, Mary Albert; Greenfield, Thomas K.
2008-01-01
Objective To estimate the average of, and sources of variation in, the alcohol content of drinks served on-premise in 10 Northern California counties. Methods Focus groups of bartenders were conducted to evaluate potential sources of drink alcohol content variation. In the main study, 80 establishments were visited by a team of research personnel who purchased and measured the volume of particular beer, wine and spirits drinks. Brand or analysis of a sample of the drink was used to determine the alcohol concentration by volume. Results The average wine drink was found to be 43% larger than a standard drink with no difference between red and white wine. The average draught beer was 22% larger than the standard. Spirits drinks differed by type with the average shot being equal to one standard drink while mixed drinks were 42% larger. Variation in alcohol content was particularly wide for wine and mixed spirits drinks. No significant differences in mean drink alcohol content were seen by county for beer or spirits but one county was lower than two others for wine. Conclusions On premise drinks typically contained more alcohol than the standard drink with the exception of shots and bottled beers. Wine and mixed spirits drinks were the largest with nearly 1.5 times the alcohol of a standard drink on average. Consumers should be made aware of these substantial differences and key sources of variation in drink alcohol content and research studies should utilize this information in the interpretation of reported numbers of drinks. PMID:18616674
Betty Petersen Memorial Library - NCWCP Publications - NWS
Filters to Variational Statistical Analysis with Spatially Inhomogeneous Covariances (.PDF file) 432 2001 file) 456 2008 Purser, R. James Normalization Of The Diffusive Filters That Represent The Inhomogeneous file) 457 2008 Purser, R. James Normalization Of The Diffusive Filters That Represent The Inhomogeneous
Enhanced Cortical Connectivity in Absolute Pitch Musicians: A Model for Local Hyperconnectivity
ERIC Educational Resources Information Center
Loui, Psyche; Li, H. Charles; Hohmann, Anja; Schlaug, Gottfried
2011-01-01
Connectivity in the human brain has received increased scientific interest in recent years. Although connection disorders can affect perception, production, learning, and memory, few studies have associated brain connectivity with graded variations in human behavior, especially among normal individuals. One group of normal individuals who possess…
Seasonal changes in carbohydrate levels in roots of sugar maple
Philip M. Wargo; Philip M. Wargo
1971-01-01
This study was done to determine the normal complement of individual carbohydrates present in roots of sugar maples duringthe year and to obtain, as a basis for future comparison, an estimate of the normal variation and range of concentrations of individual carbohydrates in the roots during the year.
NASA Astrophysics Data System (ADS)
Thomas, Siti A.; Empaling, Shirly; Darlis, Nofrizalidris; Osman, Kahar; Dillon, Jeswant; Taib, Ishkrizat; Khudzari, Ahmad Zahran Md
2017-09-01
Aortic cannulation has been the gold standard for maintaining cardiovascular function during open heart surgery while being connected onto the heart lung machine. These cannulation produces high velocity outflow which may lead to adverse effect on patient condition, especially sandblasting effect on aorta wall and blood cells damage. This paper reports a novel design that was able to decrease high velocity outflow. There were three design factors of that was investigated. The design factors consist of the cannula type, the flow rate, and the cannula tip design which result in 12 variations. The cannulae type used were the spiral flow inducing cannula and the standard cannula. The flow rates are varied from three to five litres per minute (lpm). Parameters for each cannula variation included maximum velocity within the aorta, pressure drop, wall shear stress (WSS) area exceeding 15 Pa, and impinging velocity on the aorta wall were evaluated. Based on the result, spiral flow inducing cannulae is proposed as a better alternatives due to its ability to reduce outflow velocity. Meanwhile, the pressure drop of all variations are less than the limit of 100 mmHg, although standard cannulae yielded better result. All cannulae show low reading of wall shear stress which decrease the possibilities for atherogenesis formation. In conclusion, as far as velocity is concerned, spiral flow is better compared to standard flow across all cannulae variations.
Trabado, Séverine; Al-Salameh, Abdallah; Croixmarie, Vincent; Masson, Perrine; Corruble, Emmanuelle; Fève, Bruno; Colle, Romain; Ripoll, Laurent; Walther, Bernard; Boursier-Neyret, Claire; Werner, Erwan; Becquemont, Laurent; Chanson, Philippe
2017-01-01
Metabolomic approaches are increasingly used to identify new disease biomarkers, yet normal values of many plasma metabolites remain poorly defined. The aim of this study was to define the "normal" metabolome in healthy volunteers. We included 800 French volunteers aged between 18 and 86, equally distributed according to sex, free of any medication and considered healthy on the basis of their medical history, clinical examination and standard laboratory tests. We quantified 185 plasma metabolites, including amino acids, biogenic amines, acylcarnitines, phosphatidylcholines, sphingomyelins and hexose, using tandem mass spectrometry with the Biocrates AbsoluteIDQ p180 kit. Principal components analysis was applied to identify the main factors responsible for metabolome variability and orthogonal projection to latent structures analysis was employed to confirm the observed patterns and identify pattern-related metabolites. We established a plasma metabolite reference dataset for 144/185 metabolites. Total blood cholesterol, gender and age were identified as the principal factors explaining metabolome variability. High total blood cholesterol levels were associated with higher plasma sphingomyelins and phosphatidylcholines concentrations. Compared to women, men had higher concentrations of creatinine, branched-chain amino acids and lysophosphatidylcholines, and lower concentrations of sphingomyelins and phosphatidylcholines. Elderly healthy subjects had higher sphingomyelins and phosphatidylcholines plasma levels than young subjects. We established reference human metabolome values in a large and well-defined population of French healthy volunteers. This study provides an essential baseline for defining the "normal" metabolome and its main sources of variation.
Using color histogram normalization for recovering chromatic illumination-changed images.
Pei, S C; Tseng, C L; Wu, C C
2001-11-01
We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.
Friction coefficient and effective interference at the implant-bone interface.
Damm, Niklas B; Morlock, Michael M; Bishop, Nicholas E
2015-09-18
Although the contact pressure increases during implantation of a wedge-shaped implant, friction coefficients tend to be measured under constant contact pressure, as endorsed in standard procedures. Abrasion and plastic deformation of the bone during implantation are rarely reported, although they define the effective interference, by reducing the nominal interference between implant and bone cavity. In this study radial forces were analysed during simulated implantation and explantation of angled porous and polished implant surfaces against trabecular bone specimens, to determine the corresponding friction coefficients. Permanent deformation was also analysed to determine the effective interference after implantation. For the most porous surface tested, the friction coefficient initially increased with increasing normal contact stress during implantation and then decreased at higher contact stresses. For a less porous surface, the friction coefficient increased continually with normal contact stress during implantation but did not reach the peak magnitude measured for the rougher surface. Friction coefficients for the polished surface were independent of normal contact stress and much lower than for the porous surfaces. Friction coefficients were slightly lower for pull-out than for push-in for the porous surfaces but not for the polished surface. The effective interference was as little as 30% of the nominal interference for the porous surfaces. The determined variation in friction coefficient with radial contact force, as well as the loss of interference during implantation will enable a more accurate representation of implant press-fitting for simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Contrella, Benjamin; Tustison, Nicholas J.; Altes, Talissa A.; Avants, Brian B.; Mugler, John P., III; de Lange, Eduard E.
2012-03-01
Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisition; the latter complicating longitudinal investigations of ventilation variation with respiratory alterations. To address these potential difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of lung hyperpolarized 3He MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling of the combined intensity histogram as a Gaussian mixture model and modulating the spatial heterogeneity tissue class assignments through Markov random field modeling. Evaluation of the algorithm was retrospectively applied to a cohort of 10 asthmatics between 19-25 years old in which spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions from 7 to 467 days (mean +/- standard deviation: 185 +/- 37.2) later. Several techniques were evaluated for matching intensities between the pre and post-methacholine images with the 95th percentile value histogram matching demonstrating superior correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
Pan-European comparison of candidate distributions for climatological drought indices, SPI and SPEI
NASA Astrophysics Data System (ADS)
Stagge, James; Tallaksen, Lena; Gudmundsson, Lukas; Van Loon, Anne; Stahl, Kerstin
2013-04-01
Drought indices are vital to objectively quantify and compare drought severity, duration, and extent across regions with varied climatic and hydrologic regimes. The Standardized Precipitation Index (SPI), a well-reviewed meterological drought index recommended by the WMO, and its more recent water balance variant, the Standardized Precipitation-Evapotranspiration Index (SPEI) both rely on selection of univariate probability distributions to normalize the index, allowing for comparisons across climates. The SPI, considered a universal meteorological drought index, measures anomalies in precipitation, whereas the SPEI measures anomalies in climatic water balance (precipitation minus potential evapotranspiration), a more comprehensive measure of water availability that incorporates temperature. Many reviewers recommend use of the gamma (Pearson Type III) distribution for SPI normalization, while developers of the SPEI recommend use of the three parameter log-logistic distribution, based on point observation validation. Before the SPEI can be implemented at the pan-European scale, it is necessary to further validate the index using a range of candidate distributions to determine sensitivity to distribution selection, identify recommended distributions, and highlight those instances where a given distribution may not be valid. This study rigorously compares a suite of candidate probability distributions using WATCH Forcing Data, a global, historical (1958-2001) climate dataset based on ERA40 reanalysis with 0.5 x 0.5 degree resolution and bias-correction based on CRU-TS2.1 observations. Using maximum likelihood estimation, alternative candidate distributions are fit for the SPI and SPEI across the range of European climate zones. When evaluated at this scale, the gamma distribution for the SPI results in negatively skewed values, exaggerating the index severity of extreme dry conditions, while decreasing the index severity of extreme high precipitation. This bias is particularly notable for shorter aggregation periods (1-6 months) during the summer months in southern Europe (below 45° latitude), and can partially be attributed to distribution fitting difficulties in semi-arid regions where monthly precipitation totals cluster near zero. By contrast, the SPEI has potential for avoiding this fitting difficulty because it is not bounded by zero. However, the recommended log-logistic distribution produces index values with less variation than the standard normal distribution. Among the alternative candidate distributions, the best fit distribution and the distribution parameters vary in space and time, suggesting regional commonalities within hydroclimatic regimes, as discussed further in the presentation.
Porosity variations in and around normal fault zones: implications for fault seal and geomechanics
NASA Astrophysics Data System (ADS)
Healy, David; Neilson, Joyce; Farrell, Natalie; Timms, Nick; Wilson, Moyra
2015-04-01
Porosity forms the building blocks for permeability, exerts a significant influence on the acoustic response of rocks to elastic waves, and fundamentally influences rock strength. And yet, published studies of porosity around fault zones or in faulted rock are relatively rare, and are hugely dominated by those of fault zone permeability. We present new data from detailed studies of porosity variations around normal faults in sandstone and limestone. We have developed an integrated approach to porosity characterisation in faulted rock exploiting different techniques to understand variations in the data. From systematic samples taken across exposed normal faults in limestone (Malta) and sandstone (Scotland), we combine digital image analysis on thin sections (optical and electron microscopy), core plug analysis (He porosimetry) and mercury injection capillary pressures (MICP). Our sampling includes representative material from undeformed protoliths and fault rocks from the footwall and hanging wall. Fault-related porosity can produce anisotropic permeability with a 'fast' direction parallel to the slip vector in a sandstone-hosted normal fault. Undeformed sandstones in the same unit exhibit maximum permeability in a sub-horizontal direction parallel to lamination in dune-bedded sandstones. Fault-related deformation produces anisotropic pores and pore networks with long axes aligned sub-vertically and this controls the permeability anisotropy, even under confining pressures up to 100 MPa. Fault-related porosity also has interesting consequences for the elastic properties and velocity structure of normal fault zones. Relationships between texture, pore type and acoustic velocity have been well documented in undeformed limestone. We have extended this work to include the effects of faulting on carbonate textures, pore types and P- and S-wave velocities (Vp, Vs) using a suite of normal fault zones in Malta, with displacements ranging from 0.5 to 90 m. Our results show a clear lithofacies control on the Vp-porosity and the Vs-Vp relationships for faulted limestones. Using porosity patterns quantified in naturally deformed rocks we have modelled their effect on the mechanical stability of fluid-saturated fault zones in the subsurface. Poroelasticity theory predicts that variations in fluid pressure could influence fault stability. Anisotropic patterns of porosity in and around fault zones can - depending on their orientation and intensity - lead to an increase in fault stability in response to a rise in fluid pressure, and a decrease in fault stability for a drop in fluid pressure. These predictions are the exact opposite of the accepted role of effective stress in fault stability. Our work has provided new data on the spatial and statistical variation of porosity in fault zones. Traditionally considered as an isotropic and scalar value, porosity and pore networks are better considered as anisotropic and as scale-dependent statistical distributions. The geological processes controlling the evolution of porosity are complex. Quantifying patterns of porosity variation is an essential first step in a wider quest to better understand deformation processes in and around normal fault zones. Understanding porosity patterns will help us to make more useful predictive tools for all agencies involved in the study and management of fluids in the subsurface.
Cholewicki, Jacek; van Dieën, Jaap; Lee, Angela S.; Reeves, N. Peter
2011-01-01
The problem with normalizing EMG data from patients with painful symptoms (e.g. low back pain) is that such patients may be unwilling or unable to perform maximum exertions. Furthermore, the normalization to a reference signal, obtained from a maximal or sub-maximal task, tends to mask differences that might exist as a result of pathology. Therefore, we presented a novel method (GAIN method) for normalizing trunk EMG data that overcomes both problems. The GAIN method does not require maximal exertions (MVC) and tends to preserve distinct features in the muscle recruitment patterns for various tasks. Ten healthy subjects performed various isometric trunk exertions, while EMG data from 10 muscles were recorded and later normalized using the GAIN and MVC methods. The MVC method resulted in smaller variation between subjects when tasks were executed at the three relative force levels (10%, 20%, and 30% MVC), while the GAIN method resulted in smaller variation between subjects when the tasks were executed at the three absolute force levels (50 N, 100 N, and 145 N). This outcome implies that the MVC method provides a relative measure of muscle effort, while the GAIN-normalized EMG data gives an estimate of the absolute muscle force. Therefore, the GAIN-normalized EMG data tends to preserve the EMG differences between subjects in the way they recruit their muscles to execute various tasks, while the MVC-normalized data will tend to suppress such differences. The appropriate choice of the EMG normalization method will depend on the specific question that an experimenter is attempting to answer. PMID:21665489
Crop Surveillance Demonstration Using a Near-Daily MODIS Derived Vegetation Index Time Series
NASA Technical Reports Server (NTRS)
McKellip, Rodney; Ryan, Robert E.; Blonski, Slawomir; Prados, Don
2005-01-01
Effective response to crop disease outbreaks requires rapid identification and diagnosis of an event. A near-daily vegetation index product, such as a Normalized Difference Vegetation Index (NDVI), at moderate spatial resolution may serve as a good method for monitoring quick-acting diseases. NASA s Moderate Resolution Imaging Spectroradiometer (MODIS) instrument flown on the Terra and Aqua satellites has the temporal, spatial, and spectral properties to make it an excellent coarse-resolution data source for rapid, comprehensive surveillance of agricultural areas. A proof-of-concept wide area crop surveillance system using daily MODIS imagery was developed and tested on a set of San Joaquin cotton fields over a growing season. This area was chosen in part because excellent ground truth data were readily available. Preliminary results indicate that, at least in the southwestern part of the United States, near-daily NDVI products can be generated that show the natural variations in the crops as well as specific crop practices. Various filtering methods were evaluated and compared with standard MOD13 NDVI MODIS products. We observed that specific chemical applications that produce defoliation, which would have been missed using the standard 16-day product, were easily detectable with the filtered daily NDVI products.
Varelis, P; Jeskelis, R
2008-10-01
For the determination of melamine and cyanuric acid the labelled internal standards [(13)C(3)]-melamine and [(13)C(3)]-cyanuric acid were synthesized using the common substrate [(13)C(3)]-cyanuric chloride by reaction with ammonia and acidified water, respectively. Standards with excellent isotopic and chemical purities were obtained in acceptable yields. These compounds were used to develop an isotope dilution liquid chromatography/mass spectrometry (LC/MS) method to determine melamine and cyanuric acid in catfish, pork, chicken, and pet food. The method involved extraction into aqueous methanol, liquid-liquid extraction and ion exchange solid phase clean-up, with normal phase high-performance liquid chromatography (HPLC) in the so-called hydrophilic interaction mode. The method had a limit of detection (LOD) of 10 microg kg(-1) for both melamine and cyanuric acid in the four foods with a percentage coefficient of variation (CV) of less than 10%. The recovery of the method at this level was in the range of 87-110% and 96-110% for melamine and cyanuric acid, respectively.
Protective clothing ensembles and physical employment standards.
McLellan, Tom M; Havenith, George
2016-06-01
Physical employment standards (PESs) exist for certain occupational groups that also require the use of protective clothing ensembles (PCEs) during their normal work. This review addresses whether these current PESs appropriately incorporate the physiological burden associated with wearing PCEs during respective tasks. Metabolic heat production increases because of wearing PCE; this increase is greater than that because of simply the weight of the clothing and can vary 2-fold among individuals. This variation negates a simple adjustment to the PES for the effect of the clothing on metabolic rate. As a result, PES testing that only simulates the weight of the clothing and protective equipment does not adequately accommodate this effect. The physiological heat strain associated with the use of PCEs is also not addressed with current PESs. Typically the selection tests of a PES lasts less than 20 min, whereas the requirement for use of PCE in the workplace may approach 1 h before cooling strategies can be employed. One option that might be considered is to construct a heat stress test that requires new recruits and incumbents to work for a predetermined duration while exposed to a warm environmental temperature while wearing the PCE.
Whiteley, Greg S; Derry, Chris; Glasbey, Trevor; Fahey, Paul
2015-06-01
To investigate the reliability of commercial ATP bioluminometers and to document precision and variability measurements using known and quantitated standard materials. Four commercially branded ATP bioluminometers and their consumables were subjected to a series of controlled studies with quantitated materials in multiple repetitions of dilution series. The individual dilutions were applied directly to ATP swabs. To assess precision and reproducibility, each dilution step was tested in triplicate or quadruplicate and the RLU reading from each test point was recorded. Results across the multiple dilution series were normalized using the coefficient of variation. The results for pure ATP and bacterial ATP from suspensions of Staphylococcus epidermidis and Pseudomonas aeruginosa are presented graphically. The data indicate that precision and reproducibility are poor across all brands tested. Standard deviation was as high as 50% of the mean for all brands, and in the field users are not provided any indication of this level of imprecision. The variability of commercial ATP bioluminometers and their consumables is unacceptably high with the current technical configuration. The advantage of speed of response is undermined by instrument imprecision expressed in the numerical scale of relative light units (RLU).
Tamburini, Elena; Mamolini, Elisabetta; De Bastiani, Morena; Marchetti, Maria Gabriella
2016-01-01
Fusarium proliferatum is considered to be a pathogen of many economically important plants, including garlic. The objective of this research was to apply near-infrared spectroscopy (NIRS) to rapidly determine fungal concentration in intact garlic cloves, avoiding the laborious and time-consuming procedures of traditional assays. Preventive detection of infection before seeding is of great interest for farmers, because it could avoid serious losses of yield during harvesting and storage. Spectra were collected on 95 garlic cloves, divided in five classes of infection (from 1-healthy to 5-very highly infected) in the range of fungal concentration 0.34–7231.15 ppb. Calibration and cross validation models were developed with partial least squares regression (PLSR) on pretreated spectra (standard normal variate, SNV, and derivatives), providing good accuracy in prediction, with a coefficient of determination (R2) of 0.829 and 0.774, respectively, a standard error of calibration (SEC) of 615.17 ppb, and a standard error of cross validation (SECV) of 717.41 ppb. The calibration model was then used to predict fungal concentration in unknown samples, peeled and unpeeled. The results showed that NIRS could be used as a reliable tool to directly detect and quantify F. proliferatum infection in peeled intact garlic cloves, but the presence of the external peel strongly affected the prediction reliability. PMID:27428978
Variability in Wechsler Adult Intelligence Scale-IV subtest performance across age.
Wisdom, Nick M; Mignogna, Joseph; Collins, Robert L
2012-06-01
Normal Wechsler Adult Intelligence Scale (WAIS)-IV performance relative to average normative scores alone can be an oversimplification as this fails to recognize disparate subtest heterogeneity that occurs with increasing age. The purpose of the present study is to characterize the patterns of raw score change and associated variability on WAIS-IV subtests across age groupings. Raw WAIS-IV subtest means and standard deviations for each age group were tabulated from the WAIS-IV normative manual along with the coefficient of variation (CV), a measure of score dispersion calculated by dividing the standard deviation by the mean and multiplying by 100. The CV further informs the magnitude of variability represented by each standard deviation. Raw mean scores predictably decreased across age groups. Increased variability was noted in Perceptual Reasoning and Processing Speed Index subtests, as Block Design, Matrix Reasoning, Picture Completion, Symbol Search, and Coding had CV percentage increases ranging from 56% to 98%. In contrast, Working Memory and Verbal Comprehension subtests were more homogeneous with Digit Span, Comprehension, Information, and Similarities percentage of the mean increases ranging from 32% to 43%. Little change in the CV was noted on Cancellation, Arithmetic, Letter/Number Sequencing, Figure Weights, Visual Puzzles, and Vocabulary subtests (<14%). A thorough understanding of age-related subtest variability will help to identify test limitations as well as further our understanding of cognitive domains which remain relatively steady versus those which steadily decline.
2010-01-01
SIAH proteins are the human members of an highly conserved family of E3 ubiquitin ligases. Several data suggest that SIAH proteins may have a role in tumor suppression and apoptosis. Previously, we reported that SIAH-1 induces the degradation of Kid (KIF22), a chromokinesin protein implicated in the normal progression of mitosis and meiosis, by the ubiquitin proteasome pathway. In human breast cancer cells stably transfected with SIAH-1, Kid/KIF22 protein level was markedly reduced whereas, the Kid/KIF22 mRNA level was increased. This interaction has been further elucidated through analyzing SIAH and Kid/KIF22 expression in both paired normal and tumor tissues and cell lines. It was observed that SIAH-1 protein is widely expressed in different normal tissues, and in cells lines but showing some differences in western blotting profiles. Immunofluorescence microscopy shows that the intracellular distribution of SIAH-1 and Kid/KIF22 appears to be modified in human tumor tissues compared to normal controls. When mRNA expression of SIAH-1 and Kid/KIF22 was analyzed by real-time PCR in normal and cancer breast tissues from the same patient, a large variation in the number of mRNA copies was detected between the different samples. In most cases, SIAH-1 mRNA is decreased in tumor tissues compared to their normal counterparts. Interestingly, in all breast tumor tissues analyzed, variations in the Kid/KIF22 mRNA levels mirrored those seen with SIAH-1 mRNAs. This concerted variation of SIAH-1 and Kid/KIF22 messengers suggests the existence of an additional level of control than the previously described protein-protein interaction and protein stability regulation. Our observations also underline the need to re-evaluate the results of gene expression obtained by qRT-PCR and relate it to the protein expression and cellular localization when matched normal and tumoral tissues are analyzed. PMID:20144232
Quantifying hypoxia in human cancers using static PET imaging.
Taylor, Edward; Yeung, Ivan; Keller, Harald; Wouters, Bradley G; Milosevic, Michael; Hedley, David W; Jaffray, David A
2016-11-21
Compared to FDG, the signal of 18 F-labelled hypoxia-sensitive tracers in tumours is low. This means that in addition to the presence of hypoxic cells, transport properties contribute significantly to the uptake signal in static PET images. This sensitivity to transport must be minimized in order for static PET to provide a reliable standard for hypoxia quantification. A dynamic compartmental model based on a reaction-diffusion formalism was developed to interpret tracer pharmacokinetics and applied to static images of FAZA in twenty patients with pancreatic cancer. We use our model to identify tumour properties-well-perfused without substantial necrosis or partitioning-for which static PET images can reliably quantify hypoxia. Normalizing the measured activity in a tumour voxel by the value in blood leads to a reduction in the sensitivity to variations in 'inter-corporal' transport properties-blood volume and clearance rate-as well as imaging study protocols. Normalization thus enhances the correlation between static PET images and the FAZA binding rate K 3 , a quantity which quantifies hypoxia in a biologically significant way. The ratio of FAZA uptake in spinal muscle and blood can vary substantially across patients due to long muscle equilibration times. Normalized static PET images of hypoxia-sensitive tracers can reliably quantify hypoxia for homogeneously well-perfused tumours with minimal tissue partitioning. The ideal normalizing reference tissue is blood, either drawn from the patient before PET scanning or imaged using PET. If blood is not available, uniform, homogeneously well-perfused muscle can be used. For tumours that are not homogeneously well-perfused or for which partitioning is significant, only an analysis of dynamic PET scans can reliably quantify hypoxia.
Quantifying hypoxia in human cancers using static PET imaging
NASA Astrophysics Data System (ADS)
Taylor, Edward; Yeung, Ivan; Keller, Harald; Wouters, Bradley G.; Milosevic, Michael; Hedley, David W.; Jaffray, David A.
2016-11-01
Compared to FDG, the signal of 18F-labelled hypoxia-sensitive tracers in tumours is low. This means that in addition to the presence of hypoxic cells, transport properties contribute significantly to the uptake signal in static PET images. This sensitivity to transport must be minimized in order for static PET to provide a reliable standard for hypoxia quantification. A dynamic compartmental model based on a reaction-diffusion formalism was developed to interpret tracer pharmacokinetics and applied to static images of FAZA in twenty patients with pancreatic cancer. We use our model to identify tumour properties—well-perfused without substantial necrosis or partitioning—for which static PET images can reliably quantify hypoxia. Normalizing the measured activity in a tumour voxel by the value in blood leads to a reduction in the sensitivity to variations in ‘inter-corporal’ transport properties—blood volume and clearance rate—as well as imaging study protocols. Normalization thus enhances the correlation between static PET images and the FAZA binding rate K 3, a quantity which quantifies hypoxia in a biologically significant way. The ratio of FAZA uptake in spinal muscle and blood can vary substantially across patients due to long muscle equilibration times. Normalized static PET images of hypoxia-sensitive tracers can reliably quantify hypoxia for homogeneously well-perfused tumours with minimal tissue partitioning. The ideal normalizing reference tissue is blood, either drawn from the patient before PET scanning or imaged using PET. If blood is not available, uniform, homogeneously well-perfused muscle can be used. For tumours that are not homogeneously well-perfused or for which partitioning is significant, only an analysis of dynamic PET scans can reliably quantify hypoxia.
A Gaussian method to improve work-of-breathing calculations.
Petrini, M F; Evans, J N; Wall, M A; Norman, J R
1995-01-01
The work of breathing is a calculated index of pulmonary function in ventilated patients that may be useful in deciding when to wean and when to extubate. However, the accuracy of the calculated work of breathing of the patient (WOBp) can suffer from artifacts introduced by coughing, swallowing, and other non-breathing maneuvers. The WOBp in this case will include not only the usual work of inspiration, but also the work of performing these non-breathing maneuvers. The authors developed a method to objectively eliminate the calculated work of these movements from the work of breathing, based on fitting to a Gaussian curve the variable P, which is obtained from the difference between the esophageal pressure change and the airway pressure change during each breath. In spontaneously breathing adults the normal breaths fit the Gaussian curve, while breaths that contain non-breathing maneuvers do not. In this Gaussian breath-elimination method (GM), breaths that are two standard deviations from that mean obtained by the fit are eliminated. For normally breathing control adult subjects, GM had little effect on WOBp, reducing it from 0.49 to 0.47 J/L (n = 8), while there was a 40% reduction in the coefficient of variation. Non-breathing maneuvers were simulated by coughing, which increased WOBp to 0.88 (n = 6); with the GM correction, WOBp was 0.50 J/L, a value not significantly different from that of normal breathing. Occlusion also increased WOBp to 0.60 J/L, but GM-corrected WOBp was 0.51 J/L, a normal value. As predicted, doubling the respiratory rate did not change the WOBp before or after the GM correction.(ABSTRACT TRUNCATED AT 250 WORDS)
NASA Technical Reports Server (NTRS)
Slemp, Wesley C. H.; Kapania, Rakesh K.; Tessler, Alexander
2010-01-01
Computation of interlaminar stresses from the higher-order shear and normal deformable beam theory and the refined zigzag theory was performed using the Sinc method based on Interpolation of Highest Derivative. The Sinc method based on Interpolation of Highest Derivative was proposed as an efficient method for determining through-the-thickness variations of interlaminar stresses from one- and two-dimensional analysis by integration of the equilibrium equations of three-dimensional elasticity. However, the use of traditional equivalent single layer theories often results in inaccuracies near the boundaries and when the lamina have extremely large differences in material properties. Interlaminar stresses in symmetric cross-ply laminated beams were obtained by solving the higher-order shear and normal deformable beam theory and the refined zigzag theory with the Sinc method based on Interpolation of Highest Derivative. Interlaminar stresses and bending stresses from the present approach were compared with a detailed finite element solution obtained by ABAQUS/Standard. The results illustrate the ease with which the Sinc method based on Interpolation of Highest Derivative can be used to obtain the through-the-thickness distributions of interlaminar stresses from the beam theories. Moreover, the results indicate that the refined zigzag theory is a substantial improvement over the Timoshenko beam theory due to the piecewise continuous displacement field which more accurately represents interlaminar discontinuities in the strain field. The higher-order shear and normal deformable beam theory more accurately captures the interlaminar stresses at the ends of the beam because it allows transverse normal strain. However, the continuous nature of the displacement field requires a large number of monomial terms before the interlaminar stresses are computed as accurately as the refined zigzag theory.
NASA Astrophysics Data System (ADS)
Rosa, P. T.; Fontes-Pereira, A. J.; Matusin, D. P.; von Krüger, M. A.; Pereira, W. C. A.
Bone healing is a complex process that stars after the occurrence of a fracture to restore bone optimal conditions. The gold standards for bone status evaluation are the dual energy X-ray absorptiometry and the computerized tomography. Ultrasound-based technologies have some advantages as compared to X-ray technologies: nonionizing radiation, portability and lower cost among others. Quantitative ultrasound (QUS) has been proposed in literature as a new tool to follow up the fracture healing process. QUS relates the ultrasound propagation with the bone tissue condition (normal or pathological), so, a change in wave propagation may indicate a variation in tissue properties. The most used QUS parameters are time-of-flight (TOF) and sound pressure level (SPL) of the first arriving signal (FAS). In this work, the FAS is the well known lateral wave. The aim of this work is to evaluate the relation of the TOF and SPL of the FAS and fracture inclination trace in two stages of bone healing using computational simulations. Four fracture geometries were used: normal and oblique with 30, 45 and 60 degrees. The TOF average values were 63.23 μs, 63.14 μs, 63.03 μs 62.94 μs for normal, 30, 45 and 60 degrees respectively and average SPL values were -3.83 dB -4.32 dB, -4.78 dB, -6.19 dB for normal, 30, 45 and 60 degrees respectively. The results show an inverse pattern between the amplitude and time-of-flight. These values seem to be sensible to fracture inclination trace, and in future, can be used to characterize it.
Brain extraction from normal and pathological images: A joint PCA/Image-Reconstruction approach.
Han, Xu; Kwitt, Roland; Aylward, Stephen; Bakas, Spyridon; Menze, Bjoern; Asturias, Alexander; Vespa, Paul; Van Horn, John; Niethammer, Marc
2018-08-01
Brain extraction from 3D medical images is a common pre-processing step. A variety of approaches exist, but they are frequently only designed to perform brain extraction from images without strong pathologies. Extracting the brain from images exhibiting strong pathologies, for example, the presence of a brain tumor or of a traumatic brain injury (TBI), is challenging. In such cases, tissue appearance may substantially deviate from normal tissue appearance and hence violates algorithmic assumptions for standard approaches to brain extraction; consequently, the brain may not be correctly extracted. This paper proposes a brain extraction approach which can explicitly account for pathologies by jointly modeling normal tissue appearance and pathologies. Specifically, our model uses a three-part image decomposition: (1) normal tissue appearance is captured by principal component analysis (PCA), (2) pathologies are captured via a total variation term, and (3) the skull and surrounding tissue is captured by a sparsity term. Due to its convexity, the resulting decomposition model allows for efficient optimization. Decomposition and image registration steps are alternated to allow statistical modeling of normal tissue appearance in a fixed atlas coordinate system. As a beneficial side effect, the decomposition model allows for the identification of potentially pathological areas and the reconstruction of a quasi-normal image in atlas space. We demonstrate the effectiveness of our approach on four datasets: the publicly available IBSR and LPBA40 datasets which show normal image appearance, the BRATS dataset containing images with brain tumors, and a dataset containing clinical TBI images. We compare the performance with other popular brain extraction models: ROBEX, BEaST, MASS, BET, BSE and a recently proposed deep learning approach. Our model performs better than these competing approaches on all four datasets. Specifically, our model achieves the best median (97.11) and mean (96.88) Dice scores over all datasets. The two best performing competitors, ROBEX and MASS, achieve scores of 96.23/95.62 and 96.67/94.25 respectively. Hence, our approach is an effective method for high quality brain extraction for a wide variety of images. Copyright © 2018 Elsevier Inc. All rights reserved.
Shakal, A.; Haddadi, H.; Graizer, V.; Lin, K.; Huang, M.
2006-01-01
The 2004 Parkfield, California, earthquake was recorded by an extensive set of strong-motion instruments well positioned to record details of the motion in the near-fault region, where there has previously been very little recorded data. The strong-motion measurements obtained are highly varied, with significant variations occurring over only a few kilometers. The peak accelerations in the near fault region range from 0.13g to over 1.8g (one of the highest acceleration recorded to date, exceeding the capacity of the recording instrument The largest accelerations occurred near the northwest end of the inferred rupture zone. These motions are consistent with directivity for a fault rupturing from the hypocenter near Gold Hill toward the northwest. However, accelerations up to 0.8g were also observed in the opposite direction, at the south end of the Cholame Valley near Highway 41, consistent with bilateral rupture, with rupture southeast of the hypocenter. Several stations near and over the rupturing fault recorded relatively weak motions, consistent with seemingly paradoxical observations of low shaking damage near strike-slip faults. This event had more ground-motion observations within 10 km of the fault than many other earthquakes combined. At moderate distances peak horizontal ground acceleration (PGA) values dropped off more rapidly with distance than standard relationships. At close-in distance the wide variation of PGA suggests a distance-dependent sigma may be important to consider. The near-fault ground-motion variation is greater than that assumed in ShakeMap interpolations, based on the existing set of observed data. Higher density of stations near faults may be the only means in the near future to reduce uncertainty in the interpolations. Outside of the near-fault zone the variance is closer to that assumed. This set of data provides the first case where near-fault radiation has been observed at an adequate number of stations around the fault to allow detailed study of the fault-normal and fault-parallel motion and the near-field S-wave radiation. The fault-normal motions are significant, but they are not large at the central part of the fault, away from the ends. The fault-normal and fault-parallel motions drop off quite rapidly with distance from the fault. Analysis of directivity indicates increased values of peak velocity in the rupture direction. No such dependence is observed in the peak acceleration, except for stations close to the strike of the fault near and beyond the ends of the faulting.
Marui, Shuri; Misawa, Ayaka; Tanaka, Yuki; Nagashima, Kei
2017-02-22
The aims of this study were to (1) evaluate whether recently introduced methods of measuring axillary temperature are reliable, (2) examine if individuals know their baseline body temperature based on an actual measurement, and (3) assess the factors affecting axillary temperature and reevaluate the meaning of the axillary temperature. Subjects were healthy young men and women (n = 76 and n = 65, respectively). Three measurements were obtained: (1) axillary temperature using a digital thermometer in a predictive mode requiring 10 s (T ax-10 s ), (2) axillary temperature using a digital thermometer in a standard mode requiring 10 min (T ax-10 min ), and (3) tympanic membrane temperature continuously measured by infrared thermometry (T ty ). The subjects answered questions about eating and exercise habits, sleep and menstrual cycles, and thermoregulation and reported what they believed their regular body temperature to be (T reg ). T reg , T ax-10 s , T ax-10 min , and T ty were 36.2 ± 0.4, 36.4 ± 0.5, 36.5 ± 0.4, and 36.8 ± 0.3 °C (mean ± SD), respectively. There were correlations between T ty and T ax-10 min , T ty and T ax-10 s , and T ax-10 min and T ax-10 s (r = .62, r = .46, and r = .59, respectively, P < .001), but not between T reg and T ax-10 s (r = .11, P = .20). A lower T ax-10 s was associated with smaller body mass indices and irregular menstrual cycles. Modern devices for measuring axillary temperature may have changed the range of body temperature that is recognized as normal. Core body temperature variations estimated by tympanic measurements were smaller than those estimated by axillary measurements. This variation of axillary temperature may be due to changes in the measurement methods introduced by modern devices and techniques. However, axillary temperature values correlated well with those of tympanic measurements, suggesting that the technique may reliably report an individual's state of health. It is important for individuals to know their baseline axillary temperature to evaluate subsequent temperature measurements as normal or abnormal. Moreover, axillary temperature variations may, in part, reflect fat mass and changes due to the menstrual cycle.
A Quantitative Study of Simulated Bicuspid Aortic Valves
NASA Astrophysics Data System (ADS)
Szeto, Kai; Nguyen, Tran; Rodriguez, Javier; Pastuszko, Peter; Nigam, Vishal; Lasheras, Juan
2010-11-01
Previous studies have shown that congentially bicuspid aortic valves develop degenerative diseases earlier than the standard trileaflet, but the causes are not well understood. It has been hypothesized that the asymmetrical flow patterns and turbulence found in the bileaflet valves together with abnormally high levels of strain may result in an early thickening and eventually calcification and stenosis. Central to this hypothesis is the need for a precise quantification of the differences in the strain rate levels between bileaflets and trileaflet valves. We present here some in-vitro dynamic measurements of the spatial variation of the strain rate in pig aortic vales conducted in a left ventricular heart flow simulator device. We measure the strain rate of each leaflet during the whole cardiac cycle using phase-locked stereoscopic three-dimensional image surface reconstruction techniques. The bicuspid case is simulated by surgically stitching two of the leaflets in a normal valve.
Fluorescent Cell Barcoding for Multiplex Flow Cytometry
Krutzik, Peter O.; Clutter, Matthew R.; Trejo, Angelica; Nolan, Garry P.
2011-01-01
Fluorescent Cell Barcoding (FCB) enables high throughput, i.e. high content flow cytometry by multiplexing samples prior to staining and acquisition on the cytometer. Individual cell samples are barcoded, or labeled, with unique signatures of fluorescent dyes so that they can be mixed together, stained, and analyzed as a single sample. By mixing samples prior to staining, antibody consumption is typically reduced 10 to 100-fold. In addition, data robustness is increased through the combination of control and treated samples, which minimizes pipetting error, staining variation, and the need for normalization. Finally, speed of acquisition is enhanced, enabling large profiling experiments to be run with standard cytometer hardware. In this unit, we outline the steps necessary to apply the FCB method to cell lines as well as primary peripheral blood samples. Important technical considerations such as choice of barcoding dyes, concentrations, labeling buffers, compensation, and software analysis are discussed. PMID:21207359
Fusion yield: Guderley model and Tsallis statistics
NASA Astrophysics Data System (ADS)
Haubold, H. J.; Kumar, D.
2011-02-01
The reaction rate probability integral is extended from Maxwell-Boltzmann approach to a more general approach by using the pathway model introduced by Mathai in 2005 (A pathway to matrix-variate gamma and normal densities. Linear Algebr. Appl. 396, 317-328). The extended thermonuclear reaction rate is obtained in the closed form via a Meijer's G-function and the so-obtained G-function is represented as a solution of a homogeneous linear differential equation. A physical model for the hydrodynamical process in a fusion plasma-compressed and laser-driven spherical shock wave is used for evaluating the fusion energy integral by integrating the extended thermonuclear reaction rate integral over the temperature. The result obtained is compared with the standard fusion yield obtained by Haubold and John in 1981 (Analytical representation of the thermonuclear reaction rate and fusion energy production in a spherical plasma shock wave. Plasma Phys. 23, 399-411). An interpretation for the pathway parameter is also given.
Atmospheric Delta14C Record from Wellington (1954-1993)
Manning, M R. [National Institute of Water and Atmospheric Research, Ltd., Lower Hutt, New Zealand; Melhuish, W. H. [National Institute of Water and Atmospheric Research, Ltd., Lower Hutt, New Zealand
1994-09-01
Trays containing ~2 L of 5 normal NaOH carbonate-free solution are typically exposed for intervals of 1-2 weeks, and the atmospheric CO2 absorbed during that time is recovered by acid evolution. Considerable fractionation occurs during absorption into the NaOH solution, and the standard fractionation correction (Stuiver and Polach 1977) is used to determine a δ 14C value corrected to δ 13C = -25 per mil. Some samples reported here were taken using BaOH solution or with extended tray exposure times. These variations in procedure do not appear to affect the results (Manning et al. 1990). A few early measurements were made by bubbling air through columns of NaOH for several hours. These samples have higher δ 13C values. Greater details on the sampling methods are provided in Manning et al. (1990) and Rafter and Fergusson (1959).
Simultaneous measurements of L- and S-band tree shadowing for space-Earth communications
NASA Technical Reports Server (NTRS)
Vogel, Wolfhard J.; Torrence, Geoffrey W.; Lin, Hsin P.
1995-01-01
We present results from simultaneous L- and S-Band slant-path fade measurements through trees. One circularly-polarized antenna was used at each end of the dual-frequency link to provide information on the correlation of tree shadowing at 1620 and 2500 MHz. Fades were measured laterally in the shadow region with 5 cm spacing. Fade differences between L- and S-Band had a normal distribution with low means and standard deviations from 5.2 to 7.5 dB. Spatial variations occurred with periods larger than 1-2 wavelengths. Swept measurements over 160 MHz spans showed that the stdv. of power as function of frequency increased from approximately 1-6 dB at locations with mean fades of 4 and 20 dB, respectively. At a 5 dB fade, the central 90% of fade slopes were within a range of 0.7 (1.9) dB/MHz at L-(S-) Band.
Whole Genome Analysis of a Wine Yeast Strain
Hauser, Nicole C.; Fellenberg, Kurt; Gil, Rosario; Bastuck, Sonja; Hoheisel, Jörg D.
2001-01-01
Saccharomyces cerevisiae strains frequently exhibit rather specific phenotypic features needed for adaptation to a special environment. Wine yeast strains are able to ferment musts, for example, while other industrial or laboratory strains fail to do so. The genetic differences that characterize wine yeast strains are poorly understood, however. As a first search of genetic differences between wine and laboratory strains, we performed DNA-array analyses on the typical wine yeast strain T73 and the standard laboratory background in S288c. Our analysis shows that even under normal conditions, logarithmic growth in YPD medium, the two strains have expression patterns that differ significantly in more than 40 genes. Subsequent studies indicated that these differences correlate with small changes in promoter regions or variations in gene copy number. Blotting copy numbers vs. transcript levels produced patterns, which were specific for the individual strains and could be used for a characterization of unknown samples. PMID:18628902
Saha, Kaushik; Som, Sibendu; Battistoni, Michele
2017-01-01
Flash boiling is known to be a common phenomenon for gasoline direct injection (GDI) engine sprays. The Homogeneous Relaxation Model has been adopted in many recent numerical studies for predicting cavitation and flash boiling. The Homogeneous Relaxation Model is assessed in this study. Sensitivity analysis of the model parameters has been documented to infer the driving factors for the flash-boiling predictions. The model parameters have been varied over a range and the differences in predictions of the extent of flashing have been studied. Apart from flashing in the near nozzle regions, mild cavitation is also predicted inside the gasoline injectors.more » The variation in the predicted time scales through the model parameters for predicting these two different thermodynamic phenomena (cavitation, flash) have been elaborated in this study. Turbulence model effects have also been investigated by comparing predictions from the standard and Re-Normalization Group (RNG) k-ε turbulence models.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saha, Kaushik; Som, Sibendu; Battistoni, Michele
Flash boiling is known to be a common phenomenon for gasoline direct injection (GDI) engine sprays. The Homogeneous Relaxation Model has been adopted in many recent numerical studies for predicting cavitation and flash boiling. The Homogeneous Relaxation Model is assessed in this study. Sensitivity analysis of the model parameters has been documented to infer the driving factors for the flash-boiling predictions. The model parameters have been varied over a range and the differences in predictions of the extent of flashing have been studied. Apart from flashing in the near nozzle regions, mild cavitation is also predicted inside the gasoline injectors.more » The variation in the predicted time scales through the model parameters for predicting these two different thermodynamic phenomena (cavitation, flash) have been elaborated in this study. Turbulence model effects have also been investigated by comparing predictions from the standard and Re-Normalization Group (RNG) k-ε turbulence models.« less
Jones, Jasmin Niedo; Berninger, Virginia Wise
2016-01-01
Three new approaches to writing assessment are introduced. First, strategies for generating the very next sentence are assessed in reference to the local level as well as the evolving text level of composing in progress. Second, strategies for translating thought into written language are coded with transcription (spelling) skill—low, average, or high—held constant. Third, instead of describing composing skill in reference to a single normed score for age or grade in a standardization sample at a static time in development, translation is studied longitudinally when children are in grades 1, 3, and 5 (ages 6, 8, 10) or grades 3, 5, and 7 (ages 8, 10, 12). Applications of the results are discussed for assessment and instruction grounded in levels and generativity of written language and normal variation in typically developing writers. PMID:28127525
Rentz, Dorene M; Parra Rodriguez, Mario A; Amariglio, Rebecca; Stern, Yaakov; Sperling, Reisa; Ferris, Steven
2013-01-01
Recently published guidelines suggest that the most opportune time to treat individuals with Alzheimer's disease is during the preclinical phase of the disease. This is a phase when individuals are defined as clinically normal but exhibit evidence of amyloidosis, neurodegeneration and subtle cognitive/behavioral decline. While our standard cognitive tests are useful for detecting cognitive decline at the stage of mild cognitive impairment, they were not designed for detecting the subtle cognitive variations associated with this biomarker stage of preclinical Alzheimer's disease. However, neuropsychologists are attempting to meet this challenge by designing newer cognitive measures and questionnaires derived from translational efforts in neuroimaging, cognitive neuroscience and clinical/experimental neuropsychology. This review is a selective summary of several novel, potentially promising, approaches that are being explored for detecting early cognitive evidence of preclinical Alzheimer's disease in presymptomatic individuals.