The prior statistics of object colors.
Koenderink, Jan J
2010-02-01
The prior statistics of object colors is of much interest because extensive statistical investigations of reflectance spectra reveal highly non-uniform structure in color space common to several very different databases. This common structure is due to the visual system rather than to the statistics of environmental structure. Analysis involves an investigation of the proper sample space of spectral reflectance factors and of the statistical consequences of the projection of spectral reflectances on the color solid. Even in the case of reflectance statistics that are translationally invariant with respect to the wavelength dimension, the statistics of object colors is highly non-uniform. The qualitative nature of this non-uniformity is due to trichromacy.
Statistical analysis of tire treadwear data
DOT National Transportation Integrated Search
1985-03-01
This report describes the results of a statistical analysis of the treadwear variability of radial tires subjected to the Uniform Tire Quality Grading (UTQG) standard. Because unexplained variability in the treadwear portion of the standard cou...
Safety Management Information Statistics (SAMIS) - 1993 Annual Report
DOT National Transportation Integrated Search
1995-05-01
The 1993 Safety Management Information Statistics (SAMIS) report, now in its fourth year of publication, is a compilation and analysis of transit accident and casualty statistics uniformly collected from approximately 400 transit agencies throughout ...
Analysis of Uniform Random Numbers Generated by RANDU and URN Using Ten Different Seeds.
The statistical properties of the numbers generated by two uniform random number generators, RANDU and URN, each using ten different seeds are... The testing is performed on a sequence of 50,000 numbers generated by each uniform random number generator using each of the ten seeds. (Author)
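Since the entry describes testing fixed-length sequences from each generator, a small self-contained sketch may help. The RANDU recurrence below is the well-known IBM generator; the bin count and seed are illustrative assumptions, and the report's URN generator is not reproduced here.

```python
# Hedged sketch: reproduce the classic RANDU generator and apply a
# chi-square test of uniformity, in the spirit of the study described above.
# RANDU: x_{n+1} = 65539 * x_n mod 2**31 (a notoriously poor generator).
import numpy as np
from scipy import stats

def randu(seed, n):
    """Generate n pseudo-uniform numbers with the RANDU recurrence."""
    out = np.empty(n)
    x = seed
    for i in range(n):
        x = (65539 * x) % 2**31
        out[i] = x / 2**31
    return out

u = randu(seed=1, n=50_000)   # the report tested 50,000 numbers per seed

# Chi-square goodness-of-fit against U(0,1) with 100 equal-width bins
# (bin count is an assumption, not the report's test battery).
counts, _ = np.histogram(u, bins=100, range=(0.0, 1.0))
chi2, p = stats.chisquare(counts)
print(f"chi-square = {chi2:.1f}, p = {p:.3f}")
```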
A new statistic for the analysis of circular data in gamma-ray astronomy
NASA Technical Reports Server (NTRS)
Protheroe, R. J.
1985-01-01
A new statistic is proposed for the analysis of circular data. The statistic is designed specifically for situations where a test of uniformity is required which is powerful against alternatives in which a small fraction of the observations is grouped in a small range of directions, or phases.
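For context, a minimal implementation of the classical Rayleigh test, the standard uniformity test that the proposed statistic is designed to outperform when only a small fraction of events is tightly grouped, might look like the sketch below; the p-value uses the simple first-order approximation, and the paper's new statistic itself is not reproduced.

```python
# Hedged sketch of the standard Rayleigh test for uniformity of phases.
import numpy as np

def rayleigh_test(phases):
    """Rayleigh test; phases in radians. Returns (Z, approximate p-value)."""
    n = len(phases)
    C, S = np.cos(phases).sum(), np.sin(phases).sum()
    R_bar = np.hypot(C, S) / n          # mean resultant length
    Z = n * R_bar**2
    p = np.exp(-Z)                      # first-order approximation, moderate n
    return Z, p

rng = np.random.default_rng(0)
# 95% uniform background plus 5% of events grouped near phase 0.3 rad,
# the alternative the paper's statistic targets.
phases = np.concatenate([rng.uniform(0, 2*np.pi, 950),
                         rng.normal(0.3, 0.05, 50) % (2*np.pi)])
print(rayleigh_test(phases))
```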
Restoration of MRI data for intensity non-uniformities using local high order intensity statistics
Hadjidemetriou, Stathis; Studholme, Colin; Mueller, Susanne; Weiner, Michael; Schuff, Norbert
2008-01-01
MRI at high magnetic fields (>3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes termed the “bias field”. These lead to non-biological intensity non-uniformities across the image. They can complicate further image analysis such as registration and tissue segmentation. Existing methods for intensity uniformity restoration have been optimized for 1.5 T, but they are less effective for 3.0 T MRI, and not at all satisfactory for higher fields. Also, many of the existing restoration algorithms require a brain template or use a prior atlas, which can restrict their practical applicability. In this study an effective intensity uniformity restoration algorithm has been developed based on non-parametric statistics of high order local intensity co-occurrences. These statistics are restored with a non-stationary Wiener filter. The algorithm also assumes a smooth non-uniformity and is stable. It does not require a prior atlas and is robust to variations in anatomy. In geriatric brain imaging it is robust to variations such as enlarged ventricles and low contrast-to-noise ratio. The co-occurrence statistics improve robustness to whole head images with pronounced non-uniformities present in high field acquisitions. Its significantly improved performance and lower time requirements have been demonstrated by comparing it to the very commonly used N3 algorithm on BrainWeb MR simulator images as well as on real 4 T human head images. PMID:18621568
Lukášová, Ivana; Muselík, Jan; Franc, Aleš; Goněc, Roman; Mika, Filip; Vetchý, David
2017-11-15
Warfarin is an intensively discussed drug with a narrow therapeutic range. There have been cases of bleeding attributed to varying content or altered quality of the active substance. Factor analysis is useful for finding suitable technological parameters leading to high content uniformity of tablets containing a low amount of active substance. The composition of the tabletting blend and the technological procedure were set with respect to factor analysis of previously published results. The correctness of the set parameters was checked by manufacturing and evaluating tablets containing 1-10 mg of warfarin sodium. The robustness of the suggested technology was checked by using a "worst case scenario" and statistical evaluation of European Pharmacopoeia (EP) content uniformity limits with respect to the Bergum division and the process capability index (Cpk). To evaluate the quality of the active substance and tablets, a dissolution method was developed (water; EP apparatus II; 25 rpm), allowing for statistical comparison of dissolution profiles. The results obtained prove the suitability of factor analysis to optimize the composition with respect to batches manufactured previously, and thus the use of meta-analysis under industrial conditions is feasible. Copyright © 2017 Elsevier B.V. All rights reserved.
Modeling of non-uniform spatial arrangement of fibers in a ceramic matrix composite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, S.; Tewari, A.; Gokhale, A.M.
In unidirectional fiber reinforced composites, the spatial arrangement of fibers is often non-uniform. These non-uniformities are linked to the processing conditions, and they affect the properties of the composite. In this contribution, a recently developed digital image analysis technique is used to quantify the non-uniform spatial arrangement of Nicalon fibers in a ceramic matrix composite (CMC). These quantitative data are utilized to develop a six-parameter computer-simulated microstructure model that is statistically equivalent to the non-uniform microstructure of the CMC. The simulated microstructure can be utilized as an RVE (representative volume element) for micro-mechanical modeling studies.
Hofer, Jeffrey D; Rauk, Adam P
2017-02-01
The purpose of this work was to develop a straightforward and robust approach to analyze and summarize the ability of content uniformity data to meet different criteria. A robust Bayesian statistical analysis methodology is presented which provides a concise and easily interpretable visual summary of the content uniformity analysis results. The visualization displays individual batch analysis results and shows whether there is high confidence that different content uniformity criteria could be met a high percentage of the time in the future. The 3 tests assessed are as follows: (a) United States Pharmacopeia Uniformity of Dosage Units <905>, (b) a specific ASTM E2810 Sampling Plan 1 criterion to potentially be used for routine release testing, and (c) another specific ASTM E2810 Sampling Plan 2 criterion to potentially be used for process validation. The approach shown here could readily be used to create similar result summaries for other potential criteria. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Uniformity testing: assessment of a centralized web-based uniformity analysis system.
Klempa, Meaghan C
2011-06-01
Uniformity testing is performed daily to ensure adequate camera performance before clinical use. The aim of this study is to assess the reliability of Beth Israel Deaconess Medical Center's locally built, centralized, Web-based uniformity analysis system by examining the differences between manufacturer and Web-based National Electrical Manufacturers Association integral uniformity calculations measured in the useful field of view (FOV) and the central FOV. Manufacturer and Web-based integral uniformity calculations measured in the useful FOV and the central FOV were recorded over a 30-d period for 4 cameras from 3 different manufacturers. These data were then statistically analyzed. The differences between the uniformity calculations were computed, in addition to the means and the SDs of these differences for each head of each camera. There was a correlation between the manufacturer and Web-based integral uniformity calculations in the useful FOV and the central FOV over the 30-d period. The average differences between the manufacturer and Web-based useful FOV calculations ranged from -0.30 to 0.099, with SD ranging from 0.092 to 0.32. For the central FOV calculations, the average differences ranged from -0.163 to 0.055, with SD ranging from 0.074 to 0.24. Most of the uniformity calculations computed by this centralized Web-based uniformity analysis system are comparable to the manufacturers' calculations, suggesting that this system is reasonably reliable and effective. This finding is important because centralized Web-based uniformity analysis systems are advantageous in that they test camera performance in the same manner regardless of the manufacturer.
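The NEMA integral-uniformity figure that both the manufacturer and the Web-based system compute reduces to a max/min contrast over a masked, smoothed flood image. A sketch follows; the 9-point smoothing kernel and the circular FOV masks are assumptions, not a faithful copy of either implementation.

```python
# Hedged sketch of a NEMA-style integral uniformity (IU) calculation:
# IU = 100 * (max - min) / (max + min) over the smoothed, masked flood image.
import numpy as np
from scipy import ndimage

def integral_uniformity(flood, mask):
    """flood: 2-D flood-field image; mask: boolean UFOV (or CFOV) mask."""
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]]) / 16.0       # assumed 9-point NEMA filter
    smoothed = ndimage.convolve(flood.astype(float), kernel)
    vals = smoothed[mask]
    return 100.0 * (vals.max() - vals.min()) / (vals.max() + vals.min())

rng = np.random.default_rng(1)
img = rng.poisson(10_000, size=(64, 64)).astype(float)   # toy flood image
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 31.5, xx - 31.5)
ufov = r < 30        # circular useful FOV (assumed geometry)
cfov = r < 22.5      # central FOV taken as 75% of the UFOV radius
print(integral_uniformity(img, ufov), integral_uniformity(img, cfov))
```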
14 CFR Section 19 - Uniform Classification of Operating Statistics
Code of Federal Regulations, 2011 CFR
2011-01-01
Section 19, Aeronautics and Space; Office of the Secretary, Department of Transportation; Air Carriers; Operating Statistics Classifications; Section 19, Uniform Classification of Operating Statistics.
14 CFR Section 19 - Uniform Classification of Operating Statistics
Code of Federal Regulations, 2010 CFR
2010-01-01
Section 19, Aeronautics and Space; Office of the Secretary, Department of Transportation; Air Carriers; Operating Statistics Classifications; Section 19, Uniform Classification of Operating Statistics.
NASA Astrophysics Data System (ADS)
Boning, Duane S.; Chung, James E.
1998-11-01
Advanced process technology will require more detailed understanding and tighter control of variation in devices and interconnects. The purpose of statistical metrology is to provide methods to measure and characterize variation, to model systematic and random components of that variation, and to understand the impact of variation on both yield and performance of advanced circuits. Of particular concern are spatial or pattern-dependencies within individual chips; such systematic variation within the chip can have a much larger impact on performance than wafer-level random variation. Statistical metrology methods will play an important role in the creation of design rules for advanced technologies. For example, a key issue in multilayer interconnect is the uniformity of interlevel dielectric (ILD) thickness within the chip. For the case of ILD thickness, we describe phases of statistical metrology development and application to understanding and modeling thickness variation arising from chemical-mechanical polishing (CMP). These phases include screening experiments, including design of test structures and test masks to gather electrical or optical data, techniques for statistical decomposition and analysis of the data, and approaches to calibrating empirical and physical variation models. These models can be integrated with circuit CAD tools to evaluate different process integration or design rule strategies. One focus for the generation of interconnect design rules is guidelines for the use of "dummy fill" or "metal fill" to improve the uniformity of underlying metal density and thus improve the uniformity of oxide thickness within the die. Trade-offs that can be evaluated via statistical metrology include the improvements to uniformity possible versus the effect of increased capacitance due to additional metal.
NASA Astrophysics Data System (ADS)
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki
2017-02-01
Local statistics are widely utilized for quantification and image processing of OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a trade-off between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes that preserve tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). This method forms superpixels by clustering image pixels in a six-dimensional (6-D) feature space (two spatial dimensions and four dimensions of optical features). All image pixels were clustered based on their spatial proximity and optical feature similarity. The optical features are scattering, OCT-A, birefringence, and DOPU. The method is applied to retinal OCT. Generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and retinal pigment epithelium. Hence, a superpixel can be utilized as a local statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis; since it reduces the number of pixels to be analyzed, it reduces the computational cost of such processing.
[Labour factors associated with post-traumatic stress in uniformed workers in Medellín].
González-Penagos, Catalina; Moreno-Bedoya, Juan P; Berbesi-Fernández, Dedsy Y; Segura-Cardona, Angela M
2013-01-01
Determining the labor factors associated with post-traumatic stress in uniformed workers in Medellín. A cross-sectional study was made of 124 uniformed workers aged 20 to 48 years. A survey was administered using an adult post-traumatic stress instrument which had been validated in Medellín. Statistical analysis was carried out. Post-traumatic stress disorder risk prevalence was 52.2%. Multivariate analysis showed that the highest-risk situations were those related to a previous mental health diagnosis (PR=7.67), working schedule (PR=4.24), violent episodes (PR=3.59) and community relationships (PR=2.73). A person's current labor situation seemed to be a risk factor for developing post-traumatic stress in the target population.
Application of Statistically Derived CPAS Parachute Parameters
NASA Technical Reports Server (NTRS)
Romero, Leah M.; Ray, Eric S.
2013-01-01
The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available for determination of statistically based distributions. Uniform distributions do not accurately represent the expected distributions, since extreme parameter values are just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed statistical assessment of the steady-state drag area (CDS) and parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C(sub k)), and ramp-down time (t(sub k)) distributions. Built-in MATLAB distributions were fitted to the histograms, and parameters such as scale (sigma) and location (mu) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log-normal, and uniform (where available data remain insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes. This paper discusses the uniform methodology that was previously used, the process and results of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
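A rough Python analogue of the memo's fitting step (the CPAS team used built-in MATLAB distributions) would fit several candidate families and compare log-likelihoods; the data and the variable name `fill_distance` below are purely illustrative.

```python
# Hedged sketch: fit candidate distributions to a reconstructed-parameter
# sample and compare by log-likelihood; engineering judgment (per the paper)
# still decides the final "best fit".
import numpy as np
from scipy import stats

rng = np.random.default_rng(2013)
fill_distance = rng.lognormal(mean=1.2, sigma=0.25, size=40)   # stand-in data

fits = {
    "normal":    (stats.norm,    stats.norm.fit(fill_distance)),
    "lognormal": (stats.lognorm, stats.lognorm.fit(fill_distance, floc=0)),
    "uniform":   (stats.uniform, stats.uniform.fit(fill_distance)),
}
for name, (dist, params) in fits.items():
    loglik = dist.logpdf(fill_distance, *params).sum()
    print(f"{name:10s} log-likelihood = {loglik:8.2f}  params = {params}")
```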
Climate science: Breaks in trends
NASA Astrophysics Data System (ADS)
Pretis, Felix; Allen, Myles
2013-12-01
Global temperature rise since industrialization has not been uniform. A statistical analysis suggests that past changes in the rate of warming can be directly attributed to human influences, from economic downturns to the regulations of the Montreal Protocol.
Heartwood and sapwood in eucalyptus trees: non-conventional approach to wood quality.
Cherelli, Sabrina G; Sartori, Maria Márcia P; Próspero, André G; Ballarin, Adriano W
2018-01-01
This study evaluated the quality of heartwood and sapwood from mature trees of three species of Eucalyptus by quantifying their proportions, determining basic and apparent density with a non-destructive gamma-radiation attenuation technique, and calculating a density uniformity index. Six trees of each species (Eucalyptus grandis, 18 years old; Eucalyptus tereticornis, 35 years old; and Corymbia citriodora, 28 years old) were used in the experimental program. Heartwood and sapwood were delimited by macroscopic analysis, and their areas and percentages were computed from digital images. The uniformity index was calculated following a methodology which numerically quantifies the dispersion of punctual density values around the mean density along the radius. The percentage of heartwood was higher than that of sapwood in all species studied. The density results showed no statistical difference between heartwood and sapwood. In contrast, in all species studied there were statistical differences between the uniformity indexes of the heartwood and sapwood regions, justifying the inclusion of the density uniformity index as a quality parameter for Eucalyptus wood.
The vulnerability of electric equipment to carbon fibers of mixed lengths: An analysis
NASA Technical Reports Server (NTRS)
Elber, W.
1980-01-01
The susceptibility of a stereo amplifier to damage from a spectrum of lengths of graphite fibers was calculated. A simple analysis was developed by which such calculations can be based on test results with fibers of uniform lengths. A statistical analysis was applied for the conversion of data for various logical failure criteria.
A statistical analysis of the effects of a uniform minimum drinking age
DOT National Transportation Integrated Search
1987-04-01
This report examines the relationship between minimum drinking age (MDA) and highway fatalities during the 1975-1985 period, when 35 states changed their MDAs. An econometric model of fatalities involving the 18-20 year-old driver normalized by...
DOT National Transportation Integrated Search
1985-09-01
This report examines the groove wear variability among tires subjected to the Uniform Tire Quality Grading (UTQG) test procedure for determining tire tread wear. The effects of heteroscedasticity (variable variance) on a previously reported sta...
Clinical Evaluation of Dental Restorative Materials
1989-01-01
use of an Actuarial Life Table Survival Analysis procedure. The median survival time for anterior composites was 13.5 years, as compared to 12.1 years... dental materials. For the first time in clinical biomaterials research, we used a statistical approach of Survival Analysis which utilized the... analysis has been established to assure uniformity in usage. This scale is now in use by clinical investigators throughout the country. Its use at the...
Static Scene Statistical Non-Uniformity Correction
2015-03-01
Acronyms defined in the report include NUC (Non-Uniformity Correction), RMSE (Root Mean Squared Error), RSD (Relative Standard Deviation), and S3NUC (Static Scene Statistical Non-Uniformity Correction). The Relative Standard Deviation normalizes the standard deviation, σ, to the mean estimated value, µ, using the equation RSD = (σ/µ) × 100. The RSD plot of the gain estimates is shown in Figure 4.1(b); it shows that after a sample size of approximately 10, the different photocount values and the inclusion...
Pabon, Peter; Ternström, Sten; Lamarche, Anick
2011-06-01
To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. A morphologic modeling technique, based on Fourier descriptors (FDs), is applied to the VRP contour. The technique, which essentially involves resampling of the curve of the contour, is assessed and also compared to density-based VRP averaging methods that use the overlap count. VRP contours can be usefully described and compared using FDs. The method also permits the visualization of the local covariation along the contour average. For example, the FD-based analysis shows that the population variance for ensembles of VRP contours is usually smallest at the upper left part of the VRP. To illustrate the method's advantages and possible further application, graphs are given that compare the averaged contours from different authors and recording devices, for normal, trained, and untrained male and female voices as well as for child voices. The proposed technique allows any VRP shape to be brought to the same uniform base. On this uniform base, VRP contours or contour elements coming from a variety of sources may be placed within the same graph for comparison and for statistical analysis.
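The core of the FD technique, resampling a closed contour and keeping its low-order Fourier coefficients, can be sketched briefly; the VRP-specific normalization and registration used by the authors are not reproduced here.

```python
# Hedged sketch of contour description with Fourier descriptors (FDs):
# resample a closed contour uniformly, treat the points as complex numbers,
# take the FFT, and keep the low-order coefficients.
import numpy as np

def fourier_descriptors(x, y, n_points=128, n_keep=10):
    """Return the first n_keep FDs of a closed contour given by x, y."""
    t = np.linspace(0, 1, len(x), endpoint=False)
    ti = np.linspace(0, 1, n_points, endpoint=False)
    z = np.interp(ti, t, x) + 1j * np.interp(ti, t, y)  # uniform resampling
    Z = np.fft.fft(z) / n_points
    return Z[:n_keep]

theta = np.linspace(0, 2*np.pi, 100, endpoint=False)
x, y = 3*np.cos(theta), 2*np.sin(theta) + 0.3*np.sin(3*theta)  # toy contour
print(np.round(np.abs(fourier_descriptors(x, y)), 3))
```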
Low-dimensional approximation searching strategy for transfer entropy from non-uniform embedding
2018-01-01
Transfer entropy from non-uniform embedding is a popular tool for the inference of causal relationships among dynamical subsystems. In this study we present an approach that makes use of low-dimensional conditional mutual information quantities to decompose the original high-dimensional conditional mutual information in the searching procedure of non-uniform embedding for significant variables at different lags. We perform a series of simulation experiments to assess the sensitivity and specificity of our proposed method to demonstrate its advantage compared to previous algorithms. The results provide concrete evidence that low-dimensional approximations can help to improve the statistical accuracy of transfer entropy in multivariate causality analysis and yield a better performance over other methods. The proposed method is especially efficient as the data length grows. PMID:29547669
An Economic Analysis of the Demand for State and Local Government Employees.
ERIC Educational Resources Information Center
Ehrenberg, Ronald G.
This study presents estimates of the wage elasticities of demand for state and local government employees. Almost uniformly, each functional category of state and local government employee's employment level is shown to be statistically significantly negatively related to the category's real and relative wage level. However, the magnitude of these…
Some limit theorems for ratios of order statistics from uniform random variables.
Xu, Shou-Fang; Miao, Yu
2017-01-01
In this paper, we study the ratios of order statistics based on samples drawn from uniform distribution and establish some limit properties such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.
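As a quick numerical companion to these limit theorems, one can simulate ratios of consecutive uniform order statistics; the check below uses the known fact that U_(1)/U_(2) is itself Uniform(0,1).

```python
# Hedged sketch: simulate the ratio R = U_(1)/U_(2) of the two smallest
# order statistics from n i.i.d. Uniform(0,1) draws. This only illustrates
# the setup; the paper's limit theorems are analytical results.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 10_000
u_sorted = np.sort(rng.uniform(size=(reps, n)), axis=1)
ratios = u_sorted[:, 0] / u_sorted[:, 1]
# U_(1)/U_(2) is Uniform(0,1), so mean ~ 1/2 and variance ~ 1/12:
print(ratios.mean(), ratios.var())
```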
Chromosome aberration analysis in peripheral lymphocytes of Gulf War and Balkans War veterans.
Schröder, H; Heimers, A; Frentzel-Beyme, R; Schott, A; Hoffmann, W
2003-01-01
Chromosome aberrations and sister chromatid exchanges (SCEs) were determined in standard peripheral lymphocyte metaphase preparations of 13 British Gulf War veterans, two veterans of the recent war in the Balkans and one veteran of both wars. All 16 volunteers suspected exposure to depleted uranium (DU) while deployed at the two different theatres of war in 1990 and later. The Bremen laboratory control served as a reference in this study. Compared with this control there was a statistically significant increase in the frequency of dicentric chromosomes (dic) and centric ring chromosomes (cR) in the veterans' group, indicating a previous exposure to ionising radiation. The statistically significant overdispersion of dic and cR indicates non-uniform irradiation, as would be expected after non-uniform exposure and/or exposure to radiation with a high linear energy transfer (LET). The frequency of SCEs was decreased when compared with the laboratory control.
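The overdispersion finding rests on the standard Poisson dispersion test for aberration counts; a generic sketch (not the authors' exact protocol) follows, with toy counts mimicking a small highly exposed subpopulation.

```python
# Hedged sketch of the classical dispersion test: under uniform low-LET
# exposure, dicentric counts per cell are approximately Poisson, so the
# variance-to-mean ratio is ~1; (n-1)*s^2/mean ~ chi^2_(n-1) under the null,
# and overdispersion suggests non-uniform or high-LET exposure.
import numpy as np
from scipy import stats

def dispersion_test(counts):
    counts = np.asarray(counts)
    n, mean, var = len(counts), counts.mean(), counts.var(ddof=1)
    chi2 = (n - 1) * var / mean
    p = stats.chi2.sf(chi2, n - 1)   # one-sided test for overdispersion
    return var / mean, p

# Toy data: dicentrics in 500 cells, with a small highly exposed subgroup.
rng = np.random.default_rng(3)
counts = np.concatenate([rng.poisson(0.01, 480), rng.poisson(0.5, 20)])
print(dispersion_test(counts))
```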
Evaluation of illumination system uniformity for wide-field biomedical hyperspectral imaging
NASA Astrophysics Data System (ADS)
Sawyer, Travis W.; Luthman, A. Siri; Bohndiek, Sarah E.
2017-04-01
Hyperspectral imaging (HSI) systems collect both spatial (morphological) and spectral (chemical) information from a sample. HSI can provide sensitive analysis for biological and medical applications, for example, simultaneously measuring reflectance and fluorescence properties of a tissue, which together with structural information could improve early cancer detection and tumour characterisation. Illumination uniformity is a critical pre-condition for quantitative data extraction from an HSI system. Non-uniformity can cause glare, specular reflection and unwanted shading, which negatively impact statistical analysis procedures used to extract abundance of different chemical species. Here, we model and evaluate several illumination systems frequently used in wide-field biomedical imaging to test their potential for HSI. We use the software LightTools and FRED. The analysed systems include: a fibre ring light; a light emitting diode (LED) ring; and a diffuse scattering dome. Each system is characterised for spectral, spatial, and angular uniformity, as well as transfer efficiency. Furthermore, an approach to measure uniformity using the Kullback-Leibler divergence (KLD) is introduced. The KLD is generalisable to arbitrary illumination shapes, making it an attractive approach for characterising illumination distributions. Although the systems are quite comparable in their spatial and spectral uniformity, the most uniform angular distribution is achieved using a diffuse scattering dome, yielding a contrast of 0.503 and average deviation of 0.303 over a ±60° field of view with a 3.9% model error in the angular domain. Our results suggest that conventional illumination sources can be applied in HSI, but in the case of low light levels, bespoke illumination sources may offer improved performance.
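A minimal version of the KLD uniformity measure introduced here normalizes the sampled illumination map into a probability distribution and compares it with the ideal uniform distribution; binning and normalization details are assumptions.

```python
# Hedged sketch: Kullback-Leibler divergence of an illumination map from
# the ideal uniform distribution; 0 indicates perfectly uniform light.
import numpy as np

def kld_from_uniform(irradiance_map):
    p = np.asarray(irradiance_map, dtype=float).ravel()
    p = p / p.sum()
    q = np.full_like(p, 1.0 / p.size)   # ideal uniform illumination
    nz = p > 0                          # treat 0*log(0) as 0
    return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))

flat = np.ones((32, 32))
vignetted = np.outer(np.hanning(32) + 0.5, np.hanning(32) + 0.5)
print(kld_from_uniform(flat), kld_from_uniform(vignetted))  # 0.0 vs > 0
```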
A Science and Risk-Based Pragmatic Methodology for Blend and Content Uniformity Assessment.
Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Doshi, Chetan
2018-04-01
This paper describes a pragmatic approach that can be applied in assessing powder blend and unit dosage uniformity of solid dose products at Process Design, Process Performance Qualification, and Continued/Ongoing Process Verification stages of the Process Validation lifecycle. The statistically based sampling, testing, and assessment plan was developed due to the withdrawal of the FDA draft guidance for industry "Powder Blends and Finished Dosage Units-Stratified In-Process Dosage Unit Sampling and Assessment." This paper compares the proposed Grouped Area Variance Estimate (GAVE) method with an alternate approach outlining the practicality and statistical rationalization using traditional sampling and analytical methods. The approach is designed to fit solid dose processes assuring high statistical confidence in both powder blend uniformity and dosage unit uniformity during all three stages of the lifecycle complying with ASTM standards as recommended by the US FDA.
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdős-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in a common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α = 2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c = e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c = 1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for minimum hitting sets with α ≥ 3, i.e., minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c = e/(α-1), where the replica symmetry is broken.
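The LP relaxation under study is easy to reproduce at small scale: minimize the sum of vertex variables subject to x_u + x_v >= 1 on each edge, with 0 <= x <= 1, and compare against the exact integer optimum. The sketch below (graph size and seed are arbitrary) illustrates the setup, not the paper's cavity-method analysis.

```python
# Hedged sketch: LP relaxation of minimum vertex cover on a sparse
# Erdős-Rényi graph versus brute-force integer optimum.
import itertools
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n, avg_deg = 12, 1.5                      # small graph, average degree below e
edges = [(u, v) for u, v in itertools.combinations(range(n), 2)
         if rng.random() < avg_deg / (n - 1)]

# Constraints: -(x_u + x_v) <= -1 for every edge; bounds 0 <= x <= 1.
A = np.zeros((len(edges), n))
for k, (u, v) in enumerate(edges):
    A[k, u] = A[k, v] = -1
res = linprog(c=np.ones(n), A_ub=A, b_ub=-np.ones(len(edges)), bounds=(0, 1))
lp_value = res.fun

# Exact IP optimum by brute force (fine at n = 12).
ip_value = min(sum(s) for s in itertools.product([0, 1], repeat=n)
               if all(s[u] + s[v] >= 1 for u, v in edges))
print(lp_value, ip_value)   # typically equal in the sparse regime
```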
Self-organization of cosmic radiation pressure instability. II - One-dimensional simulations
NASA Technical Reports Server (NTRS)
Hogan, Craig J.; Woods, Jorden
1992-01-01
The clustering of statistically uniform discrete absorbing particles moving solely under the influence of radiation pressure from uniformly distributed emitters is studied in a simple one-dimensional model. Radiation pressure tends to amplify statistical clustering in the absorbers; the absorbing material is swept into empty bubbles, the biggest bubbles grow bigger almost as they would in a uniform medium, and the smaller ones get crushed and disappear. Numerical simulations of a one-dimensional system are used to support the conjecture that the system is self-organizing. Simple statistics indicate that a wide range of initial conditions produce structure approaching the same self-similar statistical distribution, whose scaling properties follow those of the attractor solution for an isolated bubble. The importance of the process for large-scale structuring of the interstellar medium is briefly discussed.
Spectral statistics of the uni-modular ensemble
NASA Astrophysics Data System (ADS)
Joyner, Christopher H.; Smilansky, Uzy; Weidenmüller, Hans A.
2017-09-01
We investigate the spectral statistics of Hermitian matrices in which the elements are chosen uniformly from U(1), called the uni-modular ensemble (UME), in the limit of large matrix size. Using three complementary methods (a supersymmetric integration method, a combinatorial graph-theoretical analysis and a Brownian motion approach), we are able to derive expressions for 1/N corrections to the mean spectral moments and also analyse the fluctuations about this mean. By addressing the same ensemble from three different points of view, we can critically compare their relative advantages and derive some new results.
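A quick numerical companion: draw Hermitian matrices with unimodular off-diagonal entries and inspect bulk spacing statistics. The treatment of the diagonal (random signs) is an assumed convention, and the unfolding is deliberately crude.

```python
# Hedged sketch of the UME: off-diagonal entries uniform on the unit circle,
# Hermitian symmetry enforced, eigenvalue spacings examined in the bulk.
import numpy as np

rng = np.random.default_rng(0)
N, reps = 200, 40
norm_spacings = []
for _ in range(reps):
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(N, N))
    H = np.triu(np.exp(1j * phases), 1)
    H = H + H.conj().T                              # Hermitian, |H_ij| = 1
    np.fill_diagonal(H, rng.choice([-1.0, 1.0], N))  # assumed: real +/-1 diag
    ev = np.linalg.eigvalsh(H)
    bulk = ev[N // 4: 3 * N // 4]                   # stay in the bulk
    s = np.diff(bulk)
    norm_spacings.append(s / s.mean())              # crude local unfolding
s = np.concatenate(norm_spacings)
print(f"mean spacing {s.mean():.3f}, variance {s.var():.3f}")
# GUE-like level repulsion would give a spacing variance near 0.18,
# versus 1.0 for uncorrelated (Poisson) levels.
```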
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa; Yasuno, Yoshiaki
2017-01-01
Jones matrix-based polarization sensitive optical coherence tomography (JM-OCT) simultaneously measures optical intensity, birefringence, degree of polarization uniformity, and OCT angiography. The statistics of the optical features in a local region, such as the local mean of the OCT intensity, are frequently used for image processing and the quantitative analysis of JM-OCT. Conventionally, local statistics have been computed with fixed-size rectangular kernels. However, this results in a trade-off between image sharpness and statistical accuracy. We introduce a superpixel method to JM-OCT for generating flexible kernels for local statistics. A superpixel is a cluster of image pixels that is formed by the pixels' spatial and signal value proximities. An algorithm for superpixel generation specialized for JM-OCT and its optimization methods are presented in this paper. The spatial proximity is in two-dimensional cross-sectional space and the signal values are the four optical features. Hence, the superpixel method is a six-dimensional clustering technique for JM-OCT pixels. The performance of the JM-OCT superpixels and the optimization methods is evaluated in detail using JM-OCT datasets of posterior eyes. The superpixels were found to preserve tissue structures well, such as layer structures, sclera, vessels, and retinal pigment epithelium, and hence they are more suitable as local statistics kernels than conventional uniform rectangular kernels. PMID:29082073
Rigor Mortis: Statistical thoroughness in reporting and the making of truth.
Tal, Aner
2016-02-01
Should a uniform checklist be adopted for methodological and statistical reporting? The current article discusses this notion, with particular attention to the use of old versus new statistics, and a consideration of the arguments brought up by Von Roten. The article argues that an overly exhaustive checklist that is uniformly applied to all submitted papers may be unsuitable for multidisciplinary work, and would further result in undue clutter and potentially distract reviewers from pertinent considerations in their evaluation of research articles. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Hozé, Nathanaël; Holcman, David
2012-01-01
We develop a coagulation-fragmentation model to study a system composed of a small number of stochastic objects moving in a confined domain, that can aggregate upon binding to form local clusters of arbitrary sizes. A cluster can also dissociate into two subclusters with a uniform probability. To study the statistics of clusters, we combine a Markov chain analysis with a partition number approach. Interestingly, we obtain explicit formulas for the size and the number of clusters in terms of hypergeometric functions. Finally, we apply our analysis to study the statistical physics of telomeres (ends of chromosomes) clustering in the yeast nucleus and show that the diffusion-coagulation-fragmentation process can predict the organization of telomeres.
Optimization of technological equipment used in the laser-radiation hardening of instruments
NASA Astrophysics Data System (ADS)
Tverdokhlebov, G. N.; Maznichenko, S. A.
Results of a statistical analysis of an instrument intended for laser hardening are presented. The kinematics of the positioning and fastening of an instrument for uniform laser-pulse treatment is analyzed. The results are used to devise an automatic device and the procedure for laser treatment under optimized conditions of various rotary cutting instruments, such as milling cutters, drills, and counterbores.
Detector Development for the MARE Neutrino Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galeazzi, M.; Bogorin, D.; Molina, R.
2009-12-16
The MARE experiment is designed to measure the mass of the neutrino with sub-eV sensitivity by measuring the beta decay of {sup 187}Re with cryogenic microcalorimeters. A preliminary analysis shows that, to achieve the necessary statistics, between 10,000 and 50,000 detectors are likely necessary. We have fabricated and characterized Iridium transition edge sensors with high reproducibility and uniformity for such a large scale experiment. We have also started a full scale simulation of the experimental setup for MARE, including thermalization in the absorber, detector response, and optimum filter analysis, to understand the issues related to reaching a sub-eV sensitivity and to optimize the design of the MARE experiment. We present our characterization of the Ir devices, including reproducibility, uniformity, and sensitivity, and we discuss the implementation and capabilities of our full scale simulation.
1994-06-30
"Crack Tip Opening Displacement (CTOD) Fracture Toughness Measurement". The method has found application in elastic-plastic fracture mechanics (EPFM)... Proposed Material Property Database Format and Hierarchy... Sample Application of the Material Property Database... the E 49.05 sub-committee. The relevant quality indicators applicable to the present program are: source of data, statistical basis of data...
NASA Astrophysics Data System (ADS)
Moore, Todd W.
2016-07-01
Tropical cyclones often produce tornadoes that have the potential to compound the injury and fatality counts and the economic losses associated with tropical cyclones. These tornadoes do not occur uniformly through time or across space. Multiple statistical methods were used in this study to analyze the association between tropical cyclone intensity change and tornado frequency. Results indicate that there is an association between the two and that tropical cyclones tend to produce more tornadoes when they are weakening, but the association is weak. Tropical cyclones can also produce a substantial number of tornadoes when they are relatively stable or strengthening.
Estimating the proportion of true null hypotheses when the statistics are discrete.
Dialsingh, Isaac; Austin, Stefanie R; Altman, Naomi S
2015-07-15
In high-dimensional testing problems, π0, the proportion of null hypotheses that are true, is an important parameter. For discrete test statistics, the P values come from a discrete distribution with finite support, and the null distribution may depend on an ancillary statistic, such as a table margin, that varies among the test statistics. Methods for estimating π0 developed for continuous test statistics, which depend on a uniform or identical null distribution of P values, may not perform well when applied to discrete testing problems. This article introduces a number of π0 estimators, the regression and 'T' methods, that perform well with discrete test statistics, and also assesses how well methods developed for or adapted from continuous tests perform with discrete tests. We demonstrate the usefulness of these estimators in the analysis of high-throughput biological RNA-seq and single-nucleotide polymorphism data. The methods are implemented in R. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
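For contrast, a Storey-type estimator, the kind of continuous-statistic method whose uniform-null assumption the paper argues breaks down for discrete P values, takes only a few lines.

```python
# Hedged sketch of Storey-type pi0 estimation:
# pi0_hat = #{p > lambda} / ((1 - lambda) * m), which relies on null P values
# being uniform; with discrete P values this assumption can fail.
import numpy as np

def storey_pi0(pvals, lam=0.5):
    pvals = np.asarray(pvals)
    return min(1.0, (pvals > lam).mean() / (1.0 - lam))

rng = np.random.default_rng(1)
m, pi0_true = 10_000, 0.8
null_p = rng.uniform(size=int(m * pi0_true))              # true nulls
alt_p = rng.beta(0.3, 5.0, size=m - int(m * pi0_true))    # non-nulls
print(storey_pi0(np.concatenate([null_p, alt_p])))         # ~ 0.8
```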
Impact of ontology evolution on functional analyses.
Groß, Anika; Hartung, Michael; Prüfer, Kay; Kelso, Janet; Rahm, Erhard
2012-10-15
Ontologies are used in the annotation and analysis of biological data. As knowledge accumulates, ontologies and annotation undergo constant modifications to reflect this new knowledge. These modifications may influence the results of statistical applications such as functional enrichment analyses that describe experimental data in terms of ontological groupings. Here, we investigate to what degree modifications of the Gene Ontology (GO) impact these statistical analyses for both experimental and simulated data. The analysis is based on new measures for the stability of result sets and considers different ontology and annotation changes. Our results show that past changes in the GO are non-uniformly distributed over different branches of the ontology. Considering the semantic relatedness of significant categories in analysis results allows a more realistic stability assessment for functional enrichment studies. We observe that the results of term-enrichment analyses tend to be surprisingly stable despite changes in ontology and annotation.
Xiao, Qingtai; Xu, Jianxin; Wang, Hua
2016-08-16
A new index, the estimate of the error variance, is proposed to quantify the evolution of flow patterns when multiphase components or tracers are difficult to distinguish. The homogeneity degree of the luminance space distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, the F-test was used to test whether the light was uniform, and a non-linear method was used to determine the direction and position of a fixed source light. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index has also been applied to a multiphase macro-mixing process driven by top blowing in a stirred tank. Moreover, a general quantifying model was introduced to describe the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to other mixing processes in which the target is very difficult to recognize.
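The uniform-light F-test described here amounts to regressing pixel luminance on image coordinates and testing the planar terms jointly; a minimal sketch follows, using a first-degree trend surface, which may be simpler than the authors' exact model.

```python
# Hedged sketch: F-test for illumination uniformity. Regress luminance on
# pixel coordinates; under uniform light the intercept-only model fits as
# well as the plane, so the F statistic is small.
import numpy as np
from scipy import stats

def uniformity_ftest(img):
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    X = np.column_stack([np.ones(img.size), xx.ravel(), yy.ravel()])
    y = img.ravel().astype(float)
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss_full = float(rss[0])
    rss_null = float(((y - y.mean())**2).sum())
    df1, df2 = 2, img.size - 3
    F = ((rss_null - rss_full) / df1) / (rss_full / df2)
    return F, stats.f.sf(F, df1, df2)

rng = np.random.default_rng(0)
flat = 100 + rng.normal(0, 1, (64, 64))
tilted = flat + 0.05 * np.arange(64)[None, :]   # weak left-to-right gradient
print(uniformity_ftest(flat), uniformity_ftest(tilted))
```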
A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components
NASA Technical Reports Server (NTRS)
Abernethy, K.
1986-01-01
The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo computer simulation study to evaluate the usefulness of the Weibull methods, using samples with a very small number of failures and extensive censoring, are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs that are used in the SSME Weibull analysis are described. The documented methods were supplemented by adding computer calculations of approximate (using iterative methods) confidence intervals for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, as are the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
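The simulation setup, Weibull failure times censored by uniformly distributed censoring times with maximum-likelihood fitting, can be reconstructed compactly; all numeric values below are illustrative, not SSME data.

```python
# Hedged sketch: Weibull MLE under random (uniform) right-censoring.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
shape_true, scale_true, n = 1.8, 100.0, 30
t_fail = scale_true * rng.weibull(shape_true, n)
t_cens = rng.uniform(0, 150, n)              # random censoring model
t = np.minimum(t_fail, t_cens)
observed = t_fail <= t_cens                   # False = right-censored

def neg_loglik(params):
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    z = t / lam
    logpdf = np.log(k / lam) + (k - 1) * np.log(z) - z**k
    logsf = -z**k                             # log survival function
    return -(logpdf[observed].sum() + logsf[~observed].sum())

res = minimize(neg_loglik, x0=[1.0, np.median(t)], method="Nelder-Mead")
print(res.x)   # (shape, scale); few failures -> wide confidence intervals
```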
Influence of Deployment on the Use of E-Cigarettes in the United States Army and Air Force
2018-03-22
the "Tobacco Use Among Service Members" survey sponsored by the Murtha Cancer Center and the Postgraduate Dental School of the Uniformed Services...the study period, and were willing to complete the survey . The survey was voluntary and anonymous; no personally identifiable information was...collected about participants. Statistical analysis of the data obtained from this survey database was performed using SAS. The independent variables were
Spatial statistical analysis of tree deaths using airborne digital imagery
NASA Astrophysics Data System (ADS)
Chang, Ya-Mei; Baddeley, Adrian; Wallace, Jeremy; Canci, Michael
2013-04-01
High resolution digital airborne imagery offers unprecedented opportunities for observation and monitoring of vegetation, providing the potential to identify, locate and track individual vegetation objects over time. Analytical tools are required to quantify relevant information. In this paper, locations of trees over a large area of native woodland vegetation were identified using morphological image analysis techniques. Methods of spatial point process statistics were then applied to estimate the spatially-varying tree death risk, and to show that it is significantly non-uniform. [Tree deaths over the area were detected in our previous work (Wallace et al., 2008).] The study area is a major source of ground water for the city of Perth, and the work was motivated by the need to understand and quantify vegetation changes in the context of water extraction and drying climate. The influence of hydrological variables on tree death risk was investigated using spatial statistics (graphical exploratory methods, spatial point pattern modelling and diagnostics).
NASA Astrophysics Data System (ADS)
Gong, Maozhen
Selecting an appropriate prior distribution is a fundamental issue in Bayesian Statistics. In this dissertation, under the framework provided by Berger and Bernardo, I derive the reference priors for several models, which include: Analysis of Variance (ANOVA)/Analysis of Covariance (ANCOVA) models with a categorical variable under common ordering constraints, and the conditionally autoregressive (CAR) and simultaneous autoregressive (SAR) models in which a spatial autoregression parameter rho is considered. The performances of reference priors for ANOVA/ANCOVA models are evaluated by simulation studies with comparisons to Jeffreys' prior and Least Squares Estimation (LSE). The priors are then illustrated in a Bayesian model of the "Risk of Type 2 Diabetes in New Mexico" data, where the relationship between the type 2 diabetes risk (through Hemoglobin A1c) and different smoking levels is investigated. In both simulation studies and real data set modeling, the reference priors that incorporate internal order information show good performances and can be used as default priors. The reference priors for the CAR and SAR models are also illustrated in the "1999 SAT State Average Verbal Scores" data with a comparison to a Uniform prior distribution. Due to the complexity of the reference priors for both CAR and SAR models, only a portion (12 states in the Midwest) of the original data set is considered. The reference priors can give a different marginal posterior distribution compared to a Uniform prior, which provides an alternative for prior specifications for areal data in Spatial statistics.
Statistical distributions of avalanche size and waiting times in an inter-sandpile cascade model
NASA Astrophysics Data System (ADS)
Batac, Rene; Longjas, Anthony; Monterola, Christopher
2012-02-01
Sandpile-based models have successfully shed light on key features of nonlinear relaxational processes in nature, particularly the occurrence of fat-tailed magnitude distributions and exponential return times, from simple local stress redistributions. In this work, we extend the existing sandpile paradigm into an inter-sandpile cascade, wherein the avalanches emanating from a uniformly-driven sandpile (first layer) are used to trigger the next (second layer), and so on, in a successive fashion. Statistical characterizations reveal that avalanche size distributions evolve from a power law p(S) ∝ S^(-1.3) for the first layer to gamma distributions p(S) ∝ S^α exp(-S/S_0) for layers far away from the uniformly driven sandpile. The resulting avalanche size statistics is found to be associated with the corresponding waiting time distribution, as explained in an accompanying analytic formulation. Interestingly, both the numerical and analytic models show good agreement with actual inventories of non-uniformly driven events in nature.
Luster measurements of lips treated with lipstick formulations.
Yadav, Santosh; Issa, Nevine; Streuli, David; McMullen, Roger; Fares, Hani
2011-01-01
In this study, digital photography in combination with image analysis was used to measure the luster of several lipstick formulations containing varying amounts and types of polymers. A weighed amount of lipstick was applied to a mannequin's lips, and the mannequin was illuminated by a uniform beam of a white light source. Digital images of the mannequin were captured with a high-resolution camera, and the images were analyzed using image analysis software. Luster analysis was performed using the Stamm (L(Stamm)) and Reich-Robbins (L(R-R)) luster parameters. Statistical analysis was performed on each luster parameter (L(Stamm) and L(R-R)), peak height, and peak width. Peak heights for the lipstick formulations containing 11% and 5% VP/eicosene copolymer were statistically different from those of the control. The L(Stamm) and L(R-R) parameters for the treatment containing 11% VP/eicosene copolymer were statistically different from those of the control. Based on the results obtained in this study, we are able to determine whether a polymer is a good pigment dispersant and contributes to the visually detected shine of a lipstick upon application. The methodology presented in this paper could serve as a tool for investigators to screen their ingredients for shine in lipstick formulations.
NASA Astrophysics Data System (ADS)
Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae
2012-09-01
This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 × 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver (p < 0.05). Furthermore, the F-value, which was used as a scale for the difference in recognition rates, was highest in the average gray level, relatively high in the skewness and the entropy, and relatively low in the uniformity, the relative smoothness and the average contrast. The recognition rate for a fatty liver had the same scale as that for the F-value, showing 100% (average gray level) at the maximum and 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for automatic detection and computer-aided diagnosis (CAD) using the texture feature value. Nevertheless, further study on various diseases and singular diseases will be needed in the future.
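The six descriptors listed are standard first-order (histogram) texture statistics in the style of Gonzalez and Woods; a sketch of their computation for one 50 × 50 region of analysis follows, with the paper's wavelet preprocessing omitted and the normalizations assumed.

```python
# Hedged sketch: first-order texture descriptors from a gray-level histogram.
import numpy as np

def texture_features(roa, levels=256):
    hist, _ = np.histogram(roa, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    z = np.arange(levels, dtype=float)
    mean = (z * p).sum()                                 # average gray level
    var = ((z - mean)**2 * p).sum()
    contrast = np.sqrt(var)                              # average contrast
    smoothness = 1 - 1 / (1 + var / (levels - 1)**2)     # relative smoothness
    skewness = ((z - mean)**3 * p).sum() / (levels - 1)**2  # assumed scaling
    uniformity = (p**2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return mean, contrast, smoothness, skewness, uniformity, entropy

rng = np.random.default_rng(0)
roa = rng.normal(120, 15, (50, 50)).clip(0, 255).astype(int)  # 50x50 window
print(np.round(texture_features(roa), 4))
```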
Katz, Brian G.; Krulikas, Richard K.
1979-01-01
Water samples from wells in Nassau and Suffolk Counties were analyzed for chloride and nitrate. Two samples were collected at each well; one was analyzed by the U.S. Geological Survey, the other by a laboratory in the county from which the sample was taken. Results were compared statistically by paired-sample t-test to indicate the degree of uniformity among laboratory results. Chloride analyses from one of the three county laboratories differed significantly (0.95 confidence level) from that of a Geological Survey laboratory. For nitrate analyses, a significant difference (0.95 confidence level) was noted between results from two of the three county laboratories and the Geological Survey laboratory. The lack of uniformity among results reported by the participating laboratories indicates a need for continuing participation in a quality-assurance program and exercise of strong quality control from time of sample collection through analysis so that differences can be evaluated. (Kosco-USGS)
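The comparison method is a paired-sample t-test on split samples analyzed by two laboratories; a minimal sketch with made-up chloride concentrations (mg/L) illustrates it.

```python
# Hedged sketch: paired-sample t-test between two laboratories' results
# for the same split water samples. Values are invented for illustration.
import numpy as np
from scipy import stats

county_lab = np.array([24.1, 30.5, 18.2, 45.0, 27.3, 33.8, 21.6, 39.4])
usgs_lab   = np.array([23.0, 29.8, 18.9, 43.1, 26.0, 33.1, 20.2, 38.0])

t, p = stats.ttest_rel(county_lab, usgs_lab)
print(f"t = {t:.2f}, p = {p:.3f}")   # p < 0.05 -> significant lab difference
```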
14 CFR 25 - Traffic and Capacity Elements
Code of Federal Regulations, 2012 CFR
2012-01-01
... Economic Regulations; Uniform System of Accounts and Reports for Large Certificated Air Carriers ... set forth in Section 19, Uniform Classification of Operating Statistics. (b) Carriers submitting Schedule T-100 shall use magnetic computer tape or IBM-compatible disk for transmitting the prescribed data to...
14 CFR Section 25 - Traffic and Capacity Elements
Code of Federal Regulations, 2013 CFR
2013-01-01
... Economic Regulations; Uniform System of Accounts and Reports for Large Certificated Air Carriers ... set forth in Section 19, Uniform Classification of Operating Statistics. (b) Carriers submitting Schedule T-100 shall use magnetic computer tape or IBM-compatible disk for transmitting the prescribed data to...
A statistical model for radar images of agricultural scenes
NASA Technical Reports Server (NTRS)
Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.
1982-01-01
The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.
PASSALI, D.; CARUSO, G.; ARIGLIANO, L.C.; PASSALI, F.M.; BELLUSSI, L.
2012-01-01
Obstructive sleep apnoea syndrome (OSAS) results from upper airway collapse during sleep. It represents an increasingly recognized pathology associated with many diseases. Herein, we describe a database for patients with OSAS. This has different goals: to facilitate good uniformity in clinical assessment, to allow the use of the application even by non-ENT specialists, to evaluate the results of medical and/or surgical treatments and to enable a statistical meta-analysis derived from the data collected in many OSAS medical centres. PMID:23093815
Geography of end-Cretaceous marine bivalve extinctions
NASA Technical Reports Server (NTRS)
Raup, David M.; Jablonski, David
1993-01-01
Analysis of the end-Cretaceous mass extinction, based on 3514 occurrences of 340 genera of marine bivalves (Mollusca), suggests that extinction intensities were uniformly global; no latitudinal gradients or other geographic patterns are detected. Elevated extinction intensities in some tropical areas are entirely a result of the distribution of one extinct group of highly specialized bivalves, the rudists. When rudists are omitted, intensities at those localities are statistically indistinguishable from those of both the rudist-free tropics and extratropical localities.
A Test Strategy for High Resolution Image Scanners.
1983-10-01
for multivariate analysis. Holt, Rinehart and Winston, Inc., New York. Graybill, F.A., 1961: An Introduction to Linear Statistical Models, Volume I. [Equations (7) and (8), garbled in extraction, set up the linear estimation model for the polynomial coefficients.] High-resolution image scanner test criteria: MTF; geometrical and radiometric performance; dynamic range, linearity, noise; dynamic scanning errors; response uniformity; skewness of...
NASA Technical Reports Server (NTRS)
Litchford, Ron J.; Jeng, San-Mou
1992-01-01
The performance of a recently introduced statistical transport model for turbulent particle dispersion is studied here for rigid particles injected into a round turbulent jet. Both uniform and isosceles triangle pdfs are used. The statistical sensitivity to parcel pdf shape is demonstrated.
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
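The one-parameter exponential family construction described above can be made concrete for the familiar one-sided test of a normal mean with known variance. The sketch below is illustrative only and assumes that setting: for a Bayes factor threshold gamma, it finds the alternative that maximizes the probability of the Bayes factor exceeding gamma, and reports the classical p-value at the matching rejection boundary, which is the calibration the article refers to.

```python
import numpy as np
from scipy import stats

def umpbt_normal_mean(n, sigma, gamma, mu0=0.0):
    """UMPBT sketch for a one-sided normal-mean test with known sigma.

    BF > gamma exactly when the sample mean exceeds a threshold that
    depends on the alternative mu1; the mu1 minimizing that threshold
    maximizes P(BF > gamma) for every true mean, hence 'uniformly
    most powerful' in the Bayesian sense."""
    mu1 = mu0 + np.sqrt(2 * sigma**2 * np.log(gamma) / n)
    xbar_thr = mu0 + (sigma**2 * np.log(gamma) / (n * (mu1 - mu0))
                      + (mu1 - mu0) / 2)
    # classical z-test p-value at the same rejection boundary
    p_boundary = stats.norm.sf((xbar_thr - mu0) / (sigma / np.sqrt(n)))
    return mu1, xbar_thr, p_boundary

mu1, thr, p = umpbt_normal_mean(n=30, sigma=1.0, gamma=10.0)
print(f"UMPBT alternative: {mu1:.3f}; x-bar boundary: {thr:.3f}; p at boundary: {p:.4f}")
```

In this setting the minimizing boundary coincides with the alternative itself, which illustrates how tightly the p-value scale and the Bayes factor threshold are linked under the calibration.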
Li, Huanjie; Nickerson, Lisa D; Nichols, Thomas E; Gao, Jia-Hong
2017-03-01
Two powerful methods for statistical inference on MRI brain images have been proposed recently: a non-stationary voxelation-corrected cluster-size test (CST) based on random field theory, and threshold-free cluster enhancement (TFCE), which calculates the level of local support for a cluster and then uses permutation testing for inference. Unlike other statistical approaches, these two methods do not rest on the assumption of a uniform and high degree of spatial smoothness in the statistic image, and they are therefore strongly recommended over other statistical methods for group-level fMRI analysis. In this work, the non-stationary voxelation-corrected CST and TFCE methods for group-level analysis were evaluated for both stationary and non-stationary images under varying smoothness levels, degrees of freedom and signal to noise ratios. Our results suggest that both methods provide adequate control for the number of voxel-wise statistical tests being performed during inference on fMRI data and that both are superior to the current CSTs implemented in popular MRI data analysis software packages. However, TFCE is more sensitive and stable for group-level analysis of VBM data. Thus, the voxelation-corrected CST approach may confer some advantages by being computationally less demanding for fMRI data analysis than TFCE with permutation testing and by also being applicable to single-subject fMRI analyses, while the TFCE approach is advantageous for VBM data. Hum Brain Mapp 38:1269-1280, 2017. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Gillen, Rebecca; Firbank, Michael J.; Lloyd, Jim; O'Brien, John T.
2015-09-01
This study investigated whether the appearance and diagnostic accuracy of HMPAO brain perfusion SPECT images could be improved by using CT-based attenuation and scatter correction compared with the uniform attenuation correction method. A cohort of subjects clinically categorized as Alzheimer's Disease (n=38), Dementia with Lewy Bodies (n=29) or healthy normal controls (n=30) underwent SPECT imaging with Tc-99m HMPAO and a separate CT scan. The SPECT images were processed using: (a) a correction map derived from the subject's CT scan; (b) the Chang uniform approximation for correction; or (c) no attenuation correction. Images were visually inspected. The ratios between key regions of interest known to be affected or spared in each condition were calculated for each correction method, and the differences between these ratios were evaluated. The images produced using the different corrections were noted to be visually different. However, ROI analysis found similar statistically significant differences between control and dementia groups and between AD and DLB groups regardless of the correction map used. We did not identify an improvement in diagnostic accuracy in images corrected using CT-based attenuation and scatter correction, compared with those corrected using a uniform correction map.
Acceptance Probability (P a) Analysis for Process Validation Lifecycle Stages.
Alsmeyer, Daniel; Pazhayattil, Ajay; Chen, Shu; Munaretto, Francesco; Hye, Maksuda; Sanghvi, Pradeep
2016-04-01
This paper introduces an innovative statistical approach towards understanding how variation impacts the acceptance criteria of quality attributes. Because of more complex stage-wise acceptance criteria, traditional process capability measures are inadequate for general application in the pharmaceutical industry. The probability of acceptance concept provides a clear measure, derived from specific acceptance criteria for each quality attribute. In line with the 2011 FDA Guidance, this approach systematically evaluates data and scientifically establishes evidence that a process is capable of consistently delivering quality product. The probability of acceptance provides a direct and readily understandable indication of product risk. As with traditional capability indices, the acceptance probability approach assumes that underlying data distributions are normal. The computational solutions for dosage uniformity and dissolution acceptance criteria are readily applicable. For dosage uniformity, the expected AV range may be determined using the s_lo and s_hi values along with the worst-case estimates of the mean. This approach permits a risk-based assessment of future batch performance of the critical quality attributes. The concept is also readily applicable to sterile/non-sterile liquid dose products. Quality attributes such as deliverable volume and assay per spray have stage-wise acceptance criteria that can be converted into an acceptance probability. Accepted statistical guidelines indicate processes with Cpk > 1.33 as performing well within statistical control and those with Cpk < 1.0 as "incapable" (1). A Cpk > 1.33 is associated with a centered process that will statistically produce fewer than 63 defective units per million. This is equivalent to an acceptance probability of >99.99%.
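The acceptance probability idea can be illustrated with a Monte Carlo sketch for a single-stage dosage uniformity criterion of the AV = |M - mean| + k*s form. The stage-one constants used below (n = 10, k = 2.4, limit 15, reference band 98.5-101.5% of label claim) are the commonly cited compendial values, assumed here for illustration; the paper's s_lo/s_hi bounds, worst-case mean handling, and second stage are omitted.

```python
import numpy as np

def p_accept(mu, sigma, n=10, k=2.4, limit=15.0, trials=100_000):
    """Monte Carlo acceptance probability, assuming normally
    distributed content (as the paper assumes for capability indices).
    AV = |M - xbar| + k*s, with the reference value M clipped to the
    98.5-101.5 band; a simulated batch passes when AV <= limit."""
    rng = np.random.default_rng(0)
    x = rng.normal(mu, sigma, size=(trials, n))
    xbar = x.mean(axis=1)
    s = x.std(axis=1, ddof=1)
    av = np.abs(np.clip(xbar, 98.5, 101.5) - xbar) + k * s
    return (av <= limit).mean()

for mu, sigma in [(100.0, 3.0), (100.0, 5.0), (97.0, 4.0)]:
    print(f"mean {mu}%, sd {sigma}%: Pa = {p_accept(mu, sigma):.3f}")
```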
Open Source Tools for Seismicity Analysis
NASA Astrophysics Data System (ADS)
Powers, P.
2010-12-01
The spatio-temporal analysis of seismicity plays an important role in earthquake forecasting and is integral to research on earthquake interactions and triggering. For instance, the third version of the Uniform California Earthquake Rupture Forecast (UCERF), currently under development, will use Epidemic Type Aftershock Sequences (ETAS) as a model for earthquake triggering. UCERF will be a "living" model and therefore requires robust, tested, and well-documented ETAS algorithms to ensure transparency and reproducibility. Likewise, as earthquake aftershock sequences unfold, real-time access to high quality hypocenter data makes it possible to monitor the temporal variability of statistical properties such as the parameters of the Omori Law and the Gutenberg Richter b-value. Such statistical properties are valuable as they provide a measure of how much a particular sequence deviates from expected behavior and can be used when assigning probabilities of aftershock occurrence. To address these demands and provide public access to standard methods employed in statistical seismology, we present well-documented, open-source JavaScript and Java software libraries for the on- and off-line analysis of seismicity. The Javascript classes facilitate web-based asynchronous access to earthquake catalog data and provide a framework for in-browser display, analysis, and manipulation of catalog statistics; implementations of this framework will be made available on the USGS Earthquake Hazards website. The Java classes, in addition to providing tools for seismicity analysis, provide tools for modeling seismicity and generating synthetic catalogs. These tools are extensible and will be released as part of the open-source OpenSHA Commons library.
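Of the catalog statistics mentioned, the Gutenberg-Richter b-value has a particularly compact estimator, the Aki/Utsu maximum-likelihood form with a bin-width correction. The sketch below is a generic illustration in Python rather than code from the JavaScript/Java libraries described.

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes at or above
    the completeness magnitude m_c, with a dm/2 correction for
    magnitudes binned to width dm."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# synthetic Gutenberg-Richter catalog with b = 1.0: continuous
# magnitudes complete above 1.95, binned to 0.1 units so the binned
# catalog is complete at m_c = 2.0
rng = np.random.default_rng(42)
mags = np.round(1.95 + rng.exponential(np.log10(np.e) / 1.0, size=5000), 1)
print(f"estimated b-value: {b_value_mle(mags, m_c=2.0):.2f}")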
Stupák, Ivan; Pavloková, Sylvie; Vysloužil, Jakub; Dohnal, Jiří; Čulen, Martin
2017-11-23
Biorelevant dissolution instruments represent an important tool for pharmaceutical research and development. These instruments are designed to simulate the dissolution of drug formulations in conditions most closely mimicking the gastrointestinal tract. In this work, we focused on the optimization of dissolution compartments/vessels for an updated version of the biorelevant dissolution apparatus-Golem v2. We designed eight compartments of uniform size but different inner geometry. The dissolution performance of the compartments was tested using immediate release caffeine tablets and evaluated by standard statistical methods and principal component analysis. Based on two phases of dissolution testing (using 250 and 100 mL of dissolution medium), we selected two compartment types yielding the highest measurement reproducibility. We also confirmed a statistically significant effect of agitation rate and dissolution volume on the extent of drug dissolved and on measurement reproducibility.
Stochastic Growth of Ion Cyclotron And Mirror Waves In Earth's Magnetosheath
NASA Technical Reports Server (NTRS)
Cairns, Iver H.; Grubits, K. A.
2001-01-01
Electromagnetic ion cyclotron and mirror waves in Earth's magnetosheath are bursty, have widely variable fields, and are unexpectedly persistent, properties difficult to reconcile with uniform secular growth. Here it is shown for specific periods that stochastic growth theory (SGT) quantitatively accounts for the functional form of the wave statistics and qualitatively explains the wave properties. The wave statistics are inconsistent with uniform secular growth or self-organized criticality, but nonlinear processes sometimes play a role at high fields. The results show SGT's relevance near marginal stability and suggest that it is widely relevant to space and astrophysical plasmas.
The resolving power of in vitro genotoxicity assays for cigarette smoke particulate matter.
Scott, K; Saul, J; Crooks, I; Camacho, O M; Dillon, D; Meredith, C
2013-06-01
In vitro genotoxicity assays are often used to compare tobacco smoke particulate matter (PM) from different cigarettes. The quantitative aspect of the comparisons requires appropriate statistical methods and replication levels, to support the interpretation in terms of power and significance. This paper recommends a uniform statistical analysis for the Ames test, mouse lymphoma mammalian cell mutation assay (MLA) and the in vitro micronucleus test (IVMNT); involving a hierarchical decision process with respect to slope, fixed effect and single dose comparisons. With these methods, replication levels of 5 (Ames test TA98), 4 (Ames test TA100), 10 (Ames test TA1537), 6 (MLA) and 4 (IVMNT) resolved a 30% difference in PM genotoxicity. Copyright © 2013 Elsevier Ltd. All rights reserved.
Global Statistics of Bolides in the Terrestrial Atmosphere
NASA Astrophysics Data System (ADS)
Chernogor, L. F.; Shevelyov, M. B.
2017-06-01
Purpose: Evaluation and analysis of the distribution of the number of meteoroid (mini asteroid) falls as a function of glow energy, velocity, altitude of maximum glow, and geographic coordinates. Design/methodology/approach: The satellite database on the glow of 693 mini asteroids decelerated in the terrestrial atmosphere has been used to evaluate basic meteoroid statistics. Findings: A rapid decrease in the number of asteroids with increasing glow energy is confirmed. The average speed of the celestial bodies is about 17.9 km/s. The altitude of maximum glow most often lies between 30 and 40 km. The distribution of the number of meteoroids entering the terrestrial atmosphere in longitude and latitude (after removing the geometric component of the latitudinal dependence) is approximately uniform. Conclusions: Using a sufficiently large database of measurements, the meteoroid (mini asteroid) statistics have been evaluated.
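A uniformity claim like the one above can be checked with standard goodness-of-fit tests. The sketch below uses synthetic coordinates standing in for the 693 catalogued events: longitudes are tested directly with a chi-square statistic, while latitudes are first mapped through sin(latitude) so that the geometric (area) component of the latitudinal dependence is removed, as the abstract describes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# longitudes: under H0 the falls are uniform in longitude
lon = rng.uniform(-180.0, 180.0, size=693)
counts, _ = np.histogram(lon, bins=12, range=(-180, 180))
chi2, p = stats.chisquare(counts)            # uniform expected counts
print(f"longitude: chi-square = {chi2:.1f}, p = {p:.3f}")

# latitudes: surface area per latitude band scales with cos(latitude),
# so an area-uniform sky is uniform in sin(latitude), which we test
lat = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, size=693)))
ks = stats.kstest(np.sin(np.radians(lat)), stats.uniform(loc=-1, scale=2).cdf)
print(f"latitude:  KS p = {ks.pvalue:.3f}")
```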
NASA Astrophysics Data System (ADS)
Amirnasr, Elham
It is widely recognized that nonwoven basis weight non-uniformity affects various properties of nonwovens; however, few studies can be found on this topic. The development of uniformity definitions and measurement methods, and the study of their impact on web properties such as filtration behavior and air permeability, would be beneficial both in industrial applications and in academia: they could serve as quality control tools and provide insights about nonwoven behaviors that cannot be explained by average values alone. Therefore, to quantify nonwoven web basis weight uniformity, we pursued the development of an optical analytical tool. The quadrant method and clustering analysis were utilized in an image analysis scheme to help define "uniformity" and its spatial variation. Implementing the quadrant method in an image analysis system allows the establishment of a uniformity index that can be used to quantify the degree of uniformity. Clustering analysis was also modified and verified using uniform and random simulated images with known parameters, and the number of clusters and cluster properties such as size, membership and density were determined. We then used this new measurement method to evaluate the uniformity of nonwovens produced by different processes and investigated the impact of uniformity on filtration and permeability. The results show that the uniformity index computed from the quadrant method spans a useful range for the non-uniformity of nonwoven webs. Clustering analysis was also applied to reference nonwovens of known visual uniformity; of the cluster properties, cluster size is the most promising uniformity parameter, as non-uniform nonwovens yield larger cluster sizes than uniform ones. To relate web properties to the uniformity index as a web characteristic, the filtration properties, air permeability, solidity and uniformity index of meltblown and spunbond samples were measured. The filtration tests show some deviation between theoretical and experimental filtration efficiency when different definitions of fiber diameter are considered; this deviation can arise from variation in basis weight non-uniformity, so an appropriate theory is required to predict the variation of filtration efficiency with respect to the non-uniformity of nonwoven filter media. The air permeability tests showed a relationship between the measured properties and the uniformity index determined by the quadrant method: in other words, air permeability decreases as the uniformity index of the nonwoven web increases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merrill, D.W.; Selvin, S.; Close, E.R.
In studying geographic disease distributions, one normally compares rates of arbitrarily defined geographic subareas (e.g. census tracts), thereby sacrificing the geographic detail of the original data. The sparser the data, the larger the subareas must be in order to calculate stable rates. This dilemma is avoided with the technique of Density Equalizing Map Projections (DEMP). Boundaries of geographic subregions are adjusted to equalize population density over the entire study area. Case locations plotted on the transformed map should have a uniform distribution if the underlying disease rates are constant. On the transformed map, the statistical analysis of the observed distribution is greatly simplified. Even for sparse distributions, the statistical significance of a supposed disease cluster can be reliably calculated. The present report describes the first successful application of the DEMP technique to a sizeable "real-world" data set of epidemiologic interest. An improved DEMP algorithm [GUSE93, CLOS94] was applied to a data set previously analyzed with conventional techniques [SATA90, REYN91]. The results from the DEMP analysis and a conventional analysis are compared.
The Search for Solar Gravity-Mode Oscillations: an Analysis Using ULYSSES Magnetic Field Data
NASA Astrophysics Data System (ADS)
Denison, David G. T.; Walden, Andrew T.
1999-04-01
In 1995 Thomson, Maclennon, and Lanzerotti (TML) reported on work where they carried out a time-series analysis of energetic particle fluxes measured by Ulysses and Voyager 2 and concluded that solar g-mode oscillations had been detected. The approach is based on finding significant peaks in spectra using a statistical F-test. Using three sets of 2048 hourly averages of Ulysses magnetic field magnitude data, and the same multitaper spectral estimation techniques, we obtain, on average, nine coincidences with the lines listed in the TML paper. We could not reject the hypothesis that the F-test peaks we obtained are uniformly distributed, and further statistical computations show that a sequence of uniformly distributed lines generated on the frequency grid would have, on average, nine coincidences with the lines of TML. Further, we find that a time series generated from a model with a smooth spectrum of the same form as derived from the Ulysses magnetic field magnitude data and having no true spectral lines above 2 μHz, when subjected to the multitaper F-tests, gives rise to essentially the same number of ``identified'' lines and coincident frequencies as found with our Ulysses data. We conclude that our average nine coincidences with the lines found by TML can arise by mechanisms wholly unconnected with the existence of real physical spectral lines and hence find no firm evidence that g-modes can be detected in our sample of magnetic field data.
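The coincidence argument in this abstract is easy to reproduce in outline. The sketch below generates line lists uniformly at random on the frequency grid implied by 2048 hourly samples and counts how often they fall within one grid spacing of a fixed reference list; the list sizes and the reference frequencies themselves are hypothetical stand-ins, not the TML values.

```python
import numpy as np

rng = np.random.default_rng(1)

# frequency grid spacing for 2048 hourly averages: 1/T, in microhertz
df = 1e6 / (2048 * 3600.0)                   # ~0.136 uHz
grid = np.arange(2.0, 130.0, df)             # frequencies above 2 uHz

# hypothetical reference list standing in for the TML line frequencies
ref = np.sort(rng.choice(grid, size=60, replace=False))

def n_coincident(lines, ref, tol):
    """Number of lines lying within tol of any reference frequency."""
    return int(np.sum(np.min(np.abs(lines[:, None] - ref[None, :]), axis=1) <= tol))

# Monte Carlo: coincidences produced by purely random line lists
counts = [n_coincident(rng.choice(grid, size=40, replace=False), ref, tol=df)
          for _ in range(2000)]
print(f"chance coincidences: {np.mean(counts):.1f} +/- {np.std(counts):.1f}")
```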
Shedge, Sapana V; Zhou, Xiuwen; Wesolowski, Tomasz A
2014-09-01
Recent applications of the Frozen-Density Embedding Theory based continuum model of the solvent, used for calculating solvatochromic shifts in the UV/Vis range, are reviewed. In this model, the solvent is represented as a non-uniform continuum that takes into account both the statistical nature of the solvent and specific solute-solvent interactions. It therefore offers a computationally attractive alternative to methods in which the solvent is described at the atomistic level. The evaluation of the solvatochromic shift involves only two calculations of excitation energy, instead of the hundreds (at least) needed to account for inhomogeneous broadening. The present review provides a detailed graphical analysis of the key quantities of this model: the average charge density of the solvent (<ρB>) and the corresponding Frozen-Density Embedding Theory derived embedding potential, for coumarin 153.
NASA Astrophysics Data System (ADS)
Chang, Po-Han; Liu, Shang-Yi; Lan, Yu-Bing; Tsai, Yi-Chen; You, Xue-Qian; Li, Chia-Shuo; Huang, Kuo-You; Chou, Ang-Sheng; Cheng, Tsung-Chin; Wang, Juen-Kai; Wu, Chih-I.
2017-04-01
In this work, graphene-methylammonium lead iodide (MAPbI3) perovskite hybrid phototransistors fabricated by sequential vapor deposition are demonstrated. Ultrahigh responsivity of 1.73 × 10⁷ A W⁻¹ and detectivity of 2 × 10¹⁵ Jones are achieved, with extremely high effective quantum efficiencies of about 10⁸% in the visible range (450-700 nm). This excellent performance is attributed to the ultra-flat perovskite films grown by vapor deposition on the graphene sheets. The hybrid structure of graphene covered with uniform perovskite has high exciton separation ability under light exposure, and thus efficiently generates photocurrents. This paper presents photoluminescence (PL) images along with statistical analysis used to study the photo-induced exciton behavior. Both uniform and dramatic PL intensity quenching has been observed over entire measured regions, consistently demonstrating excellent exciton separation in the devices.
Chang, Po-Han; Liu, Shang-Yi; Lan, Yu-Bing; Tsai, Yi-Chen; You, Xue-Qian; Li, Chia-Shuo; Huang, Kuo-You; Chou, Ang-Sheng; Cheng, Tsung-Chin; Wang, Juen-Kai; Wu, Chih-I
2017-01-01
In this work, graphene-methylammonium lead iodide (MAPbI3) perovskite hybrid phototransistors fabricated by sequential vapor deposition are demonstrated. Ultrahigh responsivity of 1.73 × 10⁷ A W⁻¹ and detectivity of 2 × 10¹⁵ Jones are achieved, with extremely high effective quantum efficiencies of about 10⁸% in the visible range (450–700 nm). This excellent performance is attributed to the ultra-flat perovskite films grown by vapor deposition on the graphene sheets. The hybrid structure of graphene covered with uniform perovskite has high exciton separation ability under light exposure, and thus efficiently generates photocurrents. This paper presents photoluminescence (PL) images along with statistical analysis used to study the photo-induced exciton behavior. Both uniform and dramatic PL intensity quenching has been observed over entire measured regions, consistently demonstrating excellent exciton separation in the devices. PMID:28422117
Muselík, Jan; Franc, Aleš; Doležel, Petr; Goněc, Roman; Krondlová, Anna; Lukášová, Ivana
2014-09-01
The article describes the development and production of tablets by direct compression of powder mixtures. The aim was to describe the impact of filler particle size and of the time of lubricant addition during mixing on content uniformity, in accordance with the Good Manufacturing Practice (GMP) process validation requirements. Production processes are regulated by complex directives, forcing producers to validate the content uniformity of intermediates as well as final products using sophisticated methods. Cutting down production time and material use, shortening analyses, and fast and reliable statistical evaluation of results can reduce the final price without affecting product quality. The manufacturing process of directly compressed tablets containing the low-dose active pharmaceutical ingredient (API) warfarin, with content uniformity passing validation criteria, is used as a model example. Statistical methods proved that the manufacturing process is reproducible. Methods suitable for elucidating various properties of the final blend were used, e.g., measurement of electrostatic charge with a Faraday pail and evaluation of the mutual influences of the investigated variables by partial least squares (PLS) regression. Using these methods, it was shown that a filler with a larger particle size increased the content uniformity of both the blends and the ensuing tablets. Addition of the lubricant, magnesium stearate, during the blending process improved the content uniformity of blends containing the filler with larger particles; this appears to be caused by reduced sampling error due to the suppression of electrostatic charge.
Yang, Chan; Xu, Bing; Zhang, Zhi-Qiang; Wang, Xin; Shi, Xin-Yuan; Fu, Jing; Qiao, Yan-Jiang
2016-10-01
Blending uniformity is essential to ensure the homogeneity of Chinese medicine formula particles within each batch. This study was based on the blending process of ebony spray-dried powder and dextrin (the proportion of dextrin was 10%), in which near infrared (NIR) diffuse reflectance spectra collected from six different sampling points were analyzed in combination with a moving window F test method in order to assess the uniformity of the blending process. The method was validated against changes in citric acid content determined by HPLC. The results of the moving window F test method showed that the mixture of ebony spray-dried powder and dextrin was homogeneous during 200-300 r and segregated during 300-400 r. An advantage of this method is that the threshold value is defined statistically, not empirically, and thus does not suffer from the threshold ambiguities common to the moving block standard deviation (MBSD) approach. This method could be employed to monitor other blending processes of Chinese medicine powders on line. Copyright© by the Chinese Pharmaceutical Association.
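One plausible construction of such a moving window F test is sketched below under assumptions of our own; it is not necessarily the authors' exact formulation. The pooled between-spectrum variance within each window of consecutive NIR spectra is compared, via an F ratio, against that of the final window, which is assumed to be well mixed; blending is declared homogeneous once the ratio stays below the critical value, so the threshold comes from the F distribution rather than from an empirical cutoff as in MBSD.

```python
import numpy as np
from scipy import stats

def moving_window_f(spectra, window=6, alpha=0.01):
    """Sketch of a moving window F test for blend homogeneity.
    spectra: array (n_samples, n_wavelengths), in acquisition order."""
    pooled_var = lambda w: np.mean(np.var(w, axis=0, ddof=1))
    ref = pooled_var(spectra[-window:])        # assumed well-mixed reference
    dof = (window - 1) * spectra.shape[1]      # crude pooled degrees of freedom
    f_crit = stats.f.ppf(1 - alpha, dof, dof)
    out = []
    for i in range(len(spectra) - window + 1):
        f = pooled_var(spectra[i:i + window]) / ref
        out.append((i, f, f < f_crit))         # True -> homogeneous
    return out

rng = np.random.default_rng(0)
scale = np.linspace(2.0, 1.0, 40)[:, None]     # variability decays while blending
spectra = 1.0 + rng.normal(size=(40, 200)) * 0.02 * scale
for i, f, ok in moving_window_f(spectra)[::8]:
    print(f"window starting at spectrum {i:2d}: F = {f:.2f}, homogeneous: {ok}")
```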
NASA Astrophysics Data System (ADS)
Iwakoshi, Takehisa; Hirota, Osamu
2014-10-01
This study tests an interpretation in quantum key distribution (QKD) according to which the trace distance between the distributed quantum state and the ideal mixed state is the maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to satisfy both key uniformity in the context of universal composability and an operational meaning for the failure probability of the key extraction. However, this proposal had not been verified concretely for many years, while H. P. Yuen and O. Hirota have cast doubt on the interpretation since 2009. To examine it, a physical random number generator was employed to evaluate key uniformity in QKD. We calculated the statistical distance, which corresponds to the trace distance in quantum theory once a quantum measurement is made, and compared it with the failure probability to determine whether universal composability was obtained. As a result, the statistical distance between the probability distribution of the physical random numbers and the ideal uniform distribution was very large. It is also explained why the trace distance is not suitable for guaranteeing the security of QKD from the viewpoint of quantum binary decision theory.
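The classical quantity computed here, the statistical (total variation) distance between the empirical distribution of the generator's output and the ideal uniform distribution, takes only a few lines. Note that for K outcomes and N samples, even a perfect generator yields an empirical distance of order sqrt(K/(2*pi*N)), which is one reason a measured distance can be "very large" relative to a composability target; the byte-valued output below is a hypothetical stand-in for the physical random numbers used in the study.

```python
import numpy as np

def statistical_distance(samples, n_outcomes):
    """Total variation distance between the empirical distribution of
    integer samples in [0, n_outcomes) and the ideal uniform one; this
    is the classical counterpart of the trace distance after measurement."""
    counts = np.bincount(np.asarray(samples), minlength=n_outcomes)
    return 0.5 * np.abs(counts / counts.sum() - 1.0 / n_outcomes).sum()

rng = np.random.default_rng(7)
keys = rng.integers(0, 256, size=10_000)   # stand-in for physical RNG bytes
d = statistical_distance(keys, 256)
print(f"statistical distance from uniform: {d:.4f}")
print(f"expected scale for an ideal source: {np.sqrt(256 / (2 * np.pi * 10_000)):.4f}")
```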
Paing, Htoo W; Marcus, R Kenneth
2018-03-12
A practical method for the preparation of solution residue samples for analysis with the ambient desorption liquid sampling-atmospheric pressure glow discharge optical emission spectroscopy (AD-LS-APGD-OES) microplasma is described. Initial efforts involving placement of solution aliquots in wells drilled into copper substrates proved unsuccessful. A design-of-experiment (DOE) approach was carried out to determine the influential factors during sample deposition, including solution volume, solute concentration, number of droplets deposited, and the solution matrix. These various aspects are manifested in the mass of analyte deposited as well as the size/shape of the product residue. Statistical analysis demonstrated that only those initial attributes were significant factors for the emission response of the analyte. Various approaches were investigated to better control the location/uniformity of the deposited sample. Three alternative substrates, a glass slide, a poly(tetrafluoro)ethylene (PTFE) sheet, and a polydimethylsiloxane (PDMS)-coated glass slide, were evaluated with respect to microplasma analytical performance. Co-deposition with simple organic dyes provided an accurate means of determining the location of the analyte with only minor influence on emission responses. The PDMS-coated glass provided the best performance by virtue of yielding a uniform spatial distribution of the residue material. This uniformity improved the limits of detection by approximately 22× for 20 μL depositions and 4× for 2 μL depositions relative to the other two substrates. While they operate by fundamentally different processes, this choice of substrate is not restricted to the LS-APGD, but may also be applicable to other AD methods such as DESI, DART, or LIBS. Further developments will be directed towards a field-deployable ambient desorption OES source for quantitative analysis of microvolume solution residues of nuclear forensics importance.
Tian, Ming; Gao, Yi; Liu, Yi; Liao, Yiliang; Hedin, Nyle E.; Fong, Hao
2008-01-01
Objective To investigate the reinforcement of Bis-GMA/TEGDMA dental resins (without conventional glass filler) and composites (with conventional glass filler) with various mass fractions of nano fibrillar silicate (FS). Methods Three dispersion methods were studied to separate the silanized FS as nano-scaled single crystals and uniformly distribute them into dental matrices. The photo-curing behaviors of the Bis-GMA/TEGDMA/FS resins were monitored in situ by RT-NIR to study the photopolymerization rate and the vinyl double bond conversion. Mechanical properties (flexural strength, elastic modulus and work of fracture) of the nano FS reinforced resins/composites were tested, and Analysis of Variance (ANOVA) was used for the statistical analysis of the acquired data. The morphology of nano FS and the representative fracture surfaces of its reinforced resins/composites were examined by SEM/TEM. Results Impregnation of small mass fractions (1 % and 2.5 %) of nano FS into Bis-GMA/TEGDMA (50/50 mass ratio) dental resins/composites improved the mechanical properties substantially. Larger mass fraction of impregnation (7.5 %), however, did not further improve the mechanical properties (one way ANOVA, P > 0.05) and may even reduce the mechanical properties. The high degree of separation and uniform distribution of nano FS into dental resins/composites was a challenge. Impregnation of nano FS into dental resins/composites could result in two opposite effects: a reinforcing effect due to the highly separated and uniformly distributed nano FS single crystals, or a weakening effect due to the formation of FS agglomerates/particles. Significance Uniform distribution of highly separated nano FS single crystals into dental resins/composites could significantly improve the mechanical properties of the resins/composites. PMID:17572485
Tian, Ming; Gao, Yi; Liu, Yi; Liao, Yiliang; Hedin, Nyle E; Fong, Hao
2008-02-01
To investigate the reinforcement of Bis-GMA/TEGDMA dental resins (without conventional glass filler) and composites (with conventional glass filler) with various mass fractions of nano fibrillar silicate (FS). Three dispersion methods were studied to separate the silanized FS as nano-scaled single crystals and uniformly distribute them into dental matrices. The photo-curing behaviors of the Bis-GMA/TEGDMA/FS resins were monitored in situ by RT-NIR to study the photopolymerization rate and the vinyl double bond conversion. Mechanical properties (flexural strength, elastic modulus and work-of-fracture) of the nano FS reinforced resins/composites were tested, and analysis of variance (ANOVA) was used for the statistical analysis of the acquired data. The morphology of nano FS and the representative fracture surfaces of its reinforced resins/composites were examined by SEM/TEM. Impregnation of small mass fractions (1% and 2.5%) of nano FS into Bis-GMA/TEGDMA (50/50 mass ratio) dental resins/composites improved the mechanical properties substantially. Larger mass fraction of impregnation (7.5%), however, did not further improve the mechanical properties (one way ANOVA, P>0.05) and may even reduce the mechanical properties. The high degree of separation and uniform distribution of nano FS into dental resins/composites was a challenge. Impregnation of nano FS into dental resins/composites could result in two opposite effects: a reinforcing effect due to the highly separated and uniformly distributed nano FS single crystals, or a weakening effect due to the formation of FS agglomerates/particles. Uniform distribution of highly separated nano FS single crystals into dental resins/composites could significantly improve the mechanical properties of the resins/composites.
Local conformity induced global oscillation
NASA Astrophysics Data System (ADS)
Li, Dong; Li, Wei; Hu, Gang; Zheng, Zhigang
2009-04-01
The ‘rock-paper-scissors’ game model is investigated with the effect of the psychology of conformity taken into account. The interaction between any two agents is global, but the conformity strategy is local to individuals. In the statistical sense, the probability of the appearance of each strategy is uniform. Dynamical analysis of this model indicates that the equilibrium state may lose its stability at a threshold, where it is replaced by a globally oscillating state. The global oscillation is induced by the local conformity, which originates from the synchronization of individual strategies.
Asymptotic modal analysis and statistical energy analysis
NASA Technical Reports Server (NTRS)
Dowell, Earl H.
1988-01-01
Statistical Energy Analysis (SEA) is defined by considering the asymptotic limit of Classical Modal Analysis, an approach called Asymptotic Modal Analysis (AMA). The general approach is described for both structural and acoustical systems. The theoretical foundation is presented for structural systems, and experimental verification is presented for a structural plate responding to a random force. Work accomplished subsequent to the grant initiation focuses on the acoustic response of an interior cavity (i.e., an aircraft or spacecraft fuselage) with a portion of the wall vibrating in a large number of structural modes. First results were presented at the ASME Winter Annual Meeting in December 1987, and accepted for publication in the Journal of Vibration, Acoustics, Stress and Reliability in Design. It is shown that, asymptotically, as the number of acoustic modes excited becomes large, the pressure level in the cavity becomes uniform except at the cavity boundaries. However, the mean square pressure at the cavity corner, edge and wall is, respectively, 8, 4, and 2 times the value in the cavity interior. It is also shown that when the portion of the wall which is vibrating is near a cavity corner or edge, the response is significantly higher.
Use of the wavelet transform to investigate differences in brain PET images between patient groups
NASA Astrophysics Data System (ADS)
Ruttimann, Urs E.; Unser, Michael A.; Rio, Daniel E.; Rawlings, Robert R.
1993-06-01
Suitability of the wavelet transform was studied for the analysis of glucose utilization differences between subject groups as displayed in PET images. To strengthen statistical inference, it was of particular interest to investigate the tradeoff between signal localization and image decomposition into uncorrelated components. This tradeoff is shown to be controlled by wavelet regularity, with the optimal compromise attained by third-order orthogonal spline wavelets. Testing of the ensuing wavelet coefficients identified only about 1.5% as statistically different (p < .05) from noise; these coefficients then served to resynthesize the difference images by the inverse wavelet transform. The resulting images displayed relatively uniform, noise-free regions of significant differences with, owing to the good localization maintained by the wavelets, very little reconstruction artifact.
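The pipeline described, transforming the difference image, keeping only coefficients that clear a noise threshold, and resynthesizing with the inverse transform, can be sketched with PyWavelets. The 'db3' basis below is a stand-in, since the third-order orthogonal spline wavelets used in the paper are not among PyWavelets' built-in families, and the MAD-based noise threshold replaces the paper's formal statistical testing of coefficients.

```python
import numpy as np
import pywt  # PyWavelets

def filter_difference_image(diff, wavelet="db3", level=3, z=1.96):
    """Threshold wavelet coefficients of a group-difference image
    against an estimated noise level, then invert the transform."""
    coeffs = pywt.wavedec2(diff, wavelet, level=level)
    # noise scale from the finest diagonal detail (median absolute deviation)
    sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
    kept = [coeffs[0]] + [
        tuple(np.where(np.abs(c) > z * sigma, c, 0.0) for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(kept, wavelet)

rng = np.random.default_rng(3)
diff = rng.normal(size=(128, 128))     # noise-only background
diff[40:60, 40:60] += 2.0              # a genuine regional difference
restored = filter_difference_image(diff)
print(f"peak of filtered difference image: {restored.max():.2f}")
```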
Quantifying spatial and temporal trends in beach-dune volumetric changes using spatial statistics
NASA Astrophysics Data System (ADS)
Eamer, Jordan B. R.; Walker, Ian J.
2013-06-01
Spatial statistics are generally underutilized in coastal geomorphology, despite offering great potential for identifying and quantifying spatial-temporal trends in landscape morphodynamics. In particular, local Moran's Ii provides a statistical framework for detecting clusters of significant change in an attribute (e.g., surface erosion or deposition) and quantifying how this changes over space and time. This study analyzes and interprets spatial-temporal patterns in sediment volume changes in a beach-foredune-transgressive dune complex following removal of invasive marram grass (Ammophila spp.). Results are derived by detecting significant changes in post-removal repeat DEMs derived from topographic surveys and airborne LiDAR. The study site was separated into discrete, linked geomorphic units (beach, foredune, transgressive dune complex) to facilitate sub-landscape scale analysis of volumetric change and sediment budget responses. Difference surfaces derived from a pixel-subtraction algorithm between interval DEMs and the LiDAR baseline DEM were filtered using the local Moran's Ii method and two different spatial weights (1.5 and 5 m) to detect statistically significant change. Moran's Ii results were compared with those derived from a more spatially uniform statistical method that uses a simpler student's t distribution threshold for change detection. Morphodynamic patterns and volumetric estimates were similar between the uniform geostatistical method and Moran's Ii at a spatial weight of 5 m while the smaller spatial weight (1.5 m) consistently indicated volumetric changes of less magnitude. The larger 5 m spatial weight was most representative of broader site morphodynamics and spatial patterns while the smaller spatial weight provided volumetric changes consistent with field observations. All methods showed foredune deflation immediately following removal with increased sediment volumes into the spring via deposition at the crest and on lobes in the lee, despite erosion on the stoss slope and dune toe. Generally, the foredune became wider by landward extension and the seaward slope recovered from erosion to a similar height and form to that of pre-restoration despite remaining essentially free of vegetation.
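For readers unfamiliar with the statistic, a compact illustration of local Moran's Ii with a binary distance-band spatial weight (the role the 1.5 m and 5 m weights play above) follows; the survey coordinates and elevation changes are synthetic.

```python
import numpy as np

def local_morans_i(values, coords, band=5.0):
    """Local Moran's Ii with a binary distance-band weight,
    row-standardized; high positive Ii flags a point sitting in a
    cluster of similar values (e.g., a deposition patch)."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()
    m2 = (z ** 2).mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = (d > 0) & (d <= band)                  # neighbors within the band
    w = w / np.maximum(w.sum(axis=1, keepdims=True), 1)
    return (z / m2) * (w @ z)

rng = np.random.default_rng(5)
coords = rng.uniform(0, 100, size=(400, 2))    # survey points (m)
dz = rng.normal(0, 0.05, size=400)             # elevation change (m)
dz[np.linalg.norm(coords - [30, 30], axis=1) < 10] += 0.4   # deposition patch
ii = local_morans_i(dz, coords, band=5.0)
print(f"max local Moran's Ii: {ii.max():.2f} at point {ii.argmax()}")
```

In practice significance is assessed by conditional permutation of the values, after which only points with significant Ii are kept for the volumetric sums, mirroring the filtering step described above.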
Robertson, David S; Prevost, A Toby; Bowden, Jack
2016-09-30
Seamless phase II/III clinical trials offer an efficient way to select an experimental treatment and perform confirmatory analysis within a single trial. However, combining the data from both stages in the final analysis can induce bias into the estimates of treatment effects. Methods for bias adjustment developed thus far have made restrictive assumptions about the design and selection rules followed. In order to address these shortcomings, we apply recent methodological advances to derive the uniformly minimum variance conditionally unbiased estimator for two-stage seamless phase II/III trials. Our framework allows for the precision of the treatment arm estimates to take arbitrary values, can be utilised for all treatments that are taken forward to phase III and is applicable when the decision to select or drop treatment arms is driven by a multiplicity-adjusted hypothesis testing procedure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Afifah, M. R. Nurul; Aziz, A. Che; Roslan, M. Kamal
2015-09-01
Sediment samples consisting of quaternary bottom sediments were collected from the shallow marine zone off Kuala Besar, Kelantan, outwards to the basin floor of the South China Sea. Sixty-five samples were analysed for their grain size distributions and statistical relationships. Basic statistics such as mean, standard deviation, skewness and kurtosis were calculated and used to differentiate the depositional environments of the sediments and to assess whether deposition was dominated by beach or river processes. The sediments varied in sorting from very well sorted to poorly sorted, from strongly negatively skewed to strongly positively skewed, and from extremely leptokurtic to very platykurtic in nature. Bivariate plots between the grain-size parameters were then interpreted, and the Coarsest-Median (CM) pattern showed a trend suggesting that the sediments are influenced by three ongoing hydrodynamic factors, namely turbidity currents, littoral drift and wave dynamics, which control the sediment distribution pattern in various ways.
Distribution of water quality parameters in Dhemaji district, Assam (India).
Buragohain, Mridul; Bhuyan, Bhabajit; Sarma, H P
2010-07-01
The primary objective of this study is to present a statistically significant water quality database for Dhemaji district, Assam (India), with special reference to pH, fluoride, nitrate, arsenic, iron, sodium and potassium. Twenty-five water samples collected from different locations in five development blocks of Dhemaji district have been studied separately. The implications presented are based on statistical analyses of the raw data. Normal distribution statistics and reliability analysis (correlation and covariance matrices) have been employed to find the distribution pattern, localisation of data, and other related information. Statistical observations show that all the parameters under investigation exhibit a non-uniform distribution with a long asymmetric tail on either the right or the left side of the median. The width of the third quartile was consistently found to be greater than that of the second quartile for each parameter. Differences among mean, mode and median, together with significant skewness and kurtosis values, indicate that the distributions of the various water quality parameters in the study area are far from normal. Thus, the intrinsic water quality is not encouraging, owing to the unsymmetrical distribution of the various water quality parameters in the study area.
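The shape diagnostics this study relies on (quartile widths, skewness, kurtosis, and disagreement among mean, median and mode) are quick to compute; the fluoride concentrations below are a synthetic right-skewed stand-in for the field data.

```python
import numpy as np
from scipy import stats

def shape_summary(x):
    """Distribution diagnostics of the kind used in the study."""
    x = np.asarray(x, dtype=float)
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return {"mean": x.mean(), "median": q2,
            "second_quartile_width": q2 - q1,
            "third_quartile_width": q3 - q2,
            "skewness": stats.skew(x),
            "excess_kurtosis": stats.kurtosis(x)}

fluoride = np.random.default_rng(11).lognormal(-1.0, 0.8, size=25)  # mg/L, synthetic
print({k: round(float(v), 3) for k, v in shape_summary(fluoride).items()})
# a third-quartile width exceeding the second-quartile width signals
# the long right tail reported for most parameters
```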
Automatic Classification of Medical Text: The Influence of Publication Form
Cole, William G.; Michael, Patricia A.; Stewart, James G.; Blois, Marsden S.
1988-01-01
Previous research has shown that within the domain of medical journal abstracts the statistical distribution of words is neither random nor uniform, but is highly characteristic. Many words are used mainly or solely by one medical specialty or when writing about one particular level of description. Due to this regularity of usage, automatic classification within journal abstracts has proved quite successful. The present research asks two further questions. It investigates whether this statistical regularity and automatic classification success can also be achieved in medical textbook chapters. It then goes on to see whether the statistical distribution found in textbooks is sufficiently similar to that found in abstracts to permit accurate classification of abstracts based solely on previous knowledge of textbooks. 14 textbook chapters and 45 MEDLINE abstracts were submitted to an automatic classification program that had been trained only on chapters drawn from a standard textbook series. Statistical analysis of the properties of abstracts vs. chapters revealed important differences in word use. Automatic classification performance was good for chapters, but poor for abstracts.
Chen, Qi; Zhao, Yong; Wu, Weidong; Xu, Tao; Fong, Hao
2012-01-01
Objective To investigate the reinforcement of Bis-GMA/TEGDMA dental resins (without conventional glass filler) and the corresponding composites (with conventional glass filler) containing varied mass fractions of halloysite nanotubes (HNTs). Methods Three dispersion methods were studied to separate the silanized halloysite as individual HNTs and to distribute them uniformly into dental matrices. Photopolymerization-induced volumetric shrinkage was measured using a mercury dilatometer. Real-time near infrared spectroscopy was adopted to study the degree of vinyl double bond conversion and the photopolymerization rate. Mechanical properties of the composites were tested with a universal mechanical testing machine. Analysis of Variance (ANOVA) was used for the statistical analysis of the acquired data. Morphologies of halloysite/HNTs and representative fracture surfaces of the reinforced dental resins/composites were examined by SEM and TEM. Results Impregnation of small mass fractions (e.g., 1% and 2.5%) of the silanized HNTs in Bis-GMA/TEGDMA dental resins/composites improved mechanical properties significantly; however, large mass fractions (e.g., 5%) of impregnation did not further improve the mechanical properties. The impregnation of HNTs into dental resins/composites could result in two opposite effects: a reinforcing effect due to the highly separated and uniformly distributed HNTs, and a weakening effect due to the formation of HNT agglomerates/particles. Significance Uniform distribution of a small amount of well-separated silanized HNTs into Bis-GMA/TEGDMA dental resins/composites could result in substantial improvements in mechanical properties. PMID:22796038
Chen, Qi; Zhao, Yong; Wu, Weidong; Xu, Tao; Fong, Hao
2012-10-01
To investigate the reinforcement of Bis-GMA/TEGDMA dental resins (without conventional glass filler) and the corresponding composites (with conventional glass filler) containing varied mass fractions of halloysite nanotubes (HNTs). Three dispersion methods were studied to separate the silanized halloysite as individual HNTs and to distribute them uniformly into dental matrices. Photopolymerization-induced volumetric shrinkage was measured using a mercury dilatometer. Real-time near infrared spectroscopy was adopted to study the degree of vinyl double bond conversion and the photopolymerization rate. Mechanical properties of the composites were tested with a universal mechanical testing machine. Analysis of variance (ANOVA) was used for the statistical analysis of the acquired data. Morphologies of halloysite/HNTs and representative fracture surfaces of the reinforced dental resins/composites were examined by SEM and TEM. Impregnation of small mass fractions (e.g., 1% and 2.5%) of the silanized HNTs in Bis-GMA/TEGDMA dental resins/composites improved mechanical properties significantly; however, large mass fractions (e.g., 5%) of impregnation did not further improve the mechanical properties. The impregnation of HNTs into dental resins/composites could result in two opposite effects: a reinforcing effect due to the highly separated and uniformly distributed HNTs, and a weakening effect due to the formation of HNT agglomerates/particles. Uniform distribution of a small amount of well-separated silanized HNTs into Bis-GMA/TEGDMA dental resins/composites could result in substantial improvements in mechanical properties. Copyright © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Patra, Bishnubrata; Peng, Chien-Chung; Liao, Wei-Hao; Lee, Chau-Hwang; Tung, Yi-Chung
2016-02-01
Three-dimensional (3D) tumor spheroid possesses great potential as an in vitro model to improve predictive capacity for pre-clinical drug testing. In this paper, we combine advantages of flow cytometry and microfluidics to perform drug testing and analysis on a large number (5000) of uniform sized tumor spheroids. The spheroids are formed, cultured, and treated with drugs inside a microfluidic device. The spheroids can then be harvested from the device without tedious operation. Due to the ample cell numbers, the spheroids can be dissociated into single cells for flow cytometry analysis. Flow cytometry provides statistical information in single cell resolution that makes it feasible to better investigate drug functions on the cells in more in vivo-like 3D formation. In the experiments, human hepatocellular carcinoma cells (HepG2) are exploited to form tumor spheroids within the microfluidic device, and three anti-cancer drugs: Cisplatin, Resveratrol, and Tirapazamine (TPZ), and their combinations are tested on the tumor spheroids with two different sizes. The experimental results suggest the cell culture format (2D monolayer vs. 3D spheroid) and spheroid size play critical roles in drug responses, and also demonstrate the advantages of bridging the two techniques in pharmaceutical drug screening applications.
Applications of Bayesian Statistics to Problems in Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Meegan, Charles A.
1997-01-01
This presentation will describe two applications of Bayesian statistics to Gamma-Ray Bursts (GRBs). The first attempts to quantify the evidence for a cosmological versus galactic origin of GRBs using only the observations of the dipole and quadrupole moments of the angular distribution of bursts. The cosmological hypothesis predicts isotropy, while the galactic hypothesis is assumed to produce a uniform probability distribution over positive values for these moments. The observed isotropic distribution indicates that the Bayes factor for the cosmological hypothesis over the galactic hypothesis is about 300. Another application of Bayesian statistics is in the estimation of chance associations of optical counterparts with galaxies. The Bayesian approach is preferred to frequentist techniques here because it easily accounts for galaxy mass distributions and because one can incorporate three disjoint hypotheses: (1) bursts come from galactic centers, (2) bursts come from galaxies in proportion to luminosity, and (3) bursts do not come from external galaxies. This technique was used in the analysis of the optical counterpart to GRB970228.
Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach
NASA Technical Reports Server (NTRS)
Mata, Carlos T.; Rakov, V. A.
2008-01-01
There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate origins of downward propagating leaders and a lognormal distribution to generate returns stroke peak currents. Downward leaders propagate vertically downward and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for 10,000 years with an assumed ground flash density and peak current distributions, and the output of the program is the probability of direct attachment to objects of interest with its corresponding peak current distribution.
Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach
NASA Technical Reports Server (NTRS)
Mata, Carlos T.; Rakov, V. A.
2008-01-01
There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate the origin of downward propagating leaders and a lognormal distribution to generate the corresponding returns stroke peak currents. Downward leaders propagate vertically downward and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for N number of years with an assumed ground flash density and the output of the program is the probability of direct attachment to objects of interest with its corresponding peak current distribution.
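A stripped-down version of this Monte Carlo procedure is sketched below: leader origins are drawn from a uniform distribution, peak currents from a lognormal (the 31 kA median and 0.48 sigma are commonly used first-stroke values, assumed here rather than taken from the paper), and the electrogeometric striking distance r = 10·I^0.65 m decides attachment to the nearest reachable object. Object positions are hypothetical, heights are ignored, and leaders are treated as vertical, so this is a flat-geometry caricature of the full tool.

```python
import numpy as np

rng = np.random.default_rng(8)

# hypothetical (x, y) attachment points of objects of interest, in meters
objects = np.array([[0.0, 0.0], [25.0, 0.0], [-30.0, 10.0]])
names = ["tower", "platform", "equipment"]

# number of flashes ~ ground flash density x area x simulated years
n = 100_000
xy = rng.uniform(-500.0, 500.0, size=(n, 2))         # uniform leader origins
i_peak = rng.lognormal(np.log(31.0), 0.48, size=n)   # peak currents, kA
r_strike = 10.0 * i_peak ** 0.65                     # striking distance, m

# attach each vertical leader to the closest object within striking distance
d = np.linalg.norm(xy[:, None, :] - objects[None, :, :], axis=2)
reachable = d <= r_strike[:, None]
masked = np.where(reachable, d, np.inf)
hit = np.where(reachable.any(axis=1), np.argmin(masked, axis=1), -1)

for k, name in enumerate(names):
    sel = hit == k
    if sel.any():
        print(f"{name}: {sel.sum()} strikes, median peak current "
              f"{np.median(i_peak[sel]):.1f} kA")
```

Because large peak currents carry large striking distances, objects preferentially collect the high-current strokes, which is why the method reports a per-object peak current distribution rather than a single attachment count.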
Duggan, Katherine A; McDevitt, Elizabeth A; Whitehurst, Lauren N; Mednick, Sara C
2018-01-01
Although napping has received attention because of its associations with health and use as a method to understand the function of sleep, to our knowledge no study has systematically and statistically assessed reasons for napping. Using factor analysis, we determined the underlying structure of reasons for napping in diverse undergraduates (N = 430, 59% female) and examined their relationships with self-reported sleep, psychological health, and physical health. The five reasons for napping can be summarized using the acronym DREAM (Dysregulative, Restorative, Emotional, Appetitive, and Mindful). Only Emotional reasons for napping were uniformly related to lower well-being. The use of factor analysis raises possibilities for future research, including examining the stability, structure, and psychological and physical health processes related to napping throughout the lifespan.
To nap, perchance to DREAM: A factor analysis of college students’ self-reported reasons for napping
Duggan, Katherine A.; McDevitt, Elizabeth A.; Whitehurst, Lauren N.; Mednick, Sara C.
2017-01-01
Although napping has received attention because of its associations with health and use as a method to understand the function of sleep, to our knowledge no study has systematically and statistically assessed reasons for napping. Using factor analysis, we determined the underlying structure of reasons for napping in diverse undergraduates (N=430, 59% female) and examined their relationships with self-reported sleep, psychological, and physical health. The 5 reasons for napping can be summarized using the acronym DREAM (Dysregulative, Restorative, Emotional, Appetitive, and Mindful). Only Emotional reasons for napping were uniformly related to lower well-being. The use of factor analysis raises possibilities for future research, including examining the stability, structure, and psychological and physical health processes related to napping throughout the lifespan. PMID:27347727
Evaluating the uniformity of color spaces and performance of color difference formulae
NASA Astrophysics Data System (ADS)
Lian, Yusheng; Liao, Ningfang; Wang, Jiajia; Tan, Boneng; Liu, Zilong
2010-11-01
Using small color difference data sets (the MacAdam ellipses and the RIT-DuPont suprathreshold color difference ellipses) and large color difference data sets (the Munsell Renovation Data and the OSA Uniform Color Scales), the uniformity of several color spaces and the performance of the color difference formulae based on them are evaluated. The color spaces considered are CIELAB, DIN99d, IPT, and CIECAM02-UCS. It is found that the uniformity of lightness is better than that of saturation and hue. For all these color spaces, the uniformity in the blue region is inferior to that in other regions. The uniformity of CIECAM02-UCS is superior to the other color spaces over the whole color-difference range from small to large. The uniformity of CIELAB and IPT for the large color difference data sets is better than that for the small color difference data sets, whereas DIN99d shows the opposite behavior. Two common performance factors (PF/3 and STRESS) and the statistical F-test were calculated to assess the performance of the color difference formulae. The results show that the performance of the color difference formulae based on these four color spaces is consistent with the uniformity of the corresponding color spaces.
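Of the two performance factors used, STRESS has the simpler closed form, and differences between formulae can be tested because the ratio of squared STRESS values follows an F distribution. A minimal sketch, with hypothetical visual and computed difference data:

```python
import numpy as np

def stress(dE, dV):
    """STRESS index between computed color differences dE and visual
    differences dV; 0 means perfect agreement, larger means worse."""
    dE, dV = np.asarray(dE, float), np.asarray(dV, float)
    f1 = np.sum(dE * dV) / np.sum(dV ** 2)     # optimal scaling factor
    return 100.0 * np.sqrt(np.sum((dE - f1 * dV) ** 2) /
                           np.sum((f1 * dV) ** 2))

rng = np.random.default_rng(2)
dV = rng.uniform(0.5, 5.0, 100)                # hypothetical visual differences
print(f"good formula:   STRESS = {stress(dV * 1.1 + rng.normal(0, 0.2, 100), dV):.1f}")
print(f"poorer formula: STRESS = {stress(dV + rng.normal(0, 1.0, 100), dV):.1f}")
```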
Shang, Eric K; Nathan, Derek P; Sprinkle, Shanna R; Fairman, Ronald M; Bavaria, Joseph E; Gorman, Robert C; Gorman, Joseph H; Jackson, Benjamin M
2013-09-10
Wall stress calculated using finite element analysis has been used to predict rupture risk of aortic aneurysms. Prior models often assume uniform aortic wall thickness and fusiform geometry. We examined the effects of including local wall thickness, intraluminal thrombus, calcifications, and saccular geometry on peak wall stress (PWS) in finite element analysis of descending thoracic aortic aneurysms. Computed tomographic angiography of descending thoracic aortic aneurysms (n=10 total, 5 fusiform and 5 saccular) underwent 3-dimensional reconstruction with custom algorithms. For each aneurysm, an initial model was constructed with uniform wall thickness. Experimental models explored the addition of variable wall thickness, calcifications, and intraluminal thrombus. Each model was loaded with 120 mm Hg pressure, and von Mises PWS was computed. The mean PWS of uniform wall thickness models was 410 ± 111 kPa. The imposition of variable wall thickness increased PWS (481 ± 126 kPa, P<0.001). Although the addition of calcifications was not statistically significant (506 ± 126 kPa, P=0.07), the addition of intraluminal thrombus to variable wall thickness (359 ± 86 kPa, P ≤ 0.001) reduced PWS. A final model incorporating all features also reduced PWS (368 ± 88 kPa, P<0.001). Saccular geometry did not increase diameter-normalized stress in the final model (77 ± 7 versus 67 ± 12 kPa/cm, P=0.22). Incorporation of local wall thickness can significantly increase PWS in finite element analysis models of thoracic aortic aneurysms. Incorporating variable wall thickness, intraluminal thrombus, and calcifications significantly impacts computed PWS of thoracic aneurysms; sophisticated models may, therefore, be more accurate in assessing rupture risk. Saccular aneurysms did not demonstrate a significantly higher normalized PWS than fusiform aneurysms.
A note about Gaussian statistics on a sphere
NASA Astrophysics Data System (ADS)
Chave, Alan D.
2015-11-01
The statistics of directional data on a sphere can be modelled either using the Fisher distribution that is conditioned on the magnitude being unity, in which case the sample space is confined to the unit sphere, or using the latitude-longitude marginal distribution derived from a trivariate Gaussian model that places no constraint on the magnitude. These two distributions are derived from first principles and compared. The Fisher distribution more closely approximates the uniform distribution on a sphere for a given small value of the concentration parameter, while the latitude-longitude marginal distribution is always slightly larger than the Fisher distribution at small off-axis angles for large values of the concentration parameter. Asymptotic analysis shows that the two distributions only become equivalent in the limit of large concentration parameter and very small off-axis angle.
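For intuition, a small sketch of the two off-axis angle densities compared above, written from the standard definitions; note that as kappa approaches zero the Fisher density reduces to the uniform-on-the-sphere density, consistent with the statement about small concentration parameters.

```python
import numpy as np

def fisher_offaxis_pdf(theta, kappa):
    """Off-axis angle density for a Fisher distribution with
    concentration parameter kappa (theta in radians)."""
    return kappa / (2.0 * np.sinh(kappa)) * np.exp(kappa * np.cos(theta)) * np.sin(theta)

def uniform_offaxis_pdf(theta):
    """Off-axis angle density for directions uniform on the sphere."""
    return 0.5 * np.sin(theta)
```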
Draborg, Eva; Andersen, Christian Kronborg
2006-01-01
Health technology assessment (HTA) has been used as input in decision making worldwide for more than 25 years. However, no uniform definition of HTA or agreement on assessment methods exists, leaving open the question of what influences the choice of assessment methods in HTAs. The objective of this study is to analyze statistically a possible relationship between the methods of assessment used in practical HTAs, the type of assessed technology, the type of assessors, and the year of publication. A sample of 433 HTAs published by eleven leading institutions or agencies in nine countries was reviewed and analyzed by multiple logistic regression. The study shows that outsourcing of HTA reports to external partners is associated with a higher likelihood of using assessment methods such as meta-analysis, surveys, economic evaluations, and randomized controlled trials, and with a lower likelihood of using assessment methods such as literature reviews and "other methods". The year of publication was statistically related to the inclusion of economic evaluations, with the likelihood decreasing over the period studied. When pharmaceuticals were the assessed type of technology, the likelihood of using economic evaluations, surveys, and "other methods" decreased. During the period from 1989 to 2002, no major developments in assessment methods used in practical HTAs were shown statistically in a sample of 433 HTAs worldwide. Outsourcing to external assessors has a statistically significant influence on the choice of assessment methods.
Luminaire layout: Design and implementation
NASA Technical Reports Server (NTRS)
Both, A. J.
1994-01-01
The information contained in this report was presented during the discussion regarding guidelines for PAR uniformity in greenhouses. The data show a lighting uniformity analysis in a research greenhouse for rose production at the Cornell University campus. The luminaire layout was designed using the computer program Lumen-Micro. After implementation of the design, accurate measurements were taken in the greenhouse, and the uniformity analyses for the design and the implementation were compared. A study of several supplemental lighting installations resulted in the following recommendations: include only the actual growing area in the lighting uniformity analysis; for growing areas up to 20 square meters, take four measurements per square meter; for growing areas above 20 square meters, take one measurement per square meter; use one of the uniformity criteria and frequency graphs to compare lighting uniformity amongst designs; and design for a uniformity criterion of at least 0.75, with the fraction within +/- 15% of the average PAR value close to one.
NASA Astrophysics Data System (ADS)
Eason, Thomas J.; Bond, Leonard J.; Lozev, Mark G.
2015-03-01
Crude oil is becoming more corrosive, with higher sulfur concentration, chloride concentration, and acidity. The increasing presence of naphthenic acids in oils, under various environmental conditions at temperatures between 150°C and 400°C, can lead to different internal degradation morphologies in refineries: uniform, non-uniform, or localized pitting. Improved corrosion measurement technology is needed to better quantify the integrity risk associated with refining crude oils of higher acid concentration. This paper first reports a consolidated review of corrosion inspection technology to establish the foundation for structural health monitoring of localized internal corrosion in high-temperature piping. An approach under investigation is to employ flexible ultrasonic thin-film piezoelectric transducer arrays, fabricated by the sol-gel manufacturing process, for monitoring localized internal corrosion at temperatures up to 400°C. A statistical analysis of sol-gel transducer measurement accuracy, using various time-of-flight thickness calculation algorithms on a flat calibration block, is demonstrated.
NASA Astrophysics Data System (ADS)
Shen, Jian; Liu, Shouhua; Shen, Zicai; Shao, Jianda; Fan, Zhengxiu
2006-03-01
A model for the refractive index of a stratified dielectric substrate was put forward according to theories of inhomogeneous coatings. The substrate was divided into a surface layer, a subsurface layer, and a bulk layer along the direction normal to its surface. Both the surface layer (separated into N1 sublayers of uniform thickness) and the subsurface layer (separated into N2 sublayers of uniform thickness), whose refractive indices follow different statistical distributions, are treated as equivalent to inhomogeneous coatings. A theoretical deduction was carried out by employing the characteristic matrix method of optical coatings, and an example calculation of the optical properties of dielectric coatings is presented. The computed results indicate that substrate subsurface defects can bring about additional bulk scattering and change the propagation characteristics in the thin film and substrate. Therefore, the reflectance, reflective phase shift, and phase difference of an assembly of coatings and substrate deviate from ideal conditions. The model provides useful theoretical guidance for improving the optical properties of dielectric coatings via substrate surface modification.
Extended analysis of Skylab experiment M558 data
NASA Technical Reports Server (NTRS)
Ukanwa, A. O.
1976-01-01
A careful review of the data from Skylab M558 was made in an effort to explain the apparent anomaly of the existence of radial concentration gradients where none should have been observed. The very close modelling of the experimental axial concentration profiles by the unsteady-state one-dimensional solution of Fick's law of self-diffusion in liquid zinc, together with the condition of initially uniform concentration in the radioactive pellet portion of the experimental specimens, would have precluded the appearance of such radial concentration gradients. Statistical analyses were used to test the significance of the observed deviation from radial-concentration homogeneity. A Student's t-test showed that, at the 90% or even the 80% level of significance, there were no significant deviations from uniformity in radial concentrations. It was also concluded that the two likely causes of any deviation that existed were the zinc to zinc-65 bonding procedure and surface phenomena such as surface tension and capillary action.
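For reference, a sketch of the kind of one-dimensional solution invoked above, under the simplifying assumption of a semi-infinite diffusion couple with an initial tracer step at x0 (the Skylab specimens were finite, so the original analysis may have used a series solution instead); c0, x0, and D are illustrative parameters.

```python
import numpy as np
from scipy.special import erf

def axial_profile(x, t, D, c0, x0):
    """Fick's second law, semi-infinite couple: tracer initially at a
    uniform concentration c0 for x < x0 and zero for x > x0."""
    return 0.5 * c0 * (1.0 - erf((x - x0) / (2.0 * np.sqrt(D * t))))
```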
NASA Astrophysics Data System (ADS)
Kozakevych, Roman B.; Korobeinyk, Alina V.; Bolbukh, Yulia M.; Tertykh, Valentin A.; Mikhalovska, Lyuba I.; Zienkiewicz-Strzałka, Malgorzlata; Deryło-Marczewska, Anna
2018-03-01
Silica and copper oxide nanoparticles were embedded into polyvinyl chloride film, and the resulting filled composites were tested as catalysts in the reaction of NO release from appropriate biomolecules. The obtained materials were characterized using scanning electron microscopy, atomic force microscopy, and thermomechanical analysis. It was shown that the introduced particles are distributed uniformly in the polymeric matrix of the hybrid composite and that such a film produces a significant amount of NO when it reacts with S-nitrosothiols. At the same time, the unfilled polyvinyl chloride film had no statistically significant catalytic activity.
Barteneva, Natasha S; Vorobjev, Ivan A
2018-01-01
In this paper, we review some of the recent advances in cellular heterogeneity and single-cell analysis methods. In modern research on cellular heterogeneity, there are four major approaches: analysis of pooled samples, single-cell analysis, high-throughput single-cell analysis, and, lately, integrated analysis of a cellular population at the single-cell level. Recently developed high-throughput single-cell genetic analysis methods such as RNA-Seq require a purification step and destruction of the analyzed cell, and often provide only a snapshot of the investigated cell without spatiotemporal context. Correlative analysis of multiparameter morphological, functional, and molecular information is important for differentiating more uniform groups within the spectrum of different cell types. Simplified distributions (histograms and 2D plots) can underrepresent biologically significant subpopulations. Future directions may include the development of nondestructive methods for dissecting molecular events in intact cells, simultaneous correlative cellular analysis of phenotypic and molecular features by hybrid technologies such as imaging flow cytometry, and further progress in supervised and unsupervised statistical analysis algorithms.
Three-dimensional cellular deformation analysis with a two-photon magnetic manipulator workstation.
Huang, Hayden; Dong, Chen Y; Kwon, Hyuk-Sang; Sutin, Jason D; Kamm, Roger D; So, Peter T C
2002-04-01
The ability to apply quantifiable mechanical stresses at the microscopic scale is critical for studying cellular responses to mechanical forces. This necessitates the use of force transducers that can apply precisely controlled forces to cells while monitoring the responses noninvasively. This paper describes the development of a micromanipulation workstation integrating two-photon, three-dimensional imaging with a high-force, uniform-gradient magnetic manipulator. The uniform-gradient magnetic field applies nearly uniform forces to a large cell population, permitting statistical quantification of select molecular responses to mechanical stresses. The magnetic transducer design is capable of exerting over 200 pN of force on 4.5-μm-diameter paramagnetic particles and over 800 pN on 5.0-μm ferromagnetic particles. These forces vary within +/-10% over an area of 500 × 500 μm². The compatibility with the use of high numerical aperture (approximately 1.0) objectives is an integral part of the workstation design, allowing submicron-resolution, three-dimensional, two-photon imaging. Three-dimensional analyses of cellular deformation under localized mechanical strain are reported. These measurements indicate that the response of cells to large focal stresses may contain three-dimensional global deformations and show the suitability of this workstation for further studying cellular response to mechanical stresses.
A Bayesian Approach to the Paleomagnetic Conglomerate Test
NASA Astrophysics Data System (ADS)
Heslop, David; Roberts, Andrew P.
2018-02-01
The conglomerate test has served the paleomagnetic community for over 60 years as a means to detect remagnetizations. The test states that if a suite of clasts within a bed have uniformly random paleomagnetic directions, then the conglomerate cannot have experienced a pervasive event that remagnetized the clasts in the same direction. The current form of the conglomerate test is based on null hypothesis testing, which results in a binary "pass" (uniformly random directions) or "fail" (nonrandom directions) outcome. We have recast the conglomerate test in a Bayesian framework with the aim of providing more information concerning the level of support a given data set provides for a hypothesis of uniformly random paleomagnetic directions. Using this approach, we place the conglomerate test in a fully probabilistic framework that allows for inconclusive results when insufficient information is available to draw firm conclusions concerning the randomness or nonrandomness of directions. With our method, sample sets larger than those typically employed in paleomagnetism may be required to achieve strong support for a hypothesis of random directions. Given the potentially detrimental effect of unrecognized remagnetizations on paleomagnetic reconstructions, it is important to provide a means to draw statistically robust data-driven inferences. Our Bayesian analysis provides a means to do this for the conglomerate test.
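To make the contrast with the classical binary test concrete, here is a Monte Carlo sketch of the traditional uniform-randomness test based on the resultant length R of the clast directions, which is the quantity the Bayesian recasting also works from; the paper's Bayes-factor computation itself is not reproduced.

```python
import numpy as np

def resultant_length_pvalue(dirs, n_sim=10000, seed=0):
    """Tail probability of the observed resultant length R under the
    null of uniformly random unit vectors; small p rejects randomness."""
    rng = np.random.default_rng(seed)
    dirs = np.asarray(dirs, dtype=float)          # shape (n, 3), unit vectors
    n = dirs.shape[0]
    R_obs = np.linalg.norm(dirs.sum(axis=0))
    sims = rng.normal(size=(n_sim, n, 3))         # isotropic Gaussian draws,
    sims /= np.linalg.norm(sims, axis=2, keepdims=True)  # normalized to uniform directions
    R_sim = np.linalg.norm(sims.sum(axis=1), axis=1)
    return float(np.mean(R_sim >= R_obs))
```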
Hydrophobicity diversity in globular and nonglobular proteins measured with the Gini index.
Carugo, Oliviero
2017-12-01
Amino acids and their properties are variably distributed in proteins, and different compositions determine all protein features, ranging from solubility to stability and functionality. The Gini index, a tool to estimate distribution uniformity, is widely used in macroeconomics and has numerous statistical applications. Here, the Gini index is used to analyze the distribution of hydrophobicity in proteins and to compare hydrophobicity distribution in globular and intrinsically disordered proteins. Based on the analysis of carefully selected high-quality data sets of proteins extracted from the Protein Data Bank (http://www.rcsb.org) and from the DisProt database (http://www.disprot.org/), it is observed that hydrophobicity is distributed in a more diverse way in intrinsically disordered proteins than in folded and soluble globular proteins. This correlates with the observation that the amino acid composition deviates from uniformity (estimated with the Shannon and the Gini-Simpson indices) more in intrinsically disordered proteins than in globular and soluble proteins. Although statistical tools like the Gini index have received little attention in molecular biology, these results show that they allow one to estimate sequence diversity and that they are useful to delineate trends that can hardly be described otherwise in simple and concise ways.
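The Gini index itself is a one-liner once the values are sorted; a minimal sketch follows, with the caveat that common hydrophobicity scales (e.g., Kyte-Doolittle) include negative values and must be shifted to a non-negative range first, and the paper's exact normalization is not specified here.

```python
import numpy as np

def gini(values):
    """Gini index of a non-negative sample: 0 for a perfectly even
    distribution, approaching 1 for extreme concentration."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    # mean-absolute-difference formulation for sorted data
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))
```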
2008 Post-Election Voting Survey of Federal Civilians Overseas: Statistical Methodology Report
2009-08-01
Westat, Inc. developed weights for this survey. Westat performed data collection and editing. DMDC's Survey Technology Branch, under the guidance... Summary: The Uniformed and Overseas Citizens Absentee Voting Act of 1986 (UOCAVA), 42 USC 1973ff, permits members of the Uniformed Services and... assess the impact of the FVAP's efforts to simplify and ease the process of voting absentee, (3) to evaluate other progress made to facilitate voting
Angeler, David G; Viedma, Olga; Moreno, José M
2009-11-01
Time lag analysis (TLA) is a distance-based approach used to study temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance in comparison with other more direct methods (i.e., canonical ordination) has not been evaluated. This study fills this gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinate of neighborhood matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLAs, Mantel test, and RDA based on linear time trends) using all real data. In addition, the RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulation also revealed that strong emphasis on uniform deterministic change and low variability at other temporal scales is needed to result in decreased statistical power of the RDA-PCNM approach relative to the other methods. Based on the statistical performance of and information content provided by RDA-PCNM models, this technique serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-)communities.
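A minimal sketch of the distance-based TLA step described above, assuming a date-ordered samples-by-species abundance matrix with equally spaced surveys and Bray-Curtis dissimilarity; the RDA-PCNM modeling that the study recommends instead is not shown.

```python
import numpy as np
from scipy import stats
from scipy.spatial.distance import pdist, squareform

def time_lag_analysis(community):
    """Regress pairwise community dissimilarity on the square root of
    the time lag; a significant positive slope suggests directional change."""
    D = squareform(pdist(community, metric="braycurtis"))
    n = D.shape[0]
    lags = [np.sqrt(j - i) for i in range(n) for j in range(i + 1, n)]
    dis = [D[i, j] for i in range(n) for j in range(i + 1, n)]
    return stats.linregress(lags, dis)
```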
Statistics on Blindness in the Model Reporting Area 1969-1970.
ERIC Educational Resources Information Center
Kahn, Harold A.; Moorhead, Helen B.
Presented in the form of 30 tables are statistics on blindness in 16 states which have agreed to uniform definitions and procedures to improve reliability of data regarding blind persons. The data indicates that rates of blindness were generally higher for nonwhites than for whites with the ratio ranging from almost 10 for glaucoma to minimal for…
NASA Astrophysics Data System (ADS)
Ingargiola, Antonino; Assanelli, Mattia; Gallivanoni, Andrea; Rech, Ivan; Ghioni, Massimo; Cova, Sergio
2009-05-01
Improving SPAD performance, such as dark count rate and quantum efficiency, without degrading the photon-timing jitter is a challenging task that requires a clear understanding of the physical mechanisms involved. In this paper we investigate the contribution of the avalanche buildup statistics and the lateral avalanche propagation to the photon-timing jitter in silicon SPAD devices. Recent works on the buildup statistics focused on the uniform electric field case; however, these results cannot be applied to Si SPAD devices in which the field profile is far from constant. We developed a 1-D Monte Carlo (MC) simulator using the real non-uniform field profiles derived from Secondary Ion Mass Spectrometry (SIMS) measurements. Local and non-local models for impact ionization phenomena were considered. The obtained results, in particular the mean multiplication rate and jitter of the buildup filament, allowed us to simulate the statistical spread of the avalanche current over the device active area. We included space charge effects and a detailed lumped model for the external electronics and parasitics. We found that, in agreement with some experimental evidence, the avalanche buildup contribution to the total timing jitter is non-negligible in our devices. Moreover, the lateral propagation gives an additional contribution that can explain the increasing trend of the photon-timing jitter with the comparator threshold.
Pandey, Pinki; Dixit, Alok; Tanwar, Aparna; Sharma, Anuradha; Mittal, Sanjeev
2014-07-01
Our study presents a new deparaffinizing and hematoxylin and eosin (H and E) staining method that involves the use of easily available, nontoxic and eco-friendly liquid diluted dish washing soap (DWS), completely eliminating expensive and hazardous xylene and alcohol from deparaffinization and rehydration prior to staining, from staining itself, and from dehydration prior to mounting. The aim was to evaluate and compare the quality of liquid DWS treated xylene and alcohol free (XAF) sections with that of conventional H and E sections. A total of 100 paraffin embedded tissue blocks from different tissues were included. From each tissue block, one section was stained with conventional H and E (normal sections) and the other with the XAF H and E (soapy sections) staining method. Slides were scored using five parameters: nuclear staining, cytoplasmic staining, clarity, uniformity, and crispness of staining. The Z-test was used for statistical analysis. Soapy sections scored better for cytoplasmic (90%) and crisp staining (95%), with a statistically significant difference. For uniformity of staining, however, normal sections (88%) scored over soapy sections (72%) (Z = 2.82, P < 0.05). For nuclear (90%) and clarity of staining (90%), total scores favored soapy sections, but the difference was not statistically significant. About 84% of normal sections stained adequately for diagnosis, compared with 86% of soapy sections (Z = 0.396, P > 0.05). Liquid DWS is a safe and efficient alternative to xylene and alcohol in deparaffinization and the routine H and E staining procedure. We are documenting this project so that it can be used as a model for other histology laboratories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brady, Samuel L., E-mail: samuel.brady@stjude.org; Shulkin, Barry L.
2015-02-15
Purpose: To develop ultralow dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultralow doses (10–35 mA s). CT quantitation: noise, low-contrast resolution, and CT numbers for 11 tissue substitutes were analyzed in-phantom. CT quantitation was analyzed to a reduction of 90% volume computed tomography dose index (0.39/3.64; mGy) from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative dose reduction and noise control. Results: CT numbers were constant to within 10% of the nondose-reduced CTAC image for 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution for PET images reconstructed with CTAC protocols was found down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3–0.9/6.2). Noise magnitude in dose-reduced patient images increased but was not statistically different from predose-reduced patient images. Conclusions: Using ASiR allowed for aggressive reduction in CT dose with no change in PET reconstructed images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.
Spatial analysis of relative humidity during ungauged periods in a mountainous region
NASA Astrophysics Data System (ADS)
Um, Myoung-Jin; Kim, Yeonjoo
2017-08-01
Although atmospheric humidity influences environmental and agricultural conditions, thereby influencing plant growth, human health, and air pollution, efforts to develop spatial maps of atmospheric humidity using statistical approaches have thus far been limited. This study therefore aims to develop statistical approaches for inferring the spatial distribution of relative humidity (RH) for a mountainous island, for which data are not uniformly available across the region. A multiple regression analysis based on various mathematical models was used to identify the optimal model for estimating monthly RH by incorporating not only temperature but also location and elevation. Based on the regression analysis, we extended the monthly RH data from weather stations to cover the ungauged periods when no RH observations were available. Then, two different types of station-based data, the observational data and the data extended via the regression model, were used to form grid-based data with a resolution of 100 m. The grid-based data that used the extended station-based data captured the increasing RH trend along an elevation gradient. Furthermore, annual RH values averaged over the regions were examined. Decreasing temporal trends were found in most cases, with magnitudes varying based on the season and region.
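A hedged sketch of the station-level step: one candidate linear form for regressing monthly RH on temperature, location, and elevation, fitted by ordinary least squares. The predictor set and the function name here are illustrative assumptions, not the model the study ultimately selected.

```python
import numpy as np

def fit_rh_model(temp, lon, lat, elev, rh):
    """Fit RH ~ intercept + temp + lon + lat + elev and return the
    coefficients; extending RH to ungauged months then amounts to
    evaluating this fit where only temperature is observed."""
    X = np.column_stack([np.ones_like(temp), temp, lon, lat, elev])
    beta, *_ = np.linalg.lstsq(X, rh, rcond=None)
    return beta
```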
Summary and Statistical Analysis of the First AIAA Sonic Boom Prediction Workshop
NASA Technical Reports Server (NTRS)
Park, Michael A.; Morgenstern, John M.
2014-01-01
A summary is provided for the First AIAA Sonic Boom Workshop held 11 January 2014 in conjunction with AIAA SciTech 2014. Near-field pressure signatures extracted from computational fluid dynamics solutions are gathered from nineteen participants representing three countries for the two required cases, an axisymmetric body and a simple delta wing body. Structured multiblock, unstructured mixed-element, unstructured tetrahedral, overset, and Cartesian cut-cell methods are used by the participants. Participants provided signatures computed on participant-generated and solution-adapted grids. Signatures are also provided for a series of uniformly refined workshop-provided grids. These submissions are propagated to the ground and loudness measures are computed. This allows the grid convergence of a loudness measure and a validation metric (difference norm between computed and wind-tunnel-measured near-field signatures) to be studied for the first time. Statistical analysis is also presented for these measures. An optional configuration includes fuselage, wing, tail, flow-through nacelles, and blade sting. This full configuration exhibits more variation in eleven submissions than the sixty submissions provided for each required case. Recommendations are provided for potential improvements to the analysis methods and a possible subsequent workshop.
Murray, Natasha; Jansarikij, Suphachai; Olanratmanee, Phanthip; Maskhao, Pongsri; Souares, Aurélia; Wilder-Smith, Annelies; Kittayapong, Pattamaporn; Louis, Valérie R
2014-01-01
As current dengue control strategies have been shown to be largely ineffective in reducing dengue in school-aged children, novel approaches towards dengue control need to be studied. Insecticide-impregnated school uniforms represent an innovative approach with the theoretical potential to reduce dengue infections in school children. This study took place in the context of a randomised control trial (RCT) to test the effectiveness of permethrin-impregnated school uniforms (ISUs) for dengue prevention in Chachoengsao Province, Thailand. The objective was to assess the acceptability of ISUs among parents, teachers, and principals of school children involved in the trial. Quantitative and qualitative tools were used in a mixed methods approach. Class-clustered randomised samples of school children enrolled in the RCT were selected and their parents completed 321 self-administered questionnaires. Descriptive statistics and logistic regression were used to analyse the quantitative data. Focus group discussions and individual semi-structured interviews were conducted with parents, teachers, and principals. Qualitative data analysis involved content analysis with coding and thematic development. The knowledge and experience of dengue was substantial. The acceptability of ISUs was high. Parents (87.3%; 95% CI 82.9-90.8) would allow their child to wear an ISU and 59.9% (95% CI 53.7-65.9) of parents would incur additional costs for an ISU over a normal uniform. This was significantly associated with the total monthly income of a household and the educational level of the respondent. Parents (62.5%; 95% CI 56.6-68.1) indicated they would be willing to recommend ISUs to other parents. Acceptability of the novel tool of ISUs was high as defined by the lack of concern along with the willingness to pay and recommend. Considering issues of effectiveness and scalability, assessing acceptability of ISUs over time is recommended.
Terranova, Claudio; Zen, Margherita
2018-01-01
National statistics on female homicide could be a useful tool to evaluate the phenomenon and plan adequate strategies to prevent and reduce this crime. The aim of the study is to contribute to the analysis of intentional female homicides in Italy by comparing Italian trends to German and United States trends from 2008 to 2014. This is a population study based on data deriving primarily from national and European statistical institutes, from the U.S. Federal Bureau of Investigation's Uniform Crime Reporting program, and from the National Center for Health Statistics. Data were analyzed in relation to trends and age by chi-square test, Student's t-test, and linear regression. Results show that female homicides, unlike male homicides, remained stable in the three countries. Regression analysis showed a higher risk of female homicide in all age groups in the U.S. Middle-aged women are at higher risk, and the majority of murdered women are killed by people they know. These results confirm previous findings and suggest the need to focus also in Italy on preventive strategies to reduce those precipitating factors linked to violence and present in the course of a relationship or within the family.
A statistical evaluation and comparison of VISSR Atmospheric Sounder (VAS) data
NASA Technical Reports Server (NTRS)
Jedlovec, G. J.
1984-01-01
In order to account for the temporal and spatial discrepancies between the VAS and rawinsonde soundings, the rawinsonde data were adjusted to a common hour of release where the new observation time corresponded to the satellite scan time. Both the satellite and rawinsonde observations of the basic atmospheric parameters (T, Td, and Z) were objectively analyzed to a uniform grid, maintaining the same mesoscale structure in each data set. The performance of each retrieval algorithm in producing accurate and representative soundings was evaluated using statistical parameters such as the mean, standard deviation, and root mean square of the difference fields for each parameter and grid level. Horizontal structure was also qualitatively evaluated by examining atmospheric features on constant pressure surfaces. An analysis of the vertical structure of the atmosphere was also performed by looking at colocated and grid-mean vertical profiles of both the satellite and rawinsonde data sets. Highlights of these results are presented.
Statistical analysis of strait time index and a simple model for trend and trend reversal
NASA Astrophysics Data System (ADS)
Chen, Kan; Jayaprakash, C.
2003-06-01
We analyze the daily closing prices of the Strait Time Index (STI) as well as the individual stocks traded in Singapore's stock market from 1988 to 2001. We find that the Hurst exponent is approximately 0.6 for both the STI and individual stocks, while the normal correlation functions show the random walk exponent of 0.5. We also investigate the conditional average of the price change in an interval of length T given the price change in the previous interval. We find strong correlations for price changes larger than a threshold value proportional to T; this indicates that there is no uniform crossover to Gaussian behavior. A simple model based on short-time trend and trend reversal is constructed. We show that the model exhibits statistical properties and market swings similar to those of the real market.
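A common way to estimate the Hurst exponent reported above is from the scaling of the standard deviation of T-lag log-price changes; the abstract does not specify the authors' exact estimator, so this aggregated-variance sketch is an assumption.

```python
import numpy as np

def hurst_exponent(prices, max_lag=100):
    """std of the T-lag log-price change scales as T**H: H = 0.5 for a
    random walk, H near 0.6 was reported for the STI."""
    logp = np.log(np.asarray(prices, dtype=float))
    lags = np.arange(2, max_lag)
    tau = [np.std(logp[lag:] - logp[:-lag]) for lag in lags]
    H, _ = np.polyfit(np.log(lags), np.log(tau), 1)  # slope of log-log fit
    return H
```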
Fractal Analysis of Drainage Basins on Mars
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Marinova, M. M.; McGovern, P. J.; Clifford, S. M.
2002-01-01
We used statistical properties of drainage networks on Mars as a measure of martian landscape morphology and an indicator of landscape evolution processes. We utilize the Mars Orbiter Laser Altimeter (MOLA) data to construct digital elevation maps (DEMs) of several, mostly ancient, martian terrains. Drainage basins and channel networks are computationally extracted from DEMs and their structures are analyzed and compared to drainage networks extracted from terrestrial and lunar DEMs. We show that martian networks are self-affine statistical fractals with planar properties similar to terrestrial networks, but vertical properties similar to lunar networks. The uniformity of martian drainage density is between those for terrestrial and lunar landscapes. Our results are consistent with the roughening of ancient martian terrains by combination of rainfall-fed erosion and impacts, although roughening by other fluvial processes cannot be excluded. The notion of sustained rainfall in recent Mars history is inconsistent with our findings.
Carbon film coating of abutment surfaces: effect on the abutment screw removal torque.
Corazza, Pedro Henrique; de Moura Silva, Alecsandro; Cavalcanti Queiroz, José Renato; Salazar Marocho, Susana María; Bottino, Marco Antonia; Massi, Marcos; de Assunção e Souza, Rodrigo Othávio
2014-08-01
To evaluate the effect of diamond-like carbon (DLC) coating of prefabricated implant abutments on screw removal torque (RT) before and after mechanical cycling (MC). Fifty-four abutments for external-hex implants were divided among 6 groups (n = 9): S, straight abutment (control); SC, straight coated abutment; SCy, straight abutment and MC; SCCy, straight coated abutment and MC; ACy, angled abutment and MC; and ACCy, angled coated abutment and MC. The abutments were attached to the implants by a titanium screw. RT values were measured and registered. Data (in Newton centimeters) were analyzed with analysis of variance and the Dunnett test (α = 0.05). RT values were significantly affected by MC (P = 0.001) and by the interaction between DLC coating and MC (P = 0.038). SCy and ACy showed the lowest RT values, statistically different from the control. The coated abutment groups had no statistical difference compared with the control. Scanning electron microscopy analysis showed the DLC film, with a thickness of 3 μm, uniformly coating the hexagonal abutment. DLC film deposited on the abutment can be used as an alternative procedure to reduce abutment screw loosening.
Dynamic Response of Layered TiB/Ti Functionally Graded Material Specimens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byrd, Larry; Beberniss, Tim; Chapman, Ben
2008-02-15
This paper covers the dynamic response of rectangular (25.4x101.6x3.175 mm) specimens manufactured from layers of TiB/Ti. The layers contained volume fractions of TiB that varied from 0 to 85% and thus formed a functionally graded material. Witness samples of the 85% TiB material were also tested to provide a baseline for the statistical variability of the test techniques. Static and dynamic tests were performed to determine the in situ material properties and fundamental frequencies. Damping in the material/fixture was also found from the dynamic response. These tests were simulated using composite beam theory, which gave an analytical solution, and using finite element analysis. The response of the 85% TiB specimens was found to be much more uniform than that of the functionally graded material, and the dynamic response more uniform than the static response. A least squares analysis of the data using the analytical solutions was used to determine the elastic modulus and Poisson's ratio of each layer. These results were used to model the response in the finite element analysis. The results indicate that current analytical and numerical methods for modeling the material give similar and adequate predictions for natural frequencies if the measured property values are used. The models did not agree as well if the properties from the manufacturer or those of Hill and Linn were used.
Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems
NASA Technical Reports Server (NTRS)
Holmes, J. K.; Woo, K. T.
1978-01-01
The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
Millimeter wave attenuation prediction using a piecewise uniform rain rate model
NASA Technical Reports Server (NTRS)
Persinger, R. R.; Stutzman, W. L.; Bostian, C. W.; Castle, R. E., Jr.
1980-01-01
A piecewise uniform rain rate distribution model is introduced as a quasi-physical model of real rain along earth-space millimeter wave propagation paths. It permits calculation of the total attenuation from the specific attenuation in a simple fashion. The model predictions are verified by comparison with direct attenuation measurements for several frequencies, elevation angles, and locations. Also, coupled with the Rice-Holmberg rain rate model, attenuation statistics are predicted from rainfall accumulation data.
Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying
2017-11-01
Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well.
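As a sketch of the idea, a ridge GLS discrepancy can be written as ordinary GLS with a regularized weight matrix; whether the paper's estimator regularizes exactly as Gamma + lambda*I is an assumption here, and rho_theta would be the model-implied polychoric correlations from the factor model being fitted.

```python
import numpy as np

def ridge_gls_discrepancy(r, rho_theta, Gamma, lam):
    """F = (r - rho)' (Gamma + lam*I)^(-1) (r - rho), where r holds the
    sample polychoric correlations and Gamma their estimated asymptotic
    covariance; lam = 0 recovers AGLS (an assumed parameterization)."""
    d = np.asarray(r, dtype=float) - np.asarray(rho_theta, dtype=float)
    W = np.linalg.inv(np.asarray(Gamma, dtype=float) + lam * np.eye(d.size))
    return float(d @ W @ d)
```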
Indoor Soiling Method and Outdoor Statistical Risk Analysis of Photovoltaic Power Plants
NASA Astrophysics Data System (ADS)
Rajasekar, Vidyashree
This is a two-part thesis. Part 1 presents an approach for working towards the development of a standardized artificial soiling method for laminated photovoltaic (PV) cells or mini-modules. Construction of an artificial chamber to maintain controlled environmental conditions and components/chemicals used in artificial soil formulation is briefly explained. Both poly-Si mini-modules and a single cell mono-Si coupons were soiled and characterization tests such as I-V, reflectance and quantum efficiency (QE) were carried out on both soiled, and cleaned coupons. From the results obtained, poly-Si mini-modules proved to be a good measure of soil uniformity, as any non-uniformity present would not result in a smooth curve during I-V measurements. The challenges faced while executing reflectance and QE characterization tests on poly-Si due to smaller size cells was eliminated on the mono-Si coupons with large cells to obtain highly repeatable measurements. This study indicates that the reflectance measurements between 600-700 nm wavelengths can be used as a direct measure of soil density on the modules. Part 2 determines the most dominant failure modes of field aged PV modules using experimental data obtained in the field and statistical analysis, FMECA (Failure Mode, Effect, and Criticality Analysis). The failure and degradation modes of about 744 poly-Si glass/polymer frameless modules fielded for 18 years under the cold-dry climate of New York was evaluated. Defect chart, degradation rates (both string and module levels) and safety map were generated using the field measured data. A statistical reliability tool, FMECA that uses Risk Priority Number (RPN) is used to determine the dominant failure or degradation modes in the strings and modules by means of ranking and prioritizing the modes. This study on PV power plants considers all the failure and degradation modes from both safety and performance perspectives. The indoor and outdoor soiling studies were jointly performed by two Masters Students, Sravanthi Boppana and Vidyashree Rajasekar. This thesis presents the indoor soiling study, whereas the other thesis presents the outdoor soiling study. Similarly, the statistical risk analyses of two power plants (model J and model JVA) were jointly performed by these two Masters students. Both power plants are located at the same cold-dry climate, but one power plant carries framed modules and the other carries frameless modules. This thesis presents the results obtained on the frameless modules.
What is too much variation? The null hypothesis in small-area analysis.
Diehr, P; Cain, K; Connell, F; Volinn, E
1990-01-01
A small-area analysis (SAA) in health services research often calculates surgery rates for several small areas, compares the largest rate to the smallest, notes that the difference is large, and attempts to explain this discrepancy as a function of service availability, physician practice styles, or other factors. SAAs are often difficult to interpret because there is little theoretical basis for determining how much variation would be expected under the null hypothesis that all of the small areas have similar underlying surgery rates and that the observed variation is due to chance. We developed a computer program to simulate the distribution of several commonly used descriptive statistics under the null hypothesis, and used it to examine the variability in rates among the counties of the state of Washington. The expected variability when the null hypothesis is true is surprisingly large, and becomes worse for procedures with low incidence, for smaller populations, when there is variability among the populations of the counties, and when readmissions are possible. The characteristics of four descriptive statistics were studied and compared. None was uniformly good, but the chi-square statistic had better performance than the others. When we reanalyzed five journal articles that presented sufficient data, the results were usually statistically significant. Since SAA research today is tending to deal with low-incidence events, smaller populations, and measures where readmissions are possible, more research is needed on the distribution of small-area statistics under the null hypothesis. New standards are proposed for the presentation of SAA results. PMID:2312306
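The null simulation is easy to reproduce in outline; a minimal sketch under the simplest version of the null (one shared rate, independent Poisson counts, no readmissions), which if anything understates the variability the authors describe.

```python
import numpy as np

def null_extremal_ratios(pops, rate, n_sim=10000, seed=0):
    """Simulated distribution of the largest-to-smallest area rate ratio
    when every area truly shares the same underlying rate."""
    rng = np.random.default_rng(seed)
    pops = np.asarray(pops, dtype=float)
    counts = rng.poisson(pops * rate, size=(n_sim, pops.size))
    rates = counts / pops
    rates[rates == 0] = np.nan        # ignore areas with zero observed events
    return np.nanmax(rates, axis=1) / np.nanmin(rates, axis=1)
```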
Report to the Nation on Crime and Justice and Technical Appendix. Second Edition.
ERIC Educational Resources Information Center
Department of Justice, Washington, DC. Bureau of Justice Statistics.
This report on crime and justice aims to present statistical information in a format that can be easily understood by a nontechnical audience. It uses graphics and a nontechnical format to bring together data from the Bureau of Justice Statistics, the Federal Bureau of Investigation Uniform Crime Reports, the Bureau of the Census, the National…
PIVOT: platform for interactive analysis and visualization of transcriptomics data.
Zhu, Qin; Fisher, Stephen A; Dueck, Hannah; Middleton, Sarah; Khaladkar, Mugdha; Kim, Junhyong
2018-01-05
Many R packages have been developed for transcriptome analysis but their use often requires familiarity with R and integrating results of different packages requires scripts to wrangle the datatypes. Furthermore, exploratory data analyses often generate multiple derived datasets such as data subsets or data transformations, which can be difficult to track. Here we present PIVOT, an R-based platform that wraps open source transcriptome analysis packages with a uniform user interface and graphical data management that allows non-programmers to interactively explore transcriptomics data. PIVOT supports more than 40 popular open source packages for transcriptome analysis and provides an extensive set of tools for statistical data manipulations. A graph-based visual interface is used to represent the links between derived datasets, allowing easy tracking of data versions. PIVOT further supports automatic report generation, publication-quality plots, and program/data state saving, such that all analysis can be saved, shared and reproduced. PIVOT will allow researchers with broad background to easily access sophisticated transcriptome analysis tools and interactively explore transcriptome datasets.
Desta, Etaferahu Alamaw; Gebrie, Mignote Hailu; Dachew, Berihun Assefa
2015-01-01
Wearing uniforms helps in the formation of professional identity in healthcare. It fosters a strong self-image and professional identity, which can lead to good confidence and better performance in nursing practice. However, most nurses in Ethiopia do not wear nursing uniforms, and the reasons remain unclear. Therefore, the aim of this research is to assess nurse uniform wearing practices among nurses, and factors associated with such practice, in hospitals in Northwest Ethiopia. A hospital-based cross-sectional study was conducted from March to April 2014 in five hospitals located in Northwest Ethiopia. A total of 459 nurses participated in the study. Data were collected using a pre-tested self-administered questionnaire. Descriptive statistics were analyzed in order to characterize the study population. Bivariate and multiple logistic regression models were fitted. Odds ratios with 95% confidence intervals were computed to identify factors associated with nurse uniform practice. Nurse uniform wearing was practiced by 49.2% of respondents. Around 35% of the respondents who did not wear nurse uniforms stated that there was no specific uniform for nurses recommended by hospital management. In addition, nurse uniform wearing practices were positively associated with being female [AOR = 1.58, 95% CI (1.02, 2.44)], studying nursing by choice [AOR = 3.16, 95% CI (2.03, 4.92)], and the appeal of nursing uniforms to nurses [AOR = 3.43, 95% CI (1.96, 5.98)]. Nurse uniform wearing practices were not exceptionally prevalent in Northwest Ethiopian hospitals. However, encouraging students to pursue interest-based careers and implementing a nurse uniform wearing policy may have the potential to improve such practices.
Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling
Wood, John
2017-01-01
Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered—some very seriously so—but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. PMID:28706080
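A minimal sketch of the mixture-modeling step, assuming the per-study power estimates are collected in an array; selecting the number of components by BIC is an illustrative choice rather than necessarily the criterion used in the reanalysis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_power_mixture(power_vals, max_k=5):
    """Fit Gaussian mixtures with 1..max_k components to the power
    distribution and return the BIC-preferred model, whose means and
    weights describe the subcomponents."""
    X = np.asarray(power_vals, dtype=float).reshape(-1, 1)
    fits = [GaussianMixture(n_components=k, random_state=0).fit(X)
            for k in range(1, max_k + 1)]
    return min(fits, key=lambda m: m.bic(X))
```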
Efficiency degradation due to tracking errors for point focusing solar collectors
NASA Technical Reports Server (NTRS)
Hughes, R. O.
1978-01-01
An important parameter in the design of point-focusing solar collectors is the intercept factor, which is a measure of efficiency and of the energy available for use in the receiver. Using statistical methods, an expression for the expected value of the intercept factor is derived for various configurations and control law implementations. The analysis assumes that a radially symmetric flux distribution (not necessarily Gaussian) is generated at the focal plane due to the sun's finite image and various reflector errors. The time-varying tracking errors are assumed to be uniformly distributed within the threshold limits, which allows the expected value to be calculated.
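A Monte Carlo sketch of the expected intercept factor under the stated assumptions (radially symmetric focal-plane flux, tracking errors uniform within the threshold); sample_flux_radius is a hypothetical callable drawing radii from the flux model, and the paper's closed-form derivation is not reproduced.

```python
import numpy as np

def expected_intercept(sample_flux_radius, r_ap, delta_max, n=200000, seed=0):
    """Fraction of flux landing inside an aperture of radius r_ap when the
    flux centroid wanders uniformly within +/- delta_max on each axis."""
    rng = np.random.default_rng(seed)
    r = sample_flux_radius(n, rng)                  # radii from the flux model (hypothetical)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    dx = rng.uniform(-delta_max, delta_max, n)      # tracking error, x axis
    dy = rng.uniform(-delta_max, delta_max, n)      # tracking error, y axis
    x = r * np.cos(phi) + dx
    y = r * np.sin(phi) + dy
    return float(np.mean(x**2 + y**2 <= r_ap**2))
```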
Wafer-scale solution-derived molecular gate dielectrics for low-voltage graphene electronics
NASA Astrophysics Data System (ADS)
Sangwan, Vinod K.; Jariwala, Deep; Everaerts, Ken; McMorrow, Julian J.; He, Jianting; Grayson, Matthew; Lauhon, Lincoln J.; Marks, Tobin J.; Hersam, Mark C.
2014-02-01
Graphene field-effect transistors are integrated with solution-processed multilayer hybrid organic-inorganic self-assembled nanodielectrics (SANDs). The resulting devices exhibit low-operating voltage (2 V), negligible hysteresis, current saturation with intrinsic gain >1.0 in vacuum (pressure < 2 × 10-5 Torr), and overall improved performance compared to control devices on conventional SiO2 gate dielectrics. Statistical analysis of the field-effect mobility and residual carrier concentration demonstrate high spatial uniformity of the dielectric interfacial properties and graphene transistor characteristics over full 3 in. wafers. This work thus establishes SANDs as an effective platform for large-area, high-performance graphene electronics.
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Xu, J.; Sisniega, A.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.
2016-03-01
Intracranial hemorrhage (ICH) is associated with pathologies such as hemorrhagic stroke and traumatic brain injury. Multi-detector CT is the current front-line imaging modality for detecting ICH (fresh blood contrast 40-80 HU, down to 1 mm). Flat-panel detector (FPD) cone-beam CT (CBCT) offers a potential alternative with a smaller scanner footprint, greater portability, and lower cost, potentially well suited to deployment at the point of care outside standard diagnostic radiology and emergency room settings. Previous studies have suggested reliable detection of ICH down to 3 mm in CBCT using high-fidelity artifact correction and penalized weighted least-squares (PWLS) image reconstruction with a post-artifact-correction noise model. However, ICH reconstructed by traditional image regularization exhibits nonuniform spatial resolution and noise due to interaction between the statistical weights and regularization, which potentially degrades the detectability of ICH. In this work, we propose three regularization methods designed to overcome these challenges. The first two compute spatially varying certainty for uniform spatial resolution and noise, respectively. The third computes spatially varying regularization strength to achieve uniform "detectability," combining both spatial resolution and noise in a manner analogous to a delta-function detection task. Experiments were conducted on a CBCT test-bench, and image quality was evaluated for simulated ICH in different regions of an anthropomorphic head. The first two methods improved the uniformity in spatial resolution and noise compared to traditional regularization. The third exhibited the highest uniformity in detectability among all methods and the best overall image quality. The proposed regularization provides a valuable means to achieve uniform image quality in CBCT of ICH and is being incorporated in a CBCT prototype for ICH imaging.
Statistics of indicated pressure in combustion engine.
NASA Astrophysics Data System (ADS)
Sitnik, L. J.; Andrych-Zalewska, M.
2016-09-01
The paper presents the classic form of pressure waveforms in the combustion chamber of a diesel engine, placed on a strict analytical basis by correcting for the displacement volume. The pressure measurements were obtained with the engine running on an engine dynamometer stand. The study was conducted using the 13-phase ESC test (European Stationary Cycle). In each test phase, 90 pressure waveforms were archived. Extensive statistical analysis showed that, while the engine is idling, the distribution of the 90 pressure values at any given crankshaft angle can be described by a uniform distribution. At each point of the engine characteristic corresponding to the individual phases of the ESC test, the 90 pressure values at any crankshaft angle can be described by a normal distribution. These relationships were verified using the Shapiro-Wilk, Jarque-Bera, Lilliefors, and Anderson-Darling tests. Descriptive statistics of the pressure data were then obtained for each value of the crank angle. In essence, this yields a new way to approach pressure waveform analysis in the combustion chamber of an engine. The new method can be used for further analysis, especially of the combustion process in the engine. For example, very large pressure variances were found near the transition from the compression to the expansion stroke. This lack of stationarity of the process can be important both for exhaust gas emissions and for the fuel consumption of the engine.
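The battery of normality tests mentioned is available off the shelf; a minimal Python sketch (with simulated pressure values standing in for the archived waveforms) might look like:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

rng = np.random.default_rng(1)
pressures = rng.normal(65.0, 1.2, 90)  # 90 pressure values at one crank angle (bar)

print("Shapiro-Wilk:     p = %.3f" % stats.shapiro(pressures)[1])
print("Jarque-Bera:      p = %.3f" % stats.jarque_bera(pressures)[1])
print("Lilliefors:       p = %.3f" % lilliefors(pressures, dist="norm")[1])
print("Anderson-Darling: stat = %.3f" % stats.anderson(pressures, dist="norm").statistic)
```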
Cooper, Emily A.; Norcia, Anthony M.
2015-01-01
The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features, such as visual contrast, spatial scale, and depth, differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624
pcr: an R package for quality assessment, analysis and testing of qPCR data
Ahmed, Mahmoud
2018-01-01
Background Real-time quantitative PCR (qPCR) is a broadly used technique in biomedical research. Currently, a few different analysis models are used to determine the quality of data and to quantify the mRNA level across experimental conditions. Methods We developed an R package to implement methods for quality assessment, analysis and testing of qPCR data for statistical significance. Double Delta CT and standard curve models were implemented to quantify the relative expression of target genes from CT values in standard qPCR control-group experiments. In addition, calculation of amplification efficiency and curves from serial dilution qPCR experiments is used to assess the quality of the data. Finally, two-group testing and linear models were used to test for significance of the difference in expression between control groups and conditions of interest. Results Using two datasets from qPCR experiments, we applied the different quality assessment, analysis and statistical testing methods in the pcr package and compared the results to the original published articles. The final relative expression values from the different models, as well as the intermediary outputs, were checked against the expected results in the original papers and were found to be accurate and reliable. Conclusion The pcr package provides an intuitive and unified interface for its main functions to allow biologists to perform all necessary steps of qPCR analysis and produce graphs in a uniform way. PMID:29576953
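The pcr package itself is written in R; purely as an illustration of the double delta CT model it implements, here is a minimal Python sketch with toy CT values, assuming roughly 100% amplification efficiency:

```python
import numpy as np

def double_delta_ct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the double delta CT (Livak) model.

    ct_target / ct_ref : CT values of target and reference gene, treatment group
    *_ctrl             : corresponding CT values in the control (calibrator) group
    Assumes ~100% amplification efficiency (template doubles each cycle).
    """
    delta_ct_treat = np.mean(ct_target) - np.mean(ct_ref)
    delta_ct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    delta_delta_ct = delta_ct_treat - delta_ct_ctrl
    return 2.0 ** (-delta_delta_ct)

# Toy CT values: target gene induced ~4-fold relative to the control group.
print(double_delta_ct([25.1, 25.3], [20.0, 20.1], [27.2, 27.0], [20.1, 19.9]))
```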
Galili, Tal; Meilijson, Isaac
2016-01-02
The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated.
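The paper's counterexample concerns a uniform family whose minimal sufficient statistic is not complete; the sketch below instead illustrates the basic Rao-Blackwell mechanism on the textbook complete case U(0, θ), where conditioning the crude unbiased estimator 2X₁ on the sample maximum yields the UMVUE:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 10.0, 5, 100000

x = rng.uniform(0.0, theta, size=(reps, n))

crude = 2.0 * x[:, 0]                 # unbiased but noisy: E[2*X1] = theta
m = x.max(axis=1)                     # minimal sufficient (and here complete) statistic
rao_blackwell = (n + 1) / n * m       # E[2*X1 | max] = (n+1)/n * max, the UMVUE

for name, est in [("crude 2*X1", crude), ("Rao-Blackwellized", rao_blackwell)]:
    print(f"{name:18s} mean={est.mean():6.3f}  var={est.var():7.3f}")
```

Both estimators have mean close to θ = 10, but the Rao-Blackwellized version has drastically smaller variance, which is the improvement the theorem guarantees.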
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Agueros, M. A.; Fournier, A.; Street, R.; Ofek, E.; Levitan, D. B.; PTF Collaboration
2013-01-01
Many current photometric, time-domain surveys are driven by specific goals such as searches for supernovae or transiting exoplanets, or studies of stellar variability. These goals in turn set the cadence with which individual fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several such sub-surveys are being conducted in parallel, leading to extremely non-uniform sampling over the survey's nearly 20,000 sq. deg. footprint. While the typical 7.26 sq. deg. PTF field has been imaged 20 times in R-band, ~2300 sq. deg. have been observed more than 100 times. We use the existing PTF data (6.4 × 10^7 light curves) to study the trade-off that occurs when searching for microlensing events when one has access to a large survey footprint with irregular sampling. To examine the probability that microlensing events can be recovered in these data, we also test previous statistics used on uniformly sampled data to identify variables and transients. We find that one such statistic, the von Neumann ratio, performs best for identifying simulated microlensing events. We develop a selection method using this statistic and apply it to data from all PTF fields with >100 observations to uncover a number of interesting candidate events. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large datasets, both of which will be useful to future wide-field, time-domain surveys such as the LSST.
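The von Neumann ratio is simple to compute; a sketch with a crudely simulated light curve (not PTF data) shows why smooth excursions such as microlensing bumps drive the statistic well below its white-noise value of about 2:

```python
import numpy as np

def von_neumann_ratio(mag):
    """Ratio of the mean-square successive difference to the variance.

    ~2 for uncorrelated noise; values well below 2 flag smooth,
    correlated variations such as a microlensing bump.
    """
    mag = np.asarray(mag, dtype=float)
    return np.mean(np.diff(mag) ** 2) / np.var(mag)

rng = np.random.default_rng(3)
t = np.linspace(-1.0, 1.0, 200)
noise = rng.normal(0.0, 0.05, t.size)
bump = -0.3 * np.exp(-t**2 / 0.02)    # crude stand-in for a lensing magnification event

print("flat light curve :", von_neumann_ratio(noise))          # ~2
print("with event       :", von_neumann_ratio(noise + bump))   # << 2
```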
Qin, Zong; Wang, Kai; Chen, Fei; Luo, Xiaobing; Liu, Sheng
2010-08-02
In this research, the condition for uniform lighting generated by an array of LEDs with a large view angle was studied. The luminous intensity distribution of such an LED is not monotonically decreasing with view angle. An LED with a freeform lens was designed as an example for analysis. In a system based on the LEDs designed in house, with a thickness of 20 mm and a rectangular arrangement, the condition for uniform lighting was derived, and the analytical results demonstrated that the uniformity does not decrease monotonically with increasing LED-to-LED spacing. The illuminance uniformities were calculated with Monte Carlo ray-tracing simulations, and the uniformity was found, anomalously, to increase with increasing spacing at certain LED-to-LED spacings. Another type of large-view-angle LED and different arrangements were discussed in addition. Both analysis and simulation results showed that the method is suitable for the design of lighting systems based on arrays of large-view-angle LEDs.
2008 Post-Election Voting Survey of Overseas Citizens: Statistical Methodology Report
2009-08-01
Gorsak. Westat performed data collection and editing. DMDC's Survey Technology Branch, under the guidance of Frederick Licari, Branch Chief, is... POST-ELECTION VOTING SURVEY OF OVERSEAS CITIZENS: STATISTICAL METHODOLOGY REPORT Executive Summary The Uniformed and Overseas Citizens Absentee... ease the process of voting absentee, (3) to evaluate other progress made to facilitate voting participation, and (4) to identify any remaining...
Zooming in on vibronic structure by lowest-value projection reconstructed 4D coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, Elad
2018-05-01
A fundamental goal of chemical physics is an understanding of microscopic interactions in liquids at and away from equilibrium. In principle, this microscopic information is accessible by high-order and high-dimensionality nonlinear optical measurements. Unfortunately, the time required to execute such experiments increases exponentially with the dimensionality, while the signal decreases exponentially with the order of the nonlinearity. Recently, we demonstrated a non-uniform acquisition method based on radial sampling of the time-domain signal [W. O. Hutson et al., J. Phys. Chem. Lett. 9, 1034 (2018)]. The four-dimensional spectrum was then reconstructed by filtered back-projection using an inverse Radon transform. Here, we demonstrate an alternative reconstruction method based on the statistical analysis of different back-projected spectra which results in a dramatic increase in sensitivity and at least a 100-fold increase in dynamic range compared to conventional uniform sampling and Fourier reconstruction. These results demonstrate that alternative sampling and reconstruction methods enable applications of increasingly high-order and high-dimensionality methods toward deeper insights into the vibronic structure of liquids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Subimal; Das, Debasish; Kao, Shih-Chieh
Recent studies disagree on how rainfall extremes over India have changed in space and time over the past half century, as well as on whether the changes observed are due to global warming or regional urbanization. Although a uniform and consistent decrease in moderate rainfall has been reported, a lack of agreement about trends in heavy rainfall may be due in part to differences in the characterization and spatial averaging of extremes. Here we use extreme value theory to examine trends in Indian rainfall over the past half century in the context of long-term, low-frequency variability. We show that when generalized extreme value theory is applied to annual maximum rainfall over India, no statistically significant spatially uniform trends are observed, in agreement with previous studies using different approaches. Furthermore, our space-time regression analysis of the return levels points to increasing spatial variability of rainfall extremes over India. Our findings highlight the need for systematic examination of global versus regional drivers of trends in Indian rainfall extremes, and may help to inform flood hazard preparedness and water resource management in the region.
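As a hedged illustration of the approach (synthetic annual maxima, not the Indian rainfall data), fitting a generalized extreme value distribution and deriving a return level with scipy might look like:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Stand-in for ~55 years of annual-maximum daily rainfall at one location (mm).
# Note: scipy's shape parameter c is the negative of the usual GEV shape xi.
annual_max = stats.genextreme.rvs(c=-0.1, loc=80, scale=25, size=55, random_state=rng)

shape, loc, scale = stats.genextreme.fit(annual_max)

# 50-year return level: the value exceeded with probability 1/50 in any year.
rl_50 = stats.genextreme.ppf(1 - 1 / 50, shape, loc=loc, scale=scale)
print(f"shape={shape:.3f} loc={loc:.1f} scale={scale:.1f}  50-yr return level={rl_50:.1f} mm")
```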
Dosage variability of topical ocular hypotensive products: a densitometric assessment.
Gaynes, Bruce I; Singa, Ramesh M; Cao, Ying
2009-02-01
To ascertain the consequences of variability in drop volume obtained from multiuse topical ocular hypotensive products for the uniformity of product dosage. Densitometric assessment of drop volume dispensed from 2 alternative bottle positions. All except one product demonstrated a statistically significant difference in drop volume when administered at either a 45-degree or 90-degree bottle angle (Student t test, P<0.001). Product-specific drop volume ranged from a nadir of 22.36 microL to a high of 53.54 microL depending on the bottle angle of administration. Deviation in drop dose was directly proportional to variability in drop volume. Variability in per-drop dosage was conspicuous among products, with a coefficient of variation from 1.49% to 15.91%. In accordance with drop volume, all products demonstrated a statistically significant difference in drop dose at 45-degree versus 90-degree administration angles. Drop volume was found to be unrelated to drop uniformity (Spearman r=0.01987 and P=0.9463). Variability and lack of uniformity in drop dosage is clearly evident among select ocular hypotensive products and is related to the angle of drop administration. Erratic dosage of topical ocular hypotensive therapy may contribute in part to therapeutic failure and/or toxicity.
NASA Astrophysics Data System (ADS)
Singh, Jitendra; Sekharan, Sheeba; Karmakar, Subhankar; Ghosh, Subimal; Zope, P. E.; Eldho, T. I.
2017-04-01
Mumbai, the commercial and financial capital of India, experiences incessant annual rain episodes, mainly attributable to erratic rainfall patterns during monsoons and the urban heat-island effect of escalating urbanization, leading to increasing vulnerability to frequent flooding. After the infamous episode of the 2005 Mumbai torrential rains, when only two rain gauging stations existed, the governing civic body, the Municipal Corporation of Greater Mumbai (MCGM), came forward with an initiative to install 26 automatic weather stations (AWS) in June 2006 (MCGM 2007), later increased to 60 AWS. A comprehensive statistical analysis to understand the spatio-temporal pattern of rainfall over Mumbai or any other coastal city in India had never been attempted before. In the current study, a thorough analysis of available rainfall data for 2006-2014 from these stations was performed; the 2013-2014 sub-hourly data from 26 AWS were found useful for further analyses due to their consistency and continuity. The correlogram cloud indicated no pattern of significant correlation from the closest to the farthest gauging station from the base station; this impression was also supported by the semivariogram plots. Gini index values, a statistical measure of temporal non-uniformity, were found to be above 0.8 in a visible majority of stations and showed an increasing trend at most gauging stations; this led us to conclude that inconsistency in daily rainfall gradually increases as the monsoon progresses. Interestingly, night rainfall was less than daytime rainfall. The pattern-less, high spatio-temporal variation observed in the Mumbai rainfall data signifies the futility of independently applying advanced statistical techniques, and thus calls for the simultaneous inclusion of physics-centred models such as meso-scale numerical weather prediction systems, particularly the Weather Research and Forecasting (WRF) model.
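The Gini index used here as a measure of temporal non-uniformity can be computed directly from daily totals; a sketch with synthetic rainfall series (0 for perfectly even rain over the period, approaching 1 when a few days dominate):

```python
import numpy as np

def gini(x):
    """Gini index of non-negative values via the sorted-rank identity:
    G = sum_i (2i - n - 1) x_(i) / (n * sum(x))."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

rng = np.random.default_rng(5)
uniform_rain = np.full(120, 15.0)              # same rain every monsoon day -> G = 0
bursty_rain = rng.pareto(1.5, 120) * 5.0       # a few extreme days dominate -> G near 1
print(gini(uniform_rain), gini(bursty_rain))
```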
Optimized multisectioned acoustic liners
NASA Technical Reports Server (NTRS)
Baumeister, K. J.
1979-01-01
New calculations show that segmenting is most efficient at high frequencies with relatively long duct lengths where the attenuation is low for both uniform and segmented liners. Statistical considerations indicate little advantage in using optimized liners with more than two segments while the bandwidth of an optimized two-segment liner is shown to be nearly equal to that of a uniform liner. Multielement liner calculations show a large degradation in performance due to changes in assumed input modal structure. Computer programs are used to generate theoretical attenuations for a number of liner configurations for liners in a rectangular duct with no mean flow. Overall, the use of optimized multisectioned liners fails to offer sufficient advantage over a uniform liner to warrant their use except in low frequency single mode application.
Understanding baseball team standings and streaks
NASA Astrophysics Data System (ADS)
Sire, C.; Redner, S.
2009-02-01
Can one understand the statistics of wins and losses of baseball teams? Are their consecutive-game winning and losing streaks self-reinforcing or can they be described statistically? We apply the Bradley-Terry model, which incorporates the heterogeneity of team strengths in a minimalist way, to answer these questions. Excellent agreement is found between the predictions of the Bradley-Terry model and the rank dependence of the average number of team wins and losses in major-league baseball over the past century when the distribution of team strengths is taken to be uniformly distributed over a finite range. Using this uniform strength distribution, we also find very good agreement between model predictions and the observed distribution of consecutive-game team winning and losing streaks over the last half-century; however, the agreement is less good for the previous half-century. The behavior of the last half-century supports the hypothesis that long streaks are primarily statistical in origin with little self-reinforcing component. The data further show that the past half-century of baseball has been more competitive than the preceding half-century.
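A minimal simulation of the model (hypothetical strength range and season length, random pairings rather than a real schedule) reproduces the qualitative rank dependence of wins:

```python
import numpy as np

rng = np.random.default_rng(6)
n_teams, games_each = 30, 162

# Team strengths drawn uniformly over a finite range, as in the fitted model.
strength = rng.uniform(0.3, 0.7, n_teams)

wins = np.zeros(n_teams, dtype=int)
for _ in range(games_each):                  # one game per team per round
    order = rng.permutation(n_teams)         # crude round of random pairings
    for i, j in zip(order[::2], order[1::2]):
        # Bradley-Terry: P(team i beats team j) = x_i / (x_i + x_j)
        if rng.random() < strength[i] / (strength[i] + strength[j]):
            wins[i] += 1
        else:
            wins[j] += 1

for rank, idx in enumerate(np.argsort(-wins)[:5], 1):
    print(f"rank {rank}: strength={strength[idx]:.2f} wins={wins[idx]}")
```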
Daylighting Makes a Difference.
ERIC Educational Resources Information Center
Heschong, Lisa; Knecht, Carey
2002-01-01
Examined the role of daylight in student achievement in three schools and found a uniformly positive and statistically significant correlation between the presence of more daylight and better student test scores. Offers guidelines on designing daylit classrooms. (EV)
1994 summary : public transportation systems in Washington state
DOT National Transportation Integrated Search
1995-08-01
The Washington State Department of Transportation (WSDOT) prepares the annual : transit statistical summary. The intent for this summary is to provide uniform : data to transit providers, the Legislative Transportation Committee, and local : and regi...
NASA Astrophysics Data System (ADS)
Donovan, J.; Jordan, T. H.
2012-12-01
Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).
Statistical analysis to assess automated level of suspicion scoring methods in breast ultrasound
NASA Astrophysics Data System (ADS)
Galperin, Michael
2003-05-01
A well-defined rule-based system has been developed for scoring the Level of Suspicion (LOS) from 0 to 5 based on a qualitative lexicon describing the ultrasound appearance of breast lesions. The purpose of the research is to assess and select one of the automated quantitative LOS scoring methods developed during preliminary studies on reducing benign biopsies. The study used a Computer Aided Imaging System (CAIS) to improve the uniformity and accuracy of applying the LOS scheme by automatically detecting, analyzing and comparing breast masses. The overall goal is to reduce biopsies on masses with lower levels of suspicion, rather than to increase the accuracy of diagnosis of cancers (which will require biopsy anyway). On complex cysts and fibroadenoma cases, experienced radiologists were up to 50% less certain in true negatives than CAIS. Full correlation analysis was applied to determine which of the proposed LOS quantification methods serves CAIS accuracy best. This paper presents current results of applying statistical analysis to automated LOS scoring quantification for breast masses with known biopsy results. It was found that the First Order Ranking method yielded the most accurate results. The CAIS system (Image Companion, Data Companion software) was developed by Almen Laboratories and was used to achieve these results.
Quantitative assessment of human body shape using Fourier analysis
NASA Astrophysics Data System (ADS)
Friess, Martin; Rohlf, F. J.; Hsiao, Hongwei
2004-04-01
Fall protection harnesses are commonly used to reduce the number and severity of injuries. Increasing the efficiency of harness design requires the size and shape variation of the user population to be assessed in as much detail and as accurately as possible. In light of the unsatisfactory performance of traditional anthropometry with respect to such assessments, we propose the use of 3D laser surface scans of whole bodies and the statistical analysis of elliptic Fourier coefficients. Ninety-eight male and female adults were scanned. Key features of each torso were extracted as a 3D curve along the front, the back and the thighs. A 3D extension of elliptic Fourier analysis was used to quantify their shape through multivariate statistics. Shape change as a function of size (allometry) was predicted by regressing the coefficients onto stature, weight and hip circumference. Upper and lower limits of torso shape variation were determined and can be used to refine the design of the harness so that it fits most individual body shapes. Observed allometric changes are used for adjustments to the harness shape in each size. Finally, the estimated outline data were used as templates for a free-form deformation of the complete torso surface using NURBS models (non-uniform rational B-splines).
Revazov, A A; Pasekov, V P; Lukasheva, I D
1975-01-01
The paper deals with the distribution of genetic markers (the ABO, MN, Rh (D), Hp and PTC systems) and a number of anthropogenetic characters (arm folding, hand clasping, tongue rolling, right- and left-handedness, ear lobe type, and dermatoglyphic pattern types) in the inhabitants of 6 villages in the Mezen District of the Archangelsk Region of the RSFSR (river Peosa basin). The data presented in this work were obtained in the course of examination of over 800 persons. Differences in the interpretation of the results of generally adopted methods of statistical analysis of samples from small populations are discussed. Among the systems analysed, in one third of all the cases there was a statistically significant deviation from Hardy-Weinberg ratios. For the MN blood groups and haptoglobins this was caused by an excess of heterozygotes. The test of Hardy-Weinberg ratios at the level of two-locus phenotypes revealed no statistically significant deviations either in separate villages or in all the villages taken together. The analysis of heterogeneity with respect to markers inherited according to Mendel's laws revealed statistically significant differences between villages in all the systems except haptoglobins. There was considerable heterogeneity in the distribution of family names, the frequencies of some of them varying from village to village from 0 to 90%. Statistically significant differences between villages were shown for all the anthropogenetic characters except arm folding, hand clasping and right- and left-handedness. Considering the uniformity of the environmental pressure in the region examined, the heterogeneity of the population studied is apparently associated with random genetic differentiation (genetic drift) and, possibly, with a founder effect.
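The Hardy-Weinberg check reported here is a standard chi-square goodness-of-fit test; a sketch with toy genotype counts showing the kind of heterozygote excess described:

```python
from scipy import stats

def hardy_weinberg_chi2(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit of genotype counts against Hardy-Weinberg
    proportions (1 df after estimating the allele frequency from the data)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # estimated frequency of allele A
    expected = [n * p**2, n * 2 * p * (1 - p), n * (1 - p)**2]
    return stats.chisquare([n_aa, n_ab, n_bb], expected, ddof=1)

# Toy counts with a heterozygote excess (expected under HW: 200 / 400 / 200).
print(hardy_weinberg_chi2(n_aa=150, n_ab=500, n_bb=150))
```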
Forecasting infectious disease emergence subject to seasonal forcing.
Miller, Paige B; O'Dea, Eamon B; Rohani, Pejman; Drake, John M
2017-09-06
Despite high vaccination coverage, many childhood infections pose a growing threat to human populations. Accurate disease forecasting would be of tremendous value to public health. Forecasting disease emergence using early warning signals (EWS) is possible in non-seasonal models of infectious diseases. Here, we assessed whether EWS also anticipate disease emergence in seasonal models. We simulated the dynamics of an immunizing infectious pathogen approaching the tipping point to disease endemicity. To explore the effect of seasonality on the reliability of early warning statistics, we varied the amplitude of fluctuations around the average transmission. We proposed and analyzed two new early warning signals based on the wavelet spectrum. We measured the reliability of the early warning signals depending on the strength of their trend preceding the tipping point and then calculated the Area Under the Curve (AUC) statistic. Early warning signals were reliable when disease transmission was subject to seasonal forcing. Wavelet-based early warning signals were as reliable as other conventional early warning signals. We found that removing seasonal trends, prior to analysis, did not improve early warning statistics uniformly. Early warning signals anticipate the onset of critical transitions for infectious diseases which are subject to seasonal forcing. Wavelet-based early warning statistics can also be used to forecast infectious disease.
ERIC Educational Resources Information Center
Hoover, H. D.; Plake, Barbara
The relative power of the Mann-Whitney statistic, the t-statistic, the median test, a test based on exceedances (A,B), and two special cases of (A,B), the Tukey quick test and the revised Tukey quick test, was investigated via a Monte Carlo experiment; a sketch of such a comparison follows below. These procedures were compared across four population probability models: uniform, beta, normal,…
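A Monte Carlo power comparison of this kind is straightforward to reproduce; a sketch for two of the procedures (t-test and Mann-Whitney) under the uniform population model, with an arbitrary location shift and sample size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def power(test, shift, n=20, reps=5000, alpha=0.05):
    """Fraction of replications in which the test rejects at level alpha."""
    hits = 0
    for _ in range(reps):
        x = rng.uniform(0, 1, n)             # uniform population model
        y = rng.uniform(0, 1, n) + shift     # location-shifted alternative
        hits += test(x, y) < alpha
    return hits / reps

t_test = lambda x, y: stats.ttest_ind(x, y).pvalue
mann_whitney = lambda x, y: stats.mannwhitneyu(x, y, alternative="two-sided").pvalue

print("t-test power      :", power(t_test, shift=0.25))
print("Mann-Whitney power:", power(mann_whitney, shift=0.25))
```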
Rasch analysis of the hospital anxiety and depression scale among Chinese cataract patients.
Lin, Xianchai; Chen, Ziyan; Jin, Ling; Gao, Wuyou; Qu, Bo; Zuo, Yajing; Liu, Rongjiao; Yu, Minbin
2017-01-01
To analyze the validity of the Hospital Anxiety and Depression Scale (HADS) in a Chinese cataract population. A total of 275 participants with unilateral or bilateral cataract were recruited to complete the Chinese version of the HADS. The patients' demographic and ophthalmic characteristics were documented. Rasch analysis was conducted to examine the model fit statistics, the threshold ordering of the polytomous items, targeting, person separation index and reliability, local dependency, unidimensionality, differential item functioning (DIF) and construct validity of the HADS individual and summary measures. Rasch analysis was performed on the anxiety and depression subscales as well as on the HADS-Total score. The items of the original HADS-Anxiety, HADS-Depression and HADS-Total demonstrated evidence of misfit to the Rasch model. Removing item A7 from the anxiety subscale and rescoring item D14 on the depression subscale significantly improved Rasch model fit. A 12-item higher-order total scale with further removal of D12 was found to fit the Rasch model. The modified items had ordered response thresholds. No uniform DIF was detected, whereas notable non-uniform DIF was found in the high-ability group. Revised cut-off points are given for the modified anxiety and depression subscales. The modified version of the HADS, with HADS-A and HADS-D as subscales and HADS-T as a higher-order measure, is a reliable and valid instrument that may be useful for assessing anxiety and depression states in the Chinese cataract population.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-05-01
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.
1996 summary : public transportation systems in Washington state
DOT National Transportation Integrated Search
1997-09-01
The Washington State Department of Transportation (WSDOT) prepares the annual transit statistical summary. The intent for this summary, required by Section 35.58.2796 RCW, is to provide uniform data to transit providers, the Legislative Transportatio...
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.
Uniform quantized electron gas
NASA Astrophysics Data System (ADS)
Høye, Johan S.; Lomba, Enrique
2016-10-01
In this work we study the correlation energy of the quantized electron gas of uniform density at temperature T = 0. To do so we utilize methods from classical statistical mechanics. The basis for this is the Feynman path integral for the partition function of quantized systems. With this representation the quantum mechanical problem can be interpreted as, and is equivalent to, a classical polymer problem in four dimensions where the fourth dimension is imaginary time. Thus methods, results, and properties obtained in the statistical mechanics of classical fluids can be utilized. From this viewpoint we recover the well known RPA (random phase approximation). Then to improve it we modify the RPA by requiring the corresponding correlation function to be such that electrons with equal spins can not be on the same position. Numerical evaluations are compared with well known results of a standard parameterization of Monte Carlo correlation energies.
Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei
2015-01-01
A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of the measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes by four kinds of sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in a row and column was infinity, the relative accuracy was 99.50-99.89%, the reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples are distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.
Application of Raman spectroscopy for on-line monitoring of low dose blend uniformity.
Hausman, Debra S; Cambron, R Thomas; Sakr, Adel
2005-07-14
On-line Raman spectroscopy was used to evaluate the effect of blending time on low dose, 1%, blend uniformity of azimilide dihydrochloride. An 8 qt blender was used for the experiments and instrumented with a Raman probe through the I-bar port. The blender was slowed to 6.75 rpm to better illustrate the blending process (normal speed is 25 rpm). Uniformity was reached after 20 min of blending at 6.75 rpm (135 revolutions or 5.4 min at 25 rpm). On-line Raman analysis of blend uniformity provided more benefits than traditional thief sampling and off-line analysis. On-line Raman spectroscopy enabled generating data rich blend profiles, due to the ability to collect a large number of samples during the blending process (sampling every 20s). In addition, the Raman blend profile was rapidly generated, compared to the lengthy time to complete a blend profile with thief sampling and off-line analysis. The on-line Raman blend uniformity results were also significantly correlated (p-value < 0.05) to the HPLC uniformity results of thief samples.
The Dundee Ready Education Environment Measure (DREEM): a review of its adoption and use.
Miles, Susan; Swift, Louise; Leinster, Sam J
2012-01-01
The Dundee Ready Education Environment Measure (DREEM) was published in 1997 as a tool to evaluate educational environments of medical schools and other health training settings and a recent review concluded that it was the most suitable such instrument. This study aimed to review the settings and purposes to which the DREEM has been applied and the approaches used to analyse and report it, with a view to guiding future users towards appropriate methodology. A systematic literature review was conducted using the Web of Knowledge databases of all articles reporting DREEM data between 1997 and 4 January 2011. The review found 40 publications, using data from 20 countries. DREEM is used in evaluation for diagnostic purposes, comparison between different groups and comparison with ideal/expected scores. A variety of non-parametric and parametric statistical methods have been applied, but their use is inconsistent. DREEM has been used internationally for different purposes and is regarded as a useful tool by users. However, reporting and analysis differs between publications. This lack of uniformity makes comparison between institutions difficult. Most users of DREEM are not statisticians and there is a need for informed guidelines on its reporting and statistical analysis.
An analysis of the effectiveness of two topical anesthetics.
Rosivack, R. G.; Koenigsberg, S. R.; Maxwell, K. C.
1990-01-01
This study compared the effectiveness of topical benzocaine 20%, lidocaine 5%, and a placebo in reducing the pain caused by needle insertion when the medicament was placed in the mucobuccal fold above the maxillary canine eminence. Both topical anesthetics and the placebo were randomly tested against each other bilaterally. For uniformity the agents were left in place for three minutes before needle insertion. A 27 gauge short needle mounted on an aspirating syringe was then inserted just past the bevel. Each subject rated the degree of pain on a visual analogue scale 100 mm in length. A pulse oximeter was used to record the heart rate. The results indicate that both topical anesthetics are significantly better than the placebo in reducing pain caused by needle insertion, although no statistically significant differences were found between the two topical anesthetics. Statistically significant differences in heart rate were seen, but these differences were not clinically significant. It is concluded that benzocaine 20% and lidocaine 5% significantly reduce the pain during needle insertion. PMID:2097909
Quantitative EEG analysis of the maturational changes associated with childhood absence epilepsy
NASA Astrophysics Data System (ADS)
Rosso, O. A.; Hyslop, W.; Gerlach, R.; Smith, R. L. L.; Rostas, J. A. P.; Hunter, M.
2005-10-01
This study aimed to examine the background electroencephalography (EEG) in children with childhood absence epilepsy, a condition whose presentation has strong developmental links. EEG hallmarks of absence seizure activity are widely accepted, and there is recognition that the bulk of inter-ictal EEG in this group is normal to the naked eye. This multidisciplinary study used the normalized total wavelet entropy (NTWS) (Signal Processing 83 (2003) 1275) to examine the background EEG of patients demonstrating absence seizure activity and compare it with that of children without absence epilepsy. This calculation can be used to define the degree of order in a system, with higher levels of entropy indicating a more disordered (chaotic) system. Results were subjected to further statistical analyses of significance. Entropy values were calculated for patients versus controls. For all channels combined, patients with absence epilepsy showed statistically significantly lower entropy values than controls. The size of the difference in entropy values was not uniform, with certain EEG electrodes consistently showing greater differences than others.
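A sketch of one common formulation of the normalized total wavelet entropy (relative energies of the wavelet decomposition levels fed into a normalized Shannon entropy, following Rosso et al.; whether this matches the paper's exact normalization is an assumption):

```python
import numpy as np
import pywt  # PyWavelets

def ntws(signal, wavelet="db4", level=5):
    """Normalized total wavelet entropy: Shannon entropy of the relative
    energies p_j of the decomposition levels, divided by log(levels),
    so ~1 = maximally disordered and ~0 = single-scale signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs[1:]])  # detail levels
    p = energies / energies.sum()
    p = p[p > 0]                                             # guard against log(0)
    return -np.sum(p * np.log(p)) / np.log(level)

rng = np.random.default_rng(8)
t = np.arange(2048) / 256.0
print(ntws(np.sin(2 * np.pi * 10 * t)))   # narrow-band signal: low entropy
print(ntws(rng.normal(size=2048)))        # white noise: entropy near 1
```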
Bio-based renewable additives for anti-icing applications (phase one).
DOT National Transportation Integrated Search
2016-09-04
The performance and impacts of several bio-based anti-icers along with a traditional chloride-based anti-icer (salt brine) were evaluated. : A statistical design of experiments (uniform design) was employed for developing anti-icing liquids consistin...
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
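Under binormality with equal variances the analytic value reduces to Phi((mu1 - mu0) / sqrt(sigma0^2 + sigma1^2)), which can be checked against a rank-based empirical c-statistic (parameter values below are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
mu0, mu1, sd = 0.0, 1.0, 1.0                 # binormal model, equal variances
n0 = n1 = 5000

# Analytical c-statistic under binormality with equal variances.
c_analytic = stats.norm.cdf((mu1 - mu0) / np.sqrt(2 * sd**2))

# Empirical c-statistic: rank-based (Mann-Whitney) estimate of P(X1 > X0).
x0 = rng.normal(mu0, sd, n0)
x1 = rng.normal(mu1, sd, n1)
u = stats.mannwhitneyu(x1, x0, alternative="greater").statistic
c_empirical = u / (n0 * n1)

print(f"analytic c = {c_analytic:.3f}, empirical c = {c_empirical:.3f}")
```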
DETECTING UNSPECIFIED STRUCTURE IN LOW-COUNT IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nathan M.; Dyk, David A. van; Kashyap, Vinay L.
Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value that enables us to draw strong conclusions even when there are limited computational resources that can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.
Casimiro, Ana C; Vinga, Susana; Freitas, Ana T; Oliveira, Arlindo L
2008-02-07
Motif finding algorithms have developed in their ability to use computationally efficient methods to detect patterns in biological sequences. However the posterior classification of the output still suffers from some limitations, which makes it difficult to assess the biological significance of the motifs found. Previous work has highlighted the existence of positional bias of motifs in the DNA sequences, which might indicate not only that the pattern is important, but also provide hints of the positions where these patterns occur preferentially. We propose to integrate position uniformity tests and over-representation tests to improve the accuracy of the classification of motifs. Using artificial data, we have compared three different statistical tests (Chi-Square, Kolmogorov-Smirnov and a Chi-Square bootstrap) to assess whether a given motif occurs uniformly in the promoter region of a gene. Using the test that performed better in this dataset, we proceeded to study the positional distribution of several well known cis-regulatory elements, in the promoter sequences of different organisms (S. cerevisiae, H. sapiens, D. melanogaster, E. coli and several Dicotyledons plants). The results show that position conservation is relevant for the transcriptional machinery. We conclude that many biologically relevant motifs appear heterogeneously distributed in the promoter region of genes, and therefore, that non-uniformity is a good indicator of biological relevance and can be used to complement over-representation tests commonly used. In this article we present the results obtained for the S. cerevisiae data sets.
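The position-uniformity idea can be illustrated with a one-sample Kolmogorov-Smirnov test against the uniform distribution (synthetic motif positions below; the KS test was only one of the three statistics compared in the paper, so this is illustrative rather than the selected method):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
promoter_len = 800

# Positions of a hypothetical motif across promoters: clustered near the TSS
# versus scattered uniformly over the promoter region.
clustered = np.clip(rng.normal(650, 60, 200), 0, promoter_len)
scattered = rng.uniform(0, promoter_len, 200)

for name, pos in [("clustered", clustered), ("scattered", scattered)]:
    # One-sample KS test against the uniform distribution on [0, promoter_len].
    stat, pval = stats.kstest(pos, "uniform", args=(0, promoter_len))
    print(f"{name}: KS={stat:.3f} p={pval:.2e}")
```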
Benmarhnia, Tarik; Huang, Jonathan Y.; Jones, Catherine M.
2017-01-01
Background: Calls for evidence-informed public health policy, with implicit promises of greater program effectiveness, have intensified recently. The methods to produce such policies are not self-evident, requiring a conciliation of values and norms between policy-makers and evidence producers. In particular, the translation of uncertainty from empirical research findings, particularly issues of statistical variability and generalizability, is a persistent challenge because of the incremental nature of research and the iterative cycle of advancing knowledge and implementation. This paper aims to assess how the concept of uncertainty is considered and acknowledged in World Health Organization (WHO) policy recommendations and guidelines. Methods: We selected four WHO policy statements published between 2008 and 2013 regarding maternal and child nutrient supplementation, infant feeding, heat action plans, and malaria control to represent topics with a spectrum of available evidence bases. Each of these four statements was analyzed using a novel framework to assess the treatment of statistical variability and generalizability. Results: WHO currently provides substantial guidance on addressing statistical variability through GRADE (Grading of Recommendations Assessment, Development, and Evaluation) ratings for precision and consistency in their guideline documents. Accordingly, our analysis showed that policy-informing questions were addressed by systematic reviews and representations of statistical variability (eg, with numeric confidence intervals). In contrast, the presentation of contextual or "background" evidence regarding etiology or disease burden showed little consideration for this variability. Moreover, generalizability or "indirectness" was uniformly neglected, with little explicit consideration of study settings or subgroups. Conclusion: In this paper, we found that statistical variability and generalizability were treated non-uniformly, and that factors that may contribute to uncertainty regarding recommendations were neglected, including the state of evidence informing background questions (prevalence, mechanisms, or burden or distributions of health problems), generalizability, alternate interventions, and additional outcomes not captured by systematic review. These other factors often form a basis for providing policy recommendations, particularly in the absence of a strong evidence base for intervention effects. Consequently, they should also be subject to stringent and systematic evaluation criteria. We suggest that more effort is needed to systematically acknowledge (1) when evidence is missing, conflicting, or equivocal, (2) what normative considerations were also employed, and (3) how additional evidence may be accrued. PMID:29179291
Isospectrals of non-uniform Rayleigh beams with respect to their uniform counterparts
Ganguli, Ranjan
2018-01-01
In this paper, we look for non-uniform Rayleigh beams isospectral to a given uniform Rayleigh beam. Isospectral systems are those that have the same spectral properties, i.e. the same free vibration natural frequencies for a given boundary condition. A transformation is proposed that converts the fourth-order governing differential equation of non-uniform Rayleigh beam into a uniform Rayleigh beam. If the coefficients of the transformed equation match with those of the uniform beam equation, then the non-uniform beam is isospectral to the given uniform beam. The boundary-condition configuration should be preserved under this transformation. We present the constraints under which the boundary configurations will remain unchanged. Frequency equivalence of the non-uniform beams and the uniform beam is confirmed by the finite-element method. For the considered cases, examples of beams having a rectangular cross section are presented to show the application of our analysis. PMID:29515879
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacko, M; Aldoohan, S
Purpose: The low contrast detectability (LCD) of a CT scanner is its ability to detect and display faint lesions. The current approach to quantifying LCD uses vendor-specific methods and phantoms, typically by subjectively observing the smallest object visible at a contrast level above the phantom background. However, this approach does not yield clinically applicable values for LCD. The current study proposes a statistical LCD metric using software tools not only to assess scanner performance, but also to quantify the key factors affecting LCD. This approach was developed using uniform QC phantoms, and its applicability was then extended under simulated clinical conditions. Methods: MATLAB software was developed to compute LCD using a uniform image of a QC phantom. For a given virtual object size, the software randomly samples the image within a selected area, and uses statistical analysis based on Student's t-distribution to compute the LCD as the minimal Hounsfield Units that can be distinguished from the background at the 95% confidence level. Its validity was assessed by comparison with the behavior of a known QC phantom under various scan protocols and with a tissue-mimicking phantom. The contributions of beam quality and scattered radiation to the computed LCD were quantified by using various external beam-hardening filters and phantom lengths. Results: As expected, the LCD was inversely related to object size under all scan conditions. The type of image reconstruction kernel filter and the tissue/organ type strongly influenced the background noise characteristics and, therefore, the computed LCD for the associated image. Conclusion: The proposed metric and its associated software tools are vendor-independent and can be used to analyze the LCD performance of any scanner. Furthermore, the method employed can be used in conjunction with the relationships established in this study between LCD and tissue type to extend these concepts to patients' clinical CT images.
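One way to realize such a statistical LCD metric (a sketch under assumed parameters, not the study's MATLAB implementation) is to sample random ROIs of the virtual object size from a uniform image and take a one-sided 95% t-bound on the spread of the ROI means:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
image = rng.normal(40.0, 5.0, size=(512, 512))   # uniform phantom region (HU)

def lcd(image, object_px, n_samples=500, conf=0.95):
    """Smallest HU difference from background detectable at the given
    confidence for a virtual object covering object_px pixels, estimated
    from the spread of randomly sampled square ROI means."""
    h, w = image.shape
    side = int(np.sqrt(object_px))
    means = np.empty(n_samples)
    for k in range(n_samples):
        r = rng.integers(0, h - side)
        c = rng.integers(0, w - side)
        means[k] = image[r:r + side, c:c + side].mean()
    t_crit = stats.t.ppf(conf, df=n_samples - 1)
    return t_crit * means.std(ddof=1)

for px in (4, 16, 64, 256):
    print(f"{px:4d}-pixel object: LCD ~ {lcd(image, px):.2f} HU")
```

Running this shows the expected inverse relation between LCD and object size: larger virtual objects average over more pixels, so smaller contrasts become distinguishable from the background.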
Radio Occultation Investigation of the Rings of Saturn and Uranus
NASA Technical Reports Server (NTRS)
Marouf, Essam A.
1997-01-01
The proposed work addresses two main objectives: (1) to pursue the development of the random diffraction screen model for analytical/computational characterization of the extinction and near-forward scattering by ring models that include particle crowding, uniform clustering, and clustering along preferred orientations (anisotropy). The characterization is crucial for proper interpretation of past (Voyager) and future (Cassini) ring occultation observations in terms of physical ring properties, and is needed to address outstanding puzzles in the interpretation of the Voyager radio occultation data sets; (2) to continue the development of spectral analysis techniques to identify and characterize the power scattered by all features of Saturn's rings that can be resolved in the Voyager radio occultation observations, and to use the results to constrain the maximum particle size and its abundance. Characterization of the variability of surface mass density among the main ring features and within individual features is important for constraining the ring mass and is relevant to investigations of ring dynamics and origin. We completed the development of the stochastic geometry (random screen) model for the interaction of electromagnetic waves with planetary ring models, and used the model to relate the oblique optical depth and the angular spectrum of the near-forward scattered signal to statistical averages of the stochastic geometry of the randomly blocked area. We developed analytical results based on the assumption of Poisson statistics for particle positions, and investigated the dependence of the oblique optical depth and angular spectrum on the fractional area blocked, vertical ring profile, and incidence angle when the volume fraction is small, demonstrating agreement with the classical radiative transfer predictions for oblique incidence. We also developed simulation procedures to generate statistical realizations of random screens corresponding to uniformly packed ring models, and used the results to characterize the dependence of the extinction and near-forward scattering on ring thickness, packing fraction, and the ring opening angle.
NASA Astrophysics Data System (ADS)
Libera, Arianna; de Barros, Felipe P. J.; Riva, Monica; Guadagnini, Alberto
2017-10-01
Our study is keyed to the analysis of the interplay between engineering factors (i.e., transient pumping rates versus less realistic but commonly analyzed uniform extraction rates) and the heterogeneous structure of the aquifer (as expressed by the probability distribution characterizing transmissivity) in contaminant transport. We explore the joint influence of (a) diverse groundwater pumping schedules (constant and variable in time) and (b) representations of the stochastic heterogeneous transmissivity (T) field on temporal histories of solute concentrations observed at an extraction well. The stochastic nature of T is rendered by modeling its natural logarithm, Y = ln T, through a typical Gaussian representation and the recently introduced Generalized sub-Gaussian (GSG) model. The latter has the unique property of embedding scale-dependent non-Gaussian features of the main statistics of Y and its (spatial) increments, which have been documented in a variety of studies. We rely on numerical Monte Carlo simulations and compute the temporal evolution at the well of low-order moments of the solute concentration (C), as well as statistics of the peak concentration (Cp), identified as the environmental performance metric of interest in this study. We show that the pumping schedule strongly affects the pattern of the temporal evolution of the first two statistical moments of C, regardless of the nature (Gaussian or non-Gaussian) of the underlying Y field, whereas the latter quantitatively influences their magnitude. Our results show that uncertainty associated with C and Cp estimates is larger when operating under a transient extraction scheme than under the action of a uniform withdrawal schedule. The probability density function (PDF) of Cp displays a long positive tail in the presence of a time-varying pumping schedule. All these aspects are magnified in the presence of non-Gaussian Y fields. Additionally, the PDF of Cp displays a bimodal shape for all types of pumping schemes analyzed, independent of the type of heterogeneity considered.
Effects of fixture rotation on coating uniformity for high-performance optical filter fabrication
NASA Astrophysics Data System (ADS)
Rubin, Binyamin; George, Jason; Singhal, Riju
2018-04-01
Coating uniformity is critical in fabricating high-performance optical filters by various vacuum deposition methods. Simple and planetary rotation systems with shadow masks are used to achieve the required uniformity [J. B. Oliver and D. Talbot, Appl. Optics 45, 13, 3097 (2006); O. Lyngnes, K. Kraus, A. Ode and T. Erguder, in `Method for Designing Coating Thickness Uniformity Shadow Masks for Deposition Systems with a Planetary Fixture', 2014 Technical Conference Proceedings, Optical Coatings, August 13, 2014, DOI: 10.14332/svc14.proc.1817.]. In this work, we discuss the effect of rotation pattern and speed on thickness uniformity in an ion beam sputter deposition system. Numerical modeling is used to determine the statistical distribution of random thickness errors in coating layers. The relationship between thickness tolerance and production yield is simulated theoretically and demonstrated experimentally. Production yields for different optical filters produced in an ion beam deposition system with planetary rotation are presented. Single-wavelength and broadband optical monitoring systems were used for endpoint monitoring during filter deposition. Limitations of the thickness tolerances that can be achieved in systems with planetary rotation are shown. Paths for improving production yield in an ion beam deposition system are described.
Development of Uniform Protocol for Alopecia Areata Clinical Trials.
Solomon, James A
2015-11-01
Developing a successful treatment for alopecia areata (AA) clearly has not been at the forefront of the agenda for new drug/device development among the pharmaceutical and medical device industry. The National Alopecia Areata Foundation (NAAF), a patient advocacy group, initiated a plan to facilitate and drive clinical research toward finding safe and efficacious treatments for AA. As such, Alopecia Areata Uniform Protocols for clinical trials to test new treatments for AA were developed. The uniform protocol is designed to serve as a plug-and-play template and to provide a framework wherein data from studies utilizing it can be compared through consistent inclusion/exclusion criteria, safety assessments, and outcome assessment measures. The result is a core uniform protocol for use by pharmaceutical companies in testing proof of concept for investigational products to treat AA. The core protocol includes a standardized title, informed consent, inclusion/exclusion criteria, disease outcome assessments, and safety assessments. The statistical methodology to assess successful outcomes will also be standardized. The protocol as well as the informed consent form has been approved in concept by Liberty IRB and is ready to present to pharmaceutical companies.
Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling.
Nord, Camilla L; Valton, Vincent; Wood, John; Roiser, Jonathan P
2017-08-23
Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered-some very seriously so-but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. Copyright © 2017 Nord, Valton et al.
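A minimal sketch of the kind of Gaussian mixture decomposition described, using scikit-learn on synthetic power estimates (the data and component parameters below are invented for illustration, not the study's fitted values):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical stand-in for the 730 study-level power estimates (fractions in [0, 1]).
power = np.concatenate([
    rng.normal(0.10, 0.04, 400),   # low-powered subcomponent
    rng.normal(0.45, 0.10, 230),   # moderately powered subcomponent
    rng.normal(0.85, 0.07, 100),   # well-powered subcomponent
]).clip(0.01, 0.99).reshape(-1, 1)

# Choose the number of mixture components by BIC rather than assuming one,
# since a single summary statistic cannot characterize a multimodal sample.
models = [GaussianMixture(n_components=k, random_state=0).fit(power)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(power))
print("components:", best.n_components)
print("means:", best.means_.ravel().round(2))
print("weights:", best.weights_.round(2))
```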
A five year review of paediatric burns and social deprivation: Is there a link?
Richards, Helen; Kokocinska, Maria; Lewis, Darren
2017-09-01
To establish if there is a correlation between burn incidence and social deprivation in order to formulate a more effective burns prevention strategy. A quantitative retrospective review of the International Burn Injury Database (IBID) was carried out over the period from 2006 to 2011 to obtain data for children referred to our burns centre in the West Midlands. Social deprivation scores for geographical areas were obtained from the Office for National Statistics (ONS). Statistical analysis was carried out using GraphPad Prism. 1688 children were reviewed at our burns centre. Statistical analysis using the Pearson correlation coefficient showed a slight association between social deprivation and increasing burn incidence (r² = 0.1268, 95% confidence interval 0.018-0.219, p < 0.0001). There was a slight male preponderance (58%). The most common mechanism of injury was scalding (61%). The most commonly affected age group was 1-2 year olds (38%). There were statistically significant differences in ethnicity, with significantly more children from Asian and African backgrounds being referred compared to Caucasian children. We found that appropriate first aid was administered in 67% of cases overall. We did not find a statistically significant link between first aid provision and social deprivation score. There was only a slight positive correlation between social deprivation and burn incidence. However, there did not seem to be any change in the mechanism of burn in the most deprived groups compared to the overall pattern, nor was there a significant difference in appropriate first aid provision. It would seem that dissemination of burn prevention strategies and first aid advice needs to be improved across all geographical areas, as this was uniformly lacking, and the increased burn incidence in more socially deprived groups, although present, was not statistically significant. Copyright © 2017 Elsevier Ltd and ISBI. All rights reserved.
Shang, Ce; Chaloupka, Frank J; Zahra, Nahleen; Fong, Geoffrey T
2013-01-01
Background The distribution of cigarette prices has rarely been studied and compared under different tax structures. Descriptive evidence on price distributions by countries can shed light on opportunities for tax avoidance and brand switching under different tobacco tax structures, which could impact the effectiveness of increased taxation in reducing smoking. Objective This paper aims to describe the distribution of cigarette prices by countries and to compare these distributions based on the tobacco tax structure in these countries. Methods We employed data for 16 countries taken from the International Tobacco Control Policy Evaluation Project to construct survey-derived cigarette prices for each country. Self-reported prices were weighted by cigarette consumption and described using a comprehensive set of statistics. We then compared these statistics for cigarette prices under different tax structures. In particular, countries of similar income levels and countries that impose similar total excise taxes using different tax structures were paired and compared in mean and variance using a two-sample comparison test. Findings Our investigation illustrates that, compared with specific uniform taxation, other tax structures, such as ad valorem uniform taxation, mixed (a tax system using ad valorem and specific taxes) uniform taxation, and tiered tax structures of specific, ad valorem and mixed taxation tend to have price distributions with greater variability. Countries that rely heavily on ad valorem and tiered taxes also tend to have greater price variability around the median. Among mixed taxation systems, countries that rely more heavily on the ad valorem component tend to have greater price variability than countries that rely more heavily on the specific component. In countries with tiered tax systems, cigarette prices are skewed more towards lower prices than are prices under uniform tax systems. The analyses presented here demonstrate that more opportunities exist for tax avoidance and brand switching when the tax structure departs from a uniform specific tax. PMID:23792324
A general statistical test for correlations in a finite-length time series.
Hanson, Jeffery A; Yang, Haw
2008-06-07
The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations, and an application to single-molecule fluorescence spectroscopy is discussed.
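For intuition, a minimal sketch contrasting the two estimators on i.i.d. noise, where the true autocorrelation vanishes at all nonzero lags (synthetic data; this is not the authors' code, and the normalizations are common textbook choices):

```python
import numpy as np

def acf_direct(x):
    """Direct (moving-average) estimator: fewer terms enter at larger lags."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / (n - k)
                     for k in range(n)]) / x.var()

def acf_fft(x):
    """Circular estimator via FFT: a uniform number of terms at every lag."""
    x = x - x.mean()
    n = len(x)
    f = np.fft.fft(x)
    acov = np.fft.ifft(f * np.conj(f)).real / n
    return acov / acov[0]

rng = np.random.default_rng(2)
x = rng.normal(size=4096)          # i.i.d. noise: true correlation is zero
print(acf_direct(x)[:5].round(3))
print(acf_fft(x)[:5].round(3))
```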
Fertigation uniformity under sprinkler irrigation: evaluation and analysis
USDA-ARS?s Scientific Manuscript database
In modern farming systems, fertigation is widely practiced as a cost-effective and convenient method for applying soluble fertilizers to crops. Along with efficiency and adequacy, uniformity is an important fertigation performance evaluation criterion. Fertigation uniformity is defined here as a comp...
Continuous-variable quantum key distribution in uniform fast-fading channels
NASA Astrophysics Data System (ADS)
Papanastasiou, Panagiotis; Weedbrook, Christian; Pirandola, Stefano
2018-03-01
We investigate the performance of several continuous-variable quantum key distribution protocols in the presence of uniform fading channels. These are lossy channels whose transmissivity changes according to a uniform probability distribution. We assume the worst-case scenario where an eavesdropper induces a fast-fading process, where she chooses the instantaneous transmissivity while the remote parties may only detect the mean statistical effect. We analyze coherent-state protocols in various configurations, including the one-way switching protocol in reverse reconciliation, the measurement-device-independent protocol in the symmetric configuration, and its extension to a three-party network. We show that, regardless of the advantage given to the eavesdropper (control of the fading), these protocols can still achieve high rates under realistic attacks, within reasonable values for the variance of the probability distribution associated with the fading process.
2009-08-01
Mike Wilson, Westat, Inc. developed weights for this survey. Westat performed data collection and editing. DMDC's Survey Technology Branch, under... STATISTICAL METHODOLOGY REPORT Executive Summary: The Uniformed and Overseas Citizens Absentee Voting Act of 1986 (UOCAVA), 42 USC 1973ff, permits members of... citizens covered by UOCAVA, (2) to assess the impact of the FVAP's efforts to simplify and ease the process of voting absentee, (3) to evaluate other...
Heat Capacity Mapping Mission (HCMM): Interpretation of imagery over Canada
NASA Technical Reports Server (NTRS)
Cihlar, J. (Principal Investigator); Dixon, R. G.
1981-01-01
Visual analysis of HCMM images acquired over two sites in Canada and supporting aircraft and ground data obtained at a smaller subsite in Alberta show that nighttime surface temperature distribution is primarily related to the near-surface air temperature; the effects of topography, wind, and land cover were low or indirect through air temperature. Surface cover and large altitudinal differences were important parameters influencing daytime apparent temperature values. A quantitative analysis of the relationship between the antecedent precipitation index and the satellite thermal IR measurements did not yield statistically significant correlation coefficients, but the correlations had a definite temporal trend which could be related to the increasing uniformity of vegetation cover. The large pixel size (resulting in a mixture of cover types and soil/canopy temperatures measured by the satellite) and high cloud cover frequency found in images covering both Canadian sites and the northern U.S. were considered the main deficiencies of the thermal satellite data.
Stocker, Elena; Toschkoff, Gregor; Sacher, Stephan; Khinast, Johannes G
2014-11-20
The purpose of this study is to evaluate the use of computer simulations for generating quantitative knowledge as a basis for risk ranking and mechanistic process understanding, as required by ICH Q9 on quality risk management systems. In this publication, the main focus is the demonstration of a risk assessment workflow that includes a computer simulation for generating mechanistic understanding of active tablet coating in a pan coater. Process parameter screening studies are statistically planned in consideration of their impact on a potentially critical quality attribute, i.e., coating mass uniformity. Based on the computer simulation data, the process failure mode and effects analysis of the risk factors is performed. This results in a quantitative criticality assessment of process parameters and the risk priority evaluation of failure modes. The factor for a quantitative reassessment of the criticality and risk priority is the coefficient of variation, which represents the coating mass uniformity. The major conclusion drawn from this work is a successful demonstration of the integration of computer simulation into the risk management workflow, leading to an objective and quantitative risk assessment. Copyright © 2014. Published by Elsevier B.V.
Harrison, Ute; Fowora, Muinah A.; Seriki, Abiodun T.; Loell, Eva; Mueller, Susanna; Ugo-Ijeh, Margaret; Onyekwere, Charles A.; Lesi, Olufunmilayo A.; Otegbayo, Jesse A.; Akere, Adegboyega; Ndububa, Dennis A.; Adekanle, Olusegun; Anomneze, Ebere; Abdulkareem, Fatimah B.; Adeleye, Isaac A.; Crispin, Alexander; Rieder, Gabriele; Fischer, Wolfgang; Smith, Stella I.; Haas, Rainer
2017-01-01
Antibiotic resistance in Helicobacter pylori is a factor preventing its successful eradication. Particularly in developing countries, resistance against commonly used antibiotics is widespread. Here, we present an epidemiological study from Nigeria with 111 isolates. We analyzed the associated disease outcome and performed a detailed characterization of these isolated strains with respect to their antibiotic susceptibility and their virulence characteristics. Furthermore, statistical analysis was performed on microbiological data as well as patient information and the results of the gastroenterological examination. We found that the variability concerning the production of virulence factors between strains was minimal, with 96.4% of isolates being CagA-positive and 92.8% producing detectable VacA levels. In addition, high frequency of bacterial resistance was observed for metronidazole (99.1%), followed by amoxicillin (33.3%), clarithromycin (14.4%) and tetracycline (4.5%). In conclusion, this study indicated that the rate of H. pylori infection within the cohort was surprisingly low (36.6%). Furthermore, average gastric pathology was observed by histological grading, and bacterial isolates showed a uniform pathogenicity profile while displaying divergent antibiotic resistance rates. PMID:28463973
Lagrangian analysis by clustering. An example in the Nordic Seas.
NASA Astrophysics Data System (ADS)
Koszalka, Inga; Lacasce, Joseph H.
2010-05-01
We propose a new method for obtaining average velocities and eddy diffusivities from Lagrangian data. Rather than grouping the drifter-derived velocities in uniform geographical bins, as is commonly done, we group a specified number of nearest-neighbor velocities. This is done via a clustering algorithm operating on the instantaneous positions of the drifters. Thus it is the data distribution itself which determines the positions of the averages and the areal extent of the clusters. A major advantage is that because the number of members is essentially the same for all clusters, the statistical accuracy is more uniform than with geographical bins. We illustrate the technique using synthetic data from a stochastic model, employing a realistic mean flow. The latter is an accurate representation of the surface currents in the Nordic Seas and is strongly inhomogeneous in space. We use the clustering algorithm to extract the mean velocities and diffusivities (both of which are known from the stochastic model). We also compare the results to those obtained with fixed geographical bins. Clustering is more successful at capturing spatial variability of the mean flow and also improves convergence in the eddy diffusivity estimates. We discuss both the future prospects and shortcomings of the new method.
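A rough sketch of the idea, using k-means on drifter positions as a stand-in for the nearest-neighbor grouping the authors describe (synthetic positions and velocities; scikit-learn's KMeans is an assumption here, not the authors' clustering algorithm):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# Hypothetical drifter data: instantaneous positions (x, y) and velocities (u, v).
pos = rng.uniform(0.0, 10.0, size=(5000, 2))
vel = (np.stack([np.sin(pos[:, 1]), np.cos(pos[:, 0])], axis=1)
       + rng.normal(0.0, 0.3, (5000, 2)))        # mean flow + eddy noise

# Cluster on positions only, sized so each cluster holds ~100 observations;
# the data distribution itself, not a fixed geographic grid, sets the centers.
k = len(pos) // 100
labels = KMeans(n_clusters=k, n_init=4, random_state=0).fit_predict(pos)

# Per-cluster mean velocity (pseudo-Eulerian mean) and eddy variance,
# with roughly equal statistical accuracy across clusters.
for c in range(3):                               # show the first few clusters
    members = vel[labels == c]
    print(c, members.mean(axis=0).round(2), members.var(axis=0).round(2))
```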
NASA Astrophysics Data System (ADS)
Glazner, Allen F.; Sadler, Peter M.
2016-12-01
The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known, then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is (n + 1)/(n - 1).
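A quick Monte Carlo check of this correction, assuming the standard uniform-order-statistics result E[range] = (n - 1)/(n + 1), which is consistent with the ~80% figure quoted for n = 10:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_sample_range(n, trials=100_000):
    """Average range of n ages drawn uniformly over a unit-length interval."""
    dates = rng.random((trials, n))
    return (dates.max(axis=1) - dates.min(axis=1)).mean()

for n in (5, 10, 20):
    observed = mean_sample_range(n)
    expected = (n - 1) / (n + 1)            # theoretical mean range
    corrected = observed * (n + 1) / (n - 1)  # apply the correction factor
    print(f"n={n:2d}: mean range={observed:.3f} (theory {expected:.3f}), "
          f"corrected interval={corrected:.3f}")
```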
Code of Federal Regulations, 2010 CFR
2010-07-01
... Administration DEPARTMENT OF JUSTICE (CONTINUED) GRANTS FOR CORRECTIONAL FACILITIES General § 91.2 Definitions... for purposes of the Uniform Crime Reports. If such data is unavailable, Bureau of Justice Statistics... facilities) and job skills programs, educational programs, a pre-release prisoner assessment to provide risk...
Calibration of LRFR live load factors using weigh-in-motion data.
DOT National Transportation Integrated Search
2006-06-01
The Load and Resistance Factor Rating (LRFR) code for load rating bridges is based on factors calibrated from structural : load and resistance statistics to achieve a more uniform level of reliability for all bridges. The liveload factors in the : LR...
14 CFR 19-2 - Maintenance of data.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Operating Statistics Classifications Sec. 19-2 Maintenance of data. (a) Each air carrier required to file... in accordance with the uniform classifications prescribed. Codes are prescribed for each operating... flight numbers. The second grouping requires that the enplanement/deplanement information be broken out...
A Uniform Approach to Type Theory
1989-01-01
logical and statistical techniques. There is no comprehensive survey on implementation issues. Some partial aspects are described in...
Xia, Yun; Yan, Shuangqian; Zhang, Xian; Ma, Peng; Du, Wei; Feng, Xiaojun; Liu, Bi-Feng
2017-03-21
Digital loop-mediated isothermal amplification (dLAMP) is an attractive approach for absolute quantification of nucleic acids with high sensitivity and selectivity. Theoretical and numerical analysis of dLAMP provides necessary guidance for the design and analysis of dLAMP devices. In this work, a mathematical model was proposed on the basis of the Monte Carlo method and the theories of Poisson statistics and chemometrics. To examine the established model, we fabricated a spiral chip with 1200 uniform and discrete reaction chambers (9.6 nL) for absolute quantification of pathogenic DNA samples by dLAMP. Under the optimized conditions, dLAMP analysis on the spiral chip realized quantification of nucleic acids spanning over 4 orders of magnitude in concentration with sensitivity as low as 8.7 × 10⁻² copies/μL in 40 min. The experimental results were consistent with the proposed mathematical model, which could provide useful guidance for future development of dLAMP devices.
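The Poisson-statistics step for digital assays can be sketched as follows, using the chip geometry quoted in the abstract (1200 chambers of 9.6 nL); the normal-approximation confidence interval is an added simplification, not taken from the paper:

```python
import numpy as np

# Spiral-chip geometry from the abstract: 1200 chambers of 9.6 nL each.
n_chambers = 1200
v_chamber_uL = 9.6e-3              # 9.6 nL expressed in microliters

def dlamp_concentration(n_positive):
    """Poisson estimate of copies/uL from the positive-chamber count."""
    p = n_positive / n_chambers
    lam = -np.log(1.0 - p)         # mean copies per chamber (Poisson occupancy)
    return lam / v_chamber_uL

def dlamp_ci(n_positive):
    """Approximate 95% CI on copies/uL via the delta method on lambda."""
    p = n_positive / n_chambers
    lam = -np.log(1.0 - p)
    se = np.sqrt(p / (n_chambers * (1.0 - p)))   # delta-method SE of lambda
    return ((lam - 1.96 * se) / v_chamber_uL,
            (lam + 1.96 * se) / v_chamber_uL)

print(dlamp_concentration(300), dlamp_ci(300))
```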
NASA Astrophysics Data System (ADS)
Diehl, Stefan; Bremer, Daniel; Brinkmann, Kai-Thomas; Dormenev, Valery; Eissner, Tobias; Novotny, Rainer W.; Rosenbaum, Christoph; Zaunick, Hans-Georg; PANDA Collaboration
2017-06-01
The uniformity of the light collection is a crucial parameter for detectors based on inorganic scintillation crystals, to guarantee a response proportional to the deposited energy. Especially in the case of tapered crystals, as widely used to realize a 4π geometry of electromagnetic calorimeters (EMC) in high energy physics experiments, a strong non-uniformity is introduced by additional focusing of the scintillation light due to the tapered geometry. The paper discusses the determination and the reduction of the non-uniformity in strongly tapered lead tungstate crystals as used for the construction of the electromagnetic calorimeter of the PANDA detector at the future Facility for Antiproton and Ion Research (FAIR). Among different concepts for uniformization, a single de-polished lateral side face provided the optimum result, with a remaining non-uniformity below 5%, in good agreement with similar studies for the CMS ECAL at LHC. The impact on the achievable energy resolution for photon energies below 800 MeV is discussed in detail in comparison to GEANT4 simulations. The comparison of the response of two arrays with polished and de-polished crystals, respectively, shows in the latter case a significant improvement of the constant term of the parametrization of the energy resolution, down to 0.5%, accompanied by only a very slight increase of the statistical term.
Ervasti, Ilpo; Miranda, Ruben; Kauranen, Ilkka
2016-02-01
A global, comprehensive review of terms and definitions related to paper recycling was conducted in this article. Terms and definitions related to paper recycling have varied over time; different terms and different definitions for the same thing are used in different geographical regions and by different organizations. Definitions differ based on varying conceptions of waste paper as a raw material, and definitions of how to make various calculations related to paper recycling activity are inconsistent. Even such fundamental definitions as how to calculate recycling rate and paper consumption are not uniform. It can be concluded that there is no uniform system of terms and definitions related to paper recycling, and the implications of this deficiency are profound. For example, it is difficult to reliably compare statistics from different times and from different geographical regions. It is not possible to measure whether targets for recycling activities are met if the terms describing the targets are not uniformly defined. When reporting data for recycling targets, the lack of uniform terminology can impede the necessary transparency between different stakeholders and may allow for deception. The authors conclude there is a pressing need to develop a uniform system of terms and definitions related to paper recycling. Copyright © 2015 Elsevier Ltd. All rights reserved.
Middleton, Mark; Frantzis, Jim; Healy, Brendan; Jones, Mark; Murry, Rebecca; Kron, Tomas; Plank, Ashley; Catton, Charles; Martin, Jarad
2011-12-01
The quality assurance (QA) of image-guided radiation therapy (IGRT) within clinical trials is in its infancy, but its importance will continue to grow as IGRT becomes the standard of care. The purpose of this study was to demonstrate the feasibility of IGRT QA as part of the credentialing process for a clinical trial. As part of the accreditation process for a randomized trial of prostate cancer hypofractionation, IGRT benchmarking across multiple sites was incorporated. Each participating site underwent IGRT credentialing via a site visit. In all centers, intraprostatic fiducials were used. A real-time assessment of IGRT analysis was performed using Varian's Offline Review image analysis package. Two-dimensional (2D) kV and MV electronic portal imaging datasets from prostate patients were used, consisting of 39 treatment verification images for 2D/2D comparison with the digitally reconstructed radiograph derived from the planning scan. The influence of differing sites, image modality, and observer experience on IGRT was then assessed. Statistical analysis of the mean mismatch errors showed that IGRT analysis was performed uniformly across the three orthogonal planes, regardless of institution, therapist seniority, or imaging modality. The IGRT component of clinical trials that include sophisticated planning and treatment protocols must undergo stringent QA. The IGRT technique of intraprostatic fiducials has been shown in the context of this trial to be undertaken in a uniform manner across Australia. Extending this concept to many sites with different equipment and IGRT experience will require a robust remote credentialing process. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.
Monitoring and analysis of combustion aerosol emissions from fast moving diesel trains.
Burchill, Michael J; Gramotnev, Dmitri K; Gramotnev, Galina; Davison, Brian M; Flegg, Mark B
2011-02-01
In this paper we report the results of the detailed monitoring and analysis of combustion emissions from fast moving diesel trains. A new highly efficient monitoring methodology is proposed based on the measurements of the total number concentration (TNC) of combustion aerosols at a fixed point (on a bridge overpassing the railway) inside the violently mixing zone created by a fast moving train. Applicability conditions for the proposed methodology are presented, discussed and linked to the formation of the stable and uniform mixing zone. In particular, it is demonstrated that if such a mixing zone is formed, the monitoring results are highly consistent, repeatable (with typically negligible statistical errors and dispersion), stable with respect to the external atmospheric turbulence and result in an unusual pattern of the aerosol evolution with two or three distinct TNC maxima. It is also shown that the stability and uniformity of the created mixing zone (as well as the repeatability of the monitoring results) increase with increasing length of the train (with an estimated critical train length of ~10 carriages, at a speed of ~150 km/h). The analysis of the obtained evolutionary dependencies of aerosol TNC suggests that the major mechanisms responsible for the formation of the distinct concentration maxima are condensation (the second maximum) and thermal fragmentation of solid nanoparticle aggregates (the third maximum). The obtained results and the new methodology will be important for monitoring and analysis of combustion emissions from fast moving trains, and for the determination of the impact of rail networks on the atmospheric environment and human exposure to combustion emissions. Copyright © 2010 Elsevier B.V. All rights reserved.
Bayesian analysis of the kinetics of quantal transmitter secretion at the neuromuscular junction.
Saveliev, Anatoly; Khuzakhmetova, Venera; Samigullin, Dmitry; Skorinkin, Andrey; Kovyazina, Irina; Nikolsky, Eugeny; Bukharaeva, Ellya
2015-10-01
The timing of transmitter release from nerve endings is considered nowadays as one of the factors determining the plasticity and efficacy of synaptic transmission. In the neuromuscular junction, the moments of release of individual acetylcholine quanta are related to the synaptic delays of uniquantal endplate currents recorded under conditions of lowered extracellular calcium. Using Bayesian modelling, we performed a statistical analysis of synaptic delays in mouse neuromuscular junction with different patterns of rhythmic nerve stimulation and when the entry of calcium ions into the nerve terminal was modified. We have obtained a statistical model of the release timing which is represented as the summation of two independent statistical distributions. The first of these is the exponentially modified Gaussian distribution. The mixture of normal and exponential components in this distribution can be interpreted as a two-stage mechanism of early and late periods of phasic synchronous secretion. The parameters of this distribution depend on both the stimulation frequency of the motor nerve and the calcium ions' entry conditions. The second distribution was modelled as quasi-uniform, with parameters independent of nerve stimulation frequency and calcium entry. Two different probability density functions for the distribution of synaptic delays suggest at least two independent processes controlling the time course of secretion, one of them potentially involving two stages. The relative contribution of these processes to the total number of mediator quanta released depends differently on the motor nerve stimulation pattern and on calcium ion entry into nerve endings.
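A minimal sketch of such a two-component delay density, with invented parameters (the EMG shape is SciPy's exponnorm; none of the numbers below are the fitted values from the study):

```python
import numpy as np
from scipy.stats import exponnorm, uniform

# Hypothetical parameters (ms): phasic release as an exponentially modified
# Gaussian (Gaussian early stage, exponential late stage); delayed release
# as a quasi-uniform background component.
mu, sigma, tau = 0.5, 0.1, 0.3       # EMG parameters
lo, hi = 0.0, 5.0                    # support of the uniform component
w = 0.9                              # weight of the phasic (EMG) component

def delay_pdf(t):
    """Mixture density of synaptic delays: w*EMG + (1-w)*Uniform."""
    emg = exponnorm.pdf(t, tau / sigma, loc=mu, scale=sigma)  # shape K = tau/sigma
    uni = uniform.pdf(t, loc=lo, scale=hi - lo)
    return w * emg + (1.0 - w) * uni

t = np.linspace(0.0, 5.0, 501)
dt = t[1] - t[0]
print(float((delay_pdf(t) * dt).sum()))   # ~1.0: the mixture integrates to unity
```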
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Agüeros, Marcel A.; Fournier, Amanda P.; Street, Rachel; Ofek, Eran O.; Covey, Kevin R.; Levitan, David; Laher, Russ R.; Sesar, Branimir; Surace, Jason
2014-01-01
Many photometric time-domain surveys are driven by specific goals, such as searches for supernovae or transiting exoplanets, which set the cadence with which fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several sub-surveys are conducted in parallel, leading to non-uniform sampling over its ~20,000 deg² footprint. While the median 7.26 deg² PTF field has been imaged ~40 times in the R band, ~2300 deg² have been observed >100 times. We use PTF data to study the trade-off between searching for microlensing events in a survey whose footprint is much larger than that of typical microlensing searches, but with far-from-optimal time sampling. To examine the probability that microlensing events can be recovered in these data, we test statistics used on uniformly sampled data to identify variables and transients. We find that the von Neumann ratio performs best for identifying simulated microlensing events in our data. We develop a selection method using this statistic and apply it to data from fields with >10 R-band observations, 1.1 × 10⁹ light curves, uncovering three candidate microlensing events. We lack simultaneous, multi-color photometry to confirm these as microlensing events. However, their number is consistent with predictions for the event rate in the PTF footprint over the survey's three years of operations, as estimated from near-field microlensing models. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large data sets, which will be useful to future time-domain surveys, such as that planned with the Large Synoptic Survey Telescope.
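The von Neumann ratio is simple to compute; a minimal sketch on synthetic light curves (not the PTF pipeline) illustrates why smooth events stand out:

```python
import numpy as np

def von_neumann_ratio(mag):
    """Mean squared successive difference divided by the sample variance.

    ~2 for uncorrelated noise; values well below 2 indicate smooth,
    correlated variability such as a microlensing bump.
    """
    d2 = np.mean(np.diff(mag) ** 2)
    return d2 / np.var(mag, ddof=1)

rng = np.random.default_rng(4)
noise = rng.normal(size=200)                 # flat light curve: ratio near 2
t = np.linspace(-3.0, 3.0, 200)
bump = 0.1 * noise + np.exp(-t**2)           # smooth event + noise: ratio << 2
print(von_neumann_ratio(noise), von_neumann_ratio(bump))
```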
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Zheng, X; Liu, H
Purpose: This study evaluates the feasibility of a simultaneously integrated boost (SIB) to the hypoxic subvolume (HTV) in nasopharyngeal carcinomas under the guidance of 18F-Fluoromisonidazole (FMISO) PET/CT using a novel non-uniform volumetric modulated arc therapy (VMAT) technique. Methods: Eight nasopharyngeal carcinoma patients treated with conventional uniform VMAT were retrospectively analyzed. For each treatment, the actual conventional uniform VMAT plan with two or more arcs (2-2.5 arcs, total rotation angle < 1000°) was designed with a dose boost to the hypoxic subvolume (total dose, 84 Gy) in the gross tumor volume (GTV) under the guidance of 18F-FMISO PET/CT. Based on the same dataset, experimental single-arc non-uniform VMAT plans were generated with the same dose prescription using customized software tools. Dosimetric parameters, quality assurance, and the efficiency of the treatment delivery were compared between the uniform and non-uniform VMAT plans. Results: To develop the non-uniform VMAT technique, a specific optimization model was successfully established. Both techniques generate high-quality plans with a pass rate > 98% under the 3 mm, 3% criterion. The HTV received doses of 84.1 ± 0.75 Gy and 84.1 ± 1.2 Gy from the uniform and non-uniform VMAT plans, respectively. In terms of target coverage and dose homogeneity, there was no statistically significant difference between actual and experimental plans for each case. However, for critical organs at risk (OAR), including the parotids, oral cavity, and larynx, the dosimetric difference was significant, with better dose sparing from the experimental plans. Regarding plan implementation efficiency, the average machine time was 3.5 minutes for the actual VMAT plans and 3.7 minutes for the experimental non-uniform VMAT plans (p > 0.05). Conclusion: Compared to the conventional VMAT technique, the proposed non-uniform VMAT technique has the potential to produce efficient and safe treatment plans, especially in cases with complicated anatomical structures and demanding dose boosts to subvolumes.
Synthesis of stiffened shells of revolution
NASA Technical Reports Server (NTRS)
Thornton, W. A.
1974-01-01
Computer programs for the synthesis of shells of various configurations were developed. The conditions considered are: (1) uniform shells (mainly cones) using a membrane buckling analysis, (2) completely uniform shells (cones, spheres, toroidal segments) using a linear bending prebuckling analysis, and (3) revision of the second design process to reduce the number of design variables to about 30 by considering piecewise uniform designs. A perturbation formula was derived, which allows exact derivatives of the general buckling load to be computed with little additional computer time.
Metrology: Calibration and measurement processes guidelines
NASA Technical Reports Server (NTRS)
Castrup, Howard T.; Eicke, Woodward G.; Hayes, Jerry L.; Mark, Alexander; Martin, Robert E.; Taylor, James L.
1994-01-01
The guide is intended as a resource to aid engineers and systems contractors in the design, implementation, and operation of metrology, calibration, and measurement systems, and to assist NASA personnel in the uniform evaluation of such systems supplied or operated by contractors. Methodologies and techniques acceptable for fulfilling metrology quality requirements for NASA programs are outlined. The measurement process is covered from a high level through more detailed discussions of key elements within the process. Emphasis is given to the flowdown of project requirements to measurement system requirements, and then to the activities that will provide measurements with defined quality. In addition, innovations and techniques for error analysis, development of statistical measurement process control, optimization of calibration recall systems, and evaluation of measurement uncertainty are presented.
A general scientific information system to support the study of climate-related data
NASA Technical Reports Server (NTRS)
Treinish, L. A.
1984-01-01
The development and use of NASA's Pilot Climate Data System (PCDS) are discussed. The PCDS is used as a focal point for managing and providing access to a large collection of actively used data for the Earth, ocean and atmospheric sciences. The PCDS provides uniform data catalogs, inventories, and access methods for selected NASA and non-NASA data sets. Scientific users can preview the data sets using graphical and statistical methods. The system has evolved from its original purpose as a climate data base management system in response to a national climate program, into an extensive package of capabilities to support many types of data sets from both spaceborne and surface based measurements with flexible data selection and analysis functions.
The Role of Occupational Identification During Post-Merger Integration
Kroon, David P.; Noorderhaven, Niels G.
2016-01-01
Integration processes after mergers are fraught with difficulties, and constitute a main cause of merger failure. This study focuses on the human aspect of post-merger integration, and in particular, on the role of occupational identification. We theorize and empirically demonstrate by means of a survey design that employees’ identification with their occupation is positively related to their willingness to cooperate in the post-merger integration process, over and above the effect of organization members’ organizational identification. This positive effect of occupational identification is stronger for uniformed personnel but attenuates in the course of the integration process. Qualitative interviews further explore and interpret the results from our statistical analysis. Together, these findings have important practical implications and suggest future research directions. PMID:29568214
Digital Reconstruction of 3D Polydisperse Dry Foam
NASA Astrophysics Data System (ADS)
Chieco, A.; Feitosa, K.; Roth, A. E.; Korda, P. T.; Durian, D. J.
2012-02-01
Dry foam is a disordered packing of bubbles that distort into familiar polyhedral shapes. We have implemented a method that uses optical axial tomography to reconstruct the internal structure of a dry foam in three dimensions. The technique consists of taking a series of photographs of the dry foam against a uniformly illuminated background at successive angles. By summing the projections we create images of the foam cross section. Image analysis of the cross sections allows us to locate Plateau borders and vertices. The vertices are then connected according to Plateau's rules to reconstruct the internal structure of the foam. Using this technique we are able to visualize a large number of bubbles of real 3D foams and obtain statistics of faces and edges.
LANDSAT survey of near-shore ice conditions along the Arctic coast of Alaska
NASA Technical Reports Server (NTRS)
Stringer, W. J. (Principal Investigator); Barrett, S. A.
1978-01-01
The author has identified the following significant results. Winter and spring near-shore ice conditions were analyzed for the Beaufort Sea 1973-77, and the Chukchi Sea 1973-76. LANDSAT imagery was utilized to map major ice features related to regional ice morphology. Significant features from individual LANDSAT image maps were combined to yield regional maps of major ice ridge systems for each year of study and maps of flaw lead systems for representative seasons during each year. These regional maps were, in turn, used to prepare seasonal ice morphology maps. These maps showed, in terms of a zonal analysis, regions of statistically uniform ice behavior. The behavioral characteristics of each zone were described in terms of coastal processes and bathymetric configuration.
Clinical Study of the 3D-Master Color System among the Spanish Population.
Gómez-Polo, Cristina; Gómez-Polo, Miguel; Martínez Vázquez de Parga, Juan Antonio; Celemín-Viñuela, Alicia
2017-01-12
To study whether the shades of the 3D-Master System were grouped and represented in the chromatic space according to the three color coordinates of value, chroma, and hue. Maxillary central incisor color was measured on tooth surfaces with the Easyshade Compact spectrophotometer in 1361 participants aged between 16 and 89. The natural (non-bleached) color of the middle thirds was registered in the 3D-Master System nomenclature and in the CIELCh system. Principal component analysis and cluster analysis were applied. 75 colors of the 3D-Master System were found. The statistical analysis revealed the existence of 5 cluster groups. The centroid, the average of the 75 samples, was 74.64 for lightness (L*), 22.87 for chroma (C*), and 88.85 for hue (h*). All of the clusters, except cluster 3, showed statistically significant differences from the centroid for the three color coordinates (p < 0.001). The results of this study indicated that the 75 shades in the 3D-Master System were grouped into 5 clusters following the coordinates L*, C*, and h* obtained from the Vita Easyshade Compact dental spectrophotometer. The shades that composed each cluster did not belong to the same lightness groups. There was no uniform chromatic distribution among the colors of the 3D-Master System. © 2017 by the American College of Prosthodontists.
Spline analysis of the mandible in human subjects with class III malocclusion.
Singh, G D; McNamara, J A; Lozanoff, S
1997-05-01
This study determines the deformations that contribute to a Class III mandibular morphology, employing thin-plate spline (TPS) analysis. A total of 133 lateral cephalographs of prepubertal children of European-American descent with either a Class I molar occlusion or a Class III malocclusion were compared. The cephalographs were traced and checked, and eight homologous landmarks on the mandible were identified and digitized. The datasets were scaled to an equivalent size and subjected to statistical analyses. These tests indicated significant differences between average Class I and Class III mandibular morphologies. When the sample was subdivided into seven age- and sex-matched groups, statistical differences were maintained for each group. TPS analysis indicated that both affine (uniform) and non-affine transformations contribute towards the total spline, and towards the average mandibular morphology at each age group. For non-affine transformations, partial warp 5 had the highest magnitude, indicating large-scale deformations of the mandibular configuration between articulare and pogonion. In contrast, partial warp 1 indicated localized shape changes in the mandibular symphyseal region. It is concluded that large spatial-scale deformations affect the body of the mandible, in combination with localized distortions further anteriorly. These deformations may represent a developmental elongation of the mandibular corpus antero-posteriorly that, allied with symphyseal changes, leads to the appearance of a Class III prognathic mandibular profile.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed the harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise. Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
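In the notation suggested by the abstract (a sketch; the symbol c for the intensity constant is an assumption), the defining object can be written as:

```latex
% A Poisson process on the positive half-line with harmonic intensity
% (c > 0 is an assumed constant):
\[
  \lambda(x) = \frac{c}{x}, \qquad x > 0 .
\]
% Expected number of points in an interval (a, b):
\[
  \mathbb{E}\bigl[N(a,b)\bigr] = \int_a^b \frac{c}{x}\,dx = c\,\ln\frac{b}{a},
  \qquad 0 < a < b ,
\]
% which depends only on the ratio b/a: the scale invariance underlying the
% connections to Pareto statistics, Benford's law, and 1/f noise.
```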
Intensity non-uniformity correction using N3 on 3-T scanners with multichannel phased array coils
Boyes, Richard G.; Gunter, Jeff L.; Frost, Chris; Janke, Andrew L.; Yeatman, Thomas; Hill, Derek L.G.; Bernstein, Matt A.; Thompson, Paul M.; Weiner, Michael W.; Schuff, Norbert; Alexander, Gene E.; Killiany, Ronald J.; DeCarli, Charles; Jack, Clifford R.; Fox, Nick C.
2008-01-01
Measures of structural brain change based on longitudinal MR imaging are increasingly important but can be degraded by intensity non-uniformity. This non-uniformity can be more pronounced at higher field strengths, or when using multichannel receiver coils. We assessed the ability of the non-parametric non-uniform intensity normalization (N3) technique to correct non-uniformity in 72 volumetric brain MR scans from the preparatory phase of the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Normal elderly subjects (n = 18) were scanned on different 3-T scanners with a multichannel phased array receiver coil at baseline, using magnetization prepared rapid gradient echo (MP-RAGE) and spoiled gradient echo (SPGR) pulse sequences, and again 2 weeks later. When applying N3, we used five brain masks of varying accuracy and four spline smoothing distances (d = 50, 100, 150 and 200 mm) to ascertain which combination of parameters optimally reduces the non-uniformity. We used the normalized white matter intensity variance (standard deviation/mean) to ascertain quantitatively the correction for a single scan; we used the variance of the normalized difference image to assess quantitatively the consistency of the correction over time from registered scan pairs. Our results showed statistically significant (p < 0.01) improvement in uniformity for individual scans and reduction in the normalized difference image variance when using masks that identified distinct brain tissue classes, and when using smaller spline smoothing distances (e.g., 50-100 mm) for both MP-RAGE and SPGR pulse sequences. These optimized settings may assist future large-scale studies where 3-T scanners and phased array receiver coils are used, such as ADNI, so that intensity non-uniformity does not influence the power of MR imaging to detect disease progression and the factors that influence it. PMID:18063391
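A minimal sketch of the two quantitative criteria on synthetic data (the exact normalization of the difference image is an assumption here, not necessarily the paper's definition):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-ins: white-matter voxel intensities from one scan, and a
# registered scan pair from the two time points (arbitrary units).
wm = rng.normal(100.0, 5.0, size=10_000)
scan1 = rng.normal(100.0, 5.0, size=(64, 64, 64))
scan2 = scan1 + rng.normal(0.0, 1.0, size=scan1.shape)

# Criterion 1: normalized white-matter intensity variance (per-scan uniformity).
nwv = wm.std(ddof=1) / wm.mean()

# Criterion 2: variance of the normalized difference image (consistency of
# the correction across the registered scan pair).
diff = (scan1 - scan2) / (0.5 * (scan1 + scan2))
print(f"normalized WM variance = {nwv:.4f}, "
      f"diff-image variance = {diff.var(ddof=1):.6f}")
```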
NASA Astrophysics Data System (ADS)
Roesler, E. L.; Bosler, P. A.; Taylor, M.
2016-12-01
The impact of strong extratropical storms on coastal communities is large, and the extent to which storms will change with a warming Arctic is unknown. Understanding storms in reanalysis and in climate models is important for future predictions. We know that the number of detected Arctic storms in reanalysis is sensitive to grid resolution. To understand Arctic storm sensitivity to resolution in climate models, we describe simulations designed to identify and compare Arctic storms at uniform low resolution (1 degree), at uniform high resolution (1/8 degree), and at variable resolution (1 degree to 1/8 degree). High-resolution simulations resolve more fine-scale structure and extremes, such as storms, in the atmosphere than a uniform low-resolution simulation. However, the computational cost of running a globally uniform high-resolution simulation is often prohibitive. The variable resolution tool in atmospheric general circulation models permits regional high-resolution solutions at a fraction of the computational cost. The storms are identified using the open-source search algorithm, Stride Search. The uniform high-resolution simulation has over 50% more storms than the uniform low-resolution and over 25% more storms than the variable resolution simulations. Storm statistics from each of the simulations are presented and compared with reanalysis. We propose variable resolution as a cost-effective means of investigating physics/dynamics coupling in the Arctic environment. Future work will include comparisons with observed storms to investigate tuning parameters for high resolution models. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND2016-7402 A
Changing Pattern of Indian Monsoon Extremes: Global and Local Factors
NASA Astrophysics Data System (ADS)
Ghosh, Subimal; Shastri, Hiteshri; Pathak, Amey; Paul, Supantha
2017-04-01
Indian Summer Monsoon Rainfall (ISMR) extremes have remained a major topic of discussion in global change and hydro-climatology over the last decade. This reflects multiple, conflicting conclusions on the changing pattern of extremes, along with poor understanding of the processes at global and local scales associated with monsoon extremes. At a spatially aggregate scale, when the number of extremes in the grids is summed, a statistically significant increasing trend is observed for both Central India (Goswami et al., 2006) and all India (Rajeevan et al., 2008). However, such a result over Central India does not pass a field significance test of increase and no decrease (Krishnamurthy et al., 2009). Statistically rigorous extreme value analysis that deals with the tail of the distribution reveals a spatially non-uniform trend of extremes over India (Ghosh et al., 2012). This results in a statistically significant increasing trend of spatial variability. Such an increase of spatial variability points to the importance of local factors such as deforestation and urbanization. We hypothesize that the increase of the spatial average of extremes is associated with the increase of events occurring over large regions, while the increase in spatial variability is attributable to local factors. A Lagrangian approach based on a dynamic recycling model reveals that the major contributor of moisture to widespread extremes is the Western Indian Ocean, while the land surface also contributes around 25-30% of moisture during extremes in Central India. We further test the impacts of local urbanization on extremes and find the impacts are more visible over West Central, Southern, and North East India. Regional atmospheric simulations coupled with an Urban Canopy Model (UCM) show that urbanization intensifies extremes in city areas, but not uniformly over the whole city: the intensification occurs over specific pockets of the urban region, resulting in an increase in spatial variability even within the city. This also points to the need to set up multiple weather stations over the city at a finer resolution for better understanding of urban extremes. We conclude that the conventional method of considering large-scale factors is not sufficient for analysing monsoon extremes, and their characterization needs a blending of both global and local factors. References: Ghosh, S., Das, D., Kao, S-C. & Ganguly, A. R. Lack of uniform trends but increasing spatial variability in observed Indian rainfall extremes. Nature Clim. Change 2, 86-91 (2012). Goswami, B. N., Venugopal, V., Sengupta, D., Madhusoodanan, M. S. & Xavier, P. K. Increasing trend of extreme rain events over India in a warming environment. Science 314, 1442-1445 (2006). Krishnamurthy, C. K. B., Lall, U. & Kwon, H-H. Changing frequency and intensity of rainfall extremes over India from 1951 to 2003. J. Clim. 22, 4737-4746 (2009). Rajeevan, M., Bhate, J. & Jaswal, A. K. Analysis of variability and trends of extreme rainfall events over India using 104 years of gridded daily rainfall data. Geophys. Res. Lett. 35, L18707 (2008).
42 CFR 447.253 - Other requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... intermediate care services) under conditions similar to those described in section 1861(v)(1)(G) of the Act... agency determines appropriate, of payment rates. (f) Uniform cost reporting. The Medicaid agency must... Medicaid agency must provide for periodic audits of the financial and statistical records of participating...
44 CFR 13.42 - Retention and access requirements for records.
Code of Federal Regulations, 2014 CFR
2014-10-01
... AGENCY, DEPARTMENT OF HOMELAND SECURITY GENERAL UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND... to all financial and programmatic records, supporting documents, statistical records, and other... computations of the rate at which a particular group of costs is chargeable (such as computer usage chargeback...
Mosheiff, Noga; Agmon, Haggai; Moriel, Avraham; Burak, Yoram
2017-06-01
Grid cells in the entorhinal cortex encode the position of an animal in its environment with spatially periodic tuning curves with different periodicities. Recent experiments established that these cells are functionally organized in discrete modules with uniform grid spacing. Here we develop a theory for efficient coding of position, which takes into account the temporal statistics of the animal's motion. The theory predicts a sharp decrease of module population sizes with grid spacing, in agreement with the trend seen in the experimental data. We identify a simple scheme for readout of the grid cell code by neural circuitry, that can match in accuracy the optimal Bayesian decoder. This readout scheme requires persistence over different timescales, depending on the grid cell module. Thus, we propose that the brain may employ an efficient representation of position which takes advantage of the spatiotemporal statistics of the encoded variable, in similarity to the principles that govern early sensory processing.
Two Different Views on the World Around Us: The World of Uniformity versus Diversity.
Kwon, JaeHwan; Nayakankuppam, Dhananjay
2016-01-01
We propose that when individuals believe in fixed traits of personality (entity theorists), they are likely to expect a world of "uniformity." As such, they readily infer a population statistic from a small sample of data with confidence. In contrast, individuals who believe in malleable traits of personality (incremental theorists) are likely to presume a world of "diversity," such that they hesitate to infer a population statistic from a similarly sized sample. In four laboratory experiments, we found that compared to incremental theorists, entity theorists estimated a population mean from a sample with a greater level of confidence (Studies 1a and 1b), expected more homogeneity among the entities within a population (Study 2), and perceived an extreme value to be more indicative of an outlier (Study 3). These results suggest that individuals are likely to use their implicit self-theory orientations (entity theory versus incremental theory) to see a population as composed of either homogeneous or heterogeneous entities.
Fields, Emma C; Melvani, Rakhi; Hajdok, George; D'Souza, David; Jones, Bernard; Stuhr, Kelly; Diot, Quentin; Fisher, Christine M; Mukhopadhyay, Nitai; Todor, Dorin
2017-09-01
When brachytherapy doses are reported or added, the biologically effective dose (BED) of the minimum dose covering 90% of the volume (D90) is used as if the dose were delivered uniformly to the target. Unlike BED(D90), equivalent uniform BED (EUBED) and generalized biologically equivalent uniform dose (gBEUD) are quantities that integrate dose inhomogeneity. Here we compared BED(D90) with EUBED/gBEUD in 3 settings: (1) 2 sites using tandem and ovoid (T&O) applicators but different styles of implants; (2) 2 sites using different devices, T&O and tandem and ring (T&R), and different styles; and (3) the same site using T&O and T&R with the same style. EUBED and gBEUD were calculated for 260 fractions from 3 institutions using BED (α/β = 10 Gy). EUBED uses an extra parameter α, with smaller values associated with radioresistant tumors. Similarly, gBEUD uses a parameter a, which places variable emphasis on hot/cold spots. Distributions were compared using the Kolmogorov-Smirnov test at the 5% significance level. For the 2 sites using T&O, the distribution of EUBED-BED(D90) was not different for values of α = 0.5 to 0.3 Gy⁻¹ but was statistically different for values of α = 0.15 to 0.05 Gy⁻¹ (P=.01, .002). The mean percentage differences between EUBED and BED(D90) ranged from 20% to 100% for α = 0.5 Gy⁻¹ to 0.05 Gy⁻¹. Using gBEUD-BED(D90), the P values indicate the distributions to be similar for a = -10 but significantly different for other values of a (-5, -1, 1). Between sites, and at the same site using T&O versus T&R, the distributions were statistically different with EUBED/gBEUD irrespective of the parameter values at which these quantities were computed. These differences indicate that EUBED/gBEUD capture differences between the techniques and applicators that are not detected by BED(D90). BED(D90) is unable to distinguish between plans created by different devices or optimized differently, whereas EUBED/gBEUD distinguish between dose distributions created by different devices and styles of implant and planning. This discrepancy is particularly important with the increased use of magnetic resonance imaging and hybrid devices, whereby one has the ability to create dose distributions that depart significantly from the classic pear shape.
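For readers unfamiliar with these quantities, the sketch below computes EUBED and gBEUD from a toy differential DVH, using the standard textbook definitions (EUBED from the volume-averaged survival fraction, Niemierko-style; gBEUD as a power-law generalized mean). The DVH values are invented, and the paper's exact conventions may differ.

```python
import numpy as np

# Hedged sketch of the dose-summary quantities compared in the study.
# The voxel BED values and volume fractions below are made up.

bed = np.array([60.0, 75.0, 90.0, 110.0, 140.0])  # voxel BED values (Gy)
v = np.array([0.10, 0.25, 0.30, 0.25, 0.10])      # volume fractions (sum = 1)

def eubed(bed, v, alpha):
    """Equivalent uniform BED for radiosensitivity alpha (Gy^-1)."""
    return -np.log(np.sum(v * np.exp(-alpha * bed))) / alpha

def gbeud(bed, v, a):
    """Generalized biologically equivalent uniform dose; a < 0 weights cold spots."""
    return np.sum(v * bed**a) ** (1.0 / a)

for alpha in (0.5, 0.3, 0.15, 0.05):
    print(f"alpha={alpha:4.2f} Gy^-1  EUBED={eubed(bed, v, alpha):7.2f} Gy")
for a in (-10, -5, -1, 1):
    print(f"a={a:3d}            gBEUD={gbeud(bed, v, float(a)):7.2f} Gy")
```

Note how small α (radioresistant) and negative a both pull the summary toward the cold spot, which is exactly why these quantities can separate implant styles that share the same BED(D90).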
2015-03-26
... universal definition" (Evans & Lindsay, 1996). Heizer and Render (2010) argue that several definitions of this term are user-based, meaning that quality... for example, "really good ice cream has high butterfat levels" (Heizer & Render, 2010). Garvin, in his Competing in Eight Dimensions of Quality... (Montgomery, 2005). As for definition purposes, the concept adopted by this research was provided by Heizer and Render (2010), for whom Statistical Process...
NASA Astrophysics Data System (ADS)
Zhao, Libo; Xia, Yong; Hebibul, Rahman; Wang, Jiuhong; Zhou, Xiangyang; Hu, Yingjie; Li, Zhikang; Luo, Guoxi; Zhao, Yulong; Jiang, Zhuangde
2018-03-01
This paper presents an experimental study using image processing to investigate the width and width uniformity of sub-micrometer polyethylene oxide (PEO) lines fabricated by the near-field electrospinning (NFES) technique. An adaptive thresholding method was developed to determine the optimal gray values for accurately extracting the profiles of printed lines from the original optical images, and its feasibility was demonstrated. The proposed thresholding method exploits the statistical properties of the image and suppresses halo-induced errors. The triangular method and the relative standard deviation (RSD) were used to calculate line width and width uniformity, respectively. Based on these image processing methods, the effects of process parameters, including substrate speed (v), applied voltage (U), nozzle-to-collector distance (H), and syringe pump flow rate (Q), on the width and width uniformity of printed lines were discussed. The results should help promote the NFES technique for fabricating high-resolution micro- and sub-micrometer lines, and inform optical image processing at the sub-micrometer scale.
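A minimal sketch of the two measures described, under simplifying assumptions: a fixed global threshold stands in for the paper's adaptive method, and the image of the printed line is synthetic.

```python
import numpy as np

# Sketch: threshold a grayscale image of a printed line, measure the
# line width row by row, and report the relative standard deviation
# (RSD) as the width-uniformity measure. Image and threshold are
# illustrative assumptions.

rng = np.random.default_rng(1)
h, w = 200, 400
img = np.full((h, w), 40.0)                     # dark background
center, half_width = w // 2, 25
for row in range(h):
    jitter = int(3 * np.sin(row / 15.0))        # slowly varying width
    img[row, center - half_width - jitter : center + half_width + jitter] = 200.0
img += rng.normal(0, 5, img.shape)              # sensor noise

binary = img > 120.0                            # threshold (assumed optimal)
widths = binary.sum(axis=1).astype(float)       # line width per row (pixels)

mean_w = widths.mean()
rsd = 100.0 * widths.std(ddof=1) / mean_w       # width uniformity (%)
print(f"mean width = {mean_w:.1f} px, RSD = {rsd:.2f}%")
```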
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, J; Christianson, O; Samei, E
Purpose: Flood-field uniformity evaluation is an essential element in the assessment of nuclear medicine (NM) gamma cameras. It serves as the central element of the quality control (QC) program, acquired and analyzed on a daily basis prior to clinical imaging. Uniformity images are traditionally analyzed using pixel value-based metrics, which often fail to capture subtle structure and patterns caused by changes in gamma camera performance, requiring additional visual inspection that is subjective and time demanding. The goal of this project was to develop and implement a robust QC metrology for NM that is effective in identifying non-uniformity issues, reports issues in a timely manner for efficient correction prior to clinical involvement, and is incorporated into an automated, effortless workflow, and to characterize the program over a two-year period. Methods: A new quantitative uniformity analysis metric was developed based on 2D noise power spectrum metrology and confirmed by expert observer visual analysis. The metric, termed the Structured Noise Index (SNI), was then integrated into an automated program to analyze, archive, and report on daily NM QC uniformity images. The effectiveness of the program was evaluated over a period of 2 years. Results: The SNI metric successfully identified visually apparent non-uniformities overlooked by pixel value-based analysis methods. Implementation of the program has resulted in non-uniformity identification in about 12% of daily flood images. In addition, owing to the vigilance of staff response, the percentage of days exceeding the trigger value shows a decline over time. Conclusion: The SNI provides a robust quantification of gamma camera uniformity performance. It operates seamlessly across a fleet of multiple camera models. The automated process provides an effective workflow among the physicist, technologist, and clinical engineer. The reliability of this process has made it the preferred platform for NM uniformity analysis.
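The sketch below illustrates only the 2D noise power spectrum step underlying an SNI-style analysis; the published SNI additionally applies a visual-response filter that is not reproduced here, and the flood image is simulated.

```python
import numpy as np

# Sketch: 2D noise power spectrum (NPS) of a flood image. For pure
# Poisson noise the NPS floor equals the mean count, so structured
# low-frequency power shows up as a ratio above 1. All numbers assumed.

rng = np.random.default_rng(2)
n = 256
flood = rng.poisson(1000.0, (n, n)).astype(float)         # uniform flood
flood += 15.0 * np.sin(2 * np.pi * np.arange(n) / 32.0)   # structured artifact

detrended = flood - flood.mean()
nps = np.abs(np.fft.fftshift(np.fft.fft2(detrended)))**2 / flood.size

# compare low-frequency power (excluding DC) with the Poisson floor
k = np.fft.fftshift(np.fft.fftfreq(n))
kx, ky = np.meshgrid(k, k)
low = (np.hypot(kx, ky) < 0.05) & (np.hypot(kx, ky) > 0)
structure_index = nps[low].mean() / flood.mean()          # ~1 if unstructured
print(f"low-frequency power / Poisson floor = {structure_index:.2f}")
```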
Statistical variability and confidence intervals for planar dose QA pass rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher
Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high-density dose planes were 2%-5% higher on average than the respective %/DTA composite analysis (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were 2%-12% lower on average than with global maximum normalization (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors as well. Conclusions: Dose plane QA analysis can be greatly affected by the choice of calculation metric and user-defined parameters, and so all pass rates should be reported with a complete description of the calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density.
Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
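One standard binomial interval consistent with the model the authors propose is the Wilson score interval, sketched below; the detector count and observed pass rate are example values, not data from the paper.

```python
import math

# Sketch: 95% Wilson score interval for a pass rate measured on a
# low-density array with n point detectors in the analyzed region.

def wilson_interval(passed, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p_hat = passed / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

n_detectors = 400                # e.g., points in-field at 1 detector/cm^2
observed_rate = 0.93
lo, hi = wilson_interval(int(observed_rate * n_detectors), n_detectors)
print(f"pass rate {observed_rate:.1%}, 95% CI: {lo:.1%} to {hi:.1%}")
```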
Pohl, Lydia; Kölbl, Angelika; Werner, Florian; Mueller, Carsten W; Höschen, Carmen; Häusler, Werner; Kögel-Knabner, Ingrid
2018-04-30
Aluminium (Al)-substituted goethite is ubiquitous in soils and sediments. The extent of Al-substitution affects the physicochemical properties of the mineral and influences its macroscale properties. Bulk analysis only provides total Al/Fe ratios, without providing information on the Al-substitution of single minerals. Here, we demonstrate that nanoscale secondary ion mass spectrometry (NanoSIMS) enables the precise determination of Al content in single minerals, while simultaneously visualising the variation of the Al/Fe ratio. Al-substituted goethite samples were synthesized with increasing Al concentrations of 0.1%, 3%, and 7% and analysed by NanoSIMS in combination with established bulk spectroscopic methods (XRD, FTIR, Mössbauer spectroscopy). The high spatial resolution (50-150 nm) of NanoSIMS is accompanied by a high number of single-point measurements. We statistically evaluated the Al/Fe ratios derived from NanoSIMS, while maintaining the spatial information and reassigning it to its original localization. XRD analyses confirmed the increasing concentration of incorporated Al within the goethite structure. Mössbauer spectroscopy revealed that 11% of the goethite samples generated at high Al concentrations consisted of hematite. The NanoSIMS data show that the Al/Fe ratios are in agreement with bulk data derived from total digestion and exhibit small spatial variability between single-point measurements. More importantly, statistical analysis and reassignment of single-point measurements allowed us to identify distinct spots with significantly higher or lower Al/Fe ratios. NanoSIMS measurements confirmed the capacity to produce images indicating the uniform increase of Al concentration in goethite. Using a combination of statistical analysis and information from complementary spectroscopic techniques (XRD, FTIR, and Mössbauer spectroscopy), we were further able to identify spots with lower Al/Fe ratios as hematite.
Rapid learning of visual ensembles.
Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni
2017-02-01
We recently demonstrated that observers are capable of encoding not only summary statistics, such as mean and variance of stimulus ensembles, but also the shape of the ensembles. Here, for the first time, we show the learning dynamics of this process, investigate the possible priors for the distribution shape, and demonstrate that observers are able to learn more complex distributions, such as bimodal ones. We used speeding and slowing of response times between trials (intertrial priming) in visual search for an oddly oriented line to assess internal models of distractor distributions. Experiment 1 demonstrates that two repetitions are sufficient for enabling learning of the shape of uniform distractor distributions. In Experiment 2, we compared Gaussian and uniform distractor distributions, finding that following only two repetitions Gaussian distributions are represented differently than uniform ones. Experiment 3 further showed that when distractor distributions are bimodal (with a 30° distance between two uniform intervals), observers initially treat them as uniform, and only with further repetitions do they begin to treat the distributions as bimodal. In sum, observers do not have strong initial priors for distribution shapes and quickly learn simple ones but have the ability to adjust their representations to more complex feature distributions as information accumulates with further repetitions of the same distractor distribution.
14 CFR Sec. 19-2 - Maintenance of data.
Code of Federal Regulations, 2011 CFR
2011-01-01
... operations are reported by the air carrier in operational control of the aircraft. The traffic moving under... shall maintain its operating statistics, covering the movement of traffic in accordance with the uniform classifications prescribed. Codes are prescribed for each operating element and service class. All traffic...
20 CFR 437.42 - Retention and access requirements for records.
Code of Federal Regulations, 2012 CFR
2012-04-01
.... 437.42 Section 437.42 Employees' Benefits SOCIAL SECURITY ADMINISTRATION UNIFORM ADMINISTRATIVE..., statistical records, and other records of grantees or subgrantees that are: (i) Required to be maintained by... (such as computer usage chargeback rates or composite fringe benefit rates). (i) If submitted for...
7 CFR 1126.62 - Announcement of producer prices.
Code of Federal Regulations, 2014 CFR
2014-01-01
... following prices and information: (a) The producer price differential; (b) The protein price; (c) The nonfat...; (g) The average butterfat, protein, nonfat solids, and other solids content of producer milk; and (h) The statistical uniform price for milk containing 3.5 percent butterfat, computed by combining the...
7 CFR 1032.62 - Announcement of producer prices.
Code of Federal Regulations, 2010 CFR
2010-01-01
... publicly the following prices and information: (a) The producer price differential; (b) The protein price... cell adjustment rate; (g) The average butterfat, protein, nonfat solids, and other solids content of producer milk; and (h) The statistical uniform price for milk containing 3.5 percent butterfat, computed by...
7 CFR 1030.62 - Announcement of producer prices.
Code of Federal Regulations, 2010 CFR
2010-01-01
... publicly the following prices and information: (a) The producer price differential; (b) The protein price... cell adjustment rate; (g) The average butterfat, nonfat solids, protein and other solids content of producer milk; and (h) The statistical uniform price for milk containing 3.5 percent butterfat, computed by...
7 CFR 1033.62 - Announcement of producer prices.
Code of Federal Regulations, 2012 CFR
2012-01-01
... publicly the following prices and information: (a) The producer price differential; (b) The protein price... cell adjustment rate; (g) The average butterfat, protein, nonfat solids, and other solids content of producer milk; and (h) The statistical uniform price for milk containing 3.5 percent butterfat, computed by...
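The truncated excerpts above all reference the same computation. To our understanding of the Federal Milk Marketing Orders, the statistical uniform price at the 3.5% butterfat test is the announced Class III price plus the producer price differential; the sketch below uses invented numbers purely to illustrate the arithmetic.

```python
# Tiny illustration of the computation referenced in these excerpts
# (assumed form: statistical uniform price = Class III price + PPD,
# at the 3.5% butterfat test; dollar values are made up).

class_iii_price = 17.50             # $/cwt, announced Class III price
producer_price_differential = 0.85  # $/cwt, announced PPD

statistical_uniform_price = class_iii_price + producer_price_differential
print(f"statistical uniform price: ${statistical_uniform_price:.2f}/cwt")
```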
REVIEW OF THE ATTRIBUTES AND PERFORMANCE OF SIX URBAN DIFFUSION MODELS
The American Meteorological Society conducted a scientific review of a set of six urban diffusion models. TRC Environmental Consultants, Inc. calculated and tabulated a uniform set of statistics for all the models. The report consists of a summary and copies of the three independ...
Saddle-shaped mitral valve annuloplasty rings experience lower forces compared with flat rings.
Jensen, Morten O; Jensen, Henrik; Smerup, Morten; Levine, Robert A; Yoganathan, Ajit P; Nygaard, Hans; Hasenkam, J Michael; Nielsen, Sten L
2008-09-30
New insight into the 3D dynamic behavior of the mitral valve has prompted a reevaluation of annuloplasty ring designs. Force balance analysis indicates correlation between annulus forces and stresses in leaflets and chords. Improving this stress distribution can intuitively enhance the durability of mitral valve repair. We tested the hypothesis that saddle-shaped annuloplasty rings have superior uniform systolic force distribution compared with a nonuniform force distribution in flat annuloplasty rings. Sixteen 80-kg pigs had a flat (n=8) or saddle-shaped (n=8) mitral annuloplasty ring implanted. Mitral annulus 3D dynamic geometry was obtained with sonomicrometry before ring insertion. Strain gauges mounted on dedicated D-shaped rigid flat and saddle-shaped annuloplasty rings provided the intraoperative force distribution perpendicular to the annular plane. Average systolic annular height to commissural width ratio before ring implantation was 14.0%+/-1.6%. After flat and saddle shaped ring implantation, the annulus was fixed in the diastolic (9.0%+/-1.0%) and systolic (14.3%+/-1.3%) configuration, respectively (P<0.01). Force accumulation was seen from the anterior (0.72N+/-0.14N) and commissural annular segments (average 1.38N+/-0.27N) of the flat rings. In these segments, the difference between the 2 types of rings was statistically significant (P<0.05). The saddle-shaped annuloplasty rings did not experience forces statistically significantly larger than zero in any annular segments. Saddle-shaped annuloplasty rings provide superior uniform annular force distribution compared to flat rings and appear to represent a configuration that minimizes out-of-plane forces that could potentially be transmitted to leaflets and chords. This may have important implications for annuloplasty ring selections.
Noordin, Mohamed I; Chung, L Y
2004-01-01
This study adopts Differential Scanning Calorimetry (DSC) to analyze the thermal properties of samples (2.5-4.0 mg) from the tip, middle, and base sections of individual paracetamol suppositories, which were sampled carefully using a stainless steel scalpel. The contents of paracetamol present in the samples obtained from these sections were determined from the enthalpies of fusion of paracetamol and expressed as % w/w paracetamol to allow comparison of the amount of paracetamol found in each section. The tip, middle, and base sections contained 10.1+/-0.2%, 10.1+/-0.2%, and 10.3+/-0.2% w/w paracetamol, respectively, and are statistically similar (one-way ANOVA; p>0.05). This indicates that the preparation technique adopted produces high-quality suppositories in terms of content uniformity. The contents of paracetamol in the 120-mg paracetamol suppositories determined by DSC and UV spectrophotometry were statistically equivalent (Student's t-test; p>0.05), 120.8+/-2.6 mg and 120.8+/-1.5 mg, respectively, making DSC a clear alternative method for the measurement of drug content in suppositories. The main advantages of the method are that samples of only 2.5-4.0 mg are required and the procedure does not require an extraction step, which allows the analysis to be completed rapidly. In addition, it is highly sensitive and reproducible, with a lower detection limit of 4.0% w/w paracetamol, about 2.5 times lower than the content of paracetamol (10% w/w) present in our 120-mg paracetamol suppositories and in commercial paracetamol suppositories, which contained about 125 mg paracetamol. Therefore, this method is particularly suited for the determination of content uniformity in individual suppositories in quality control (QC) and in-process quality control (PQC).
Dressing Diversity: Politics of Difference and the Case of School Uniforms
ERIC Educational Resources Information Center
Deane, Samantha
2015-01-01
Through an analysis of school uniform policies and theories of social justice, Samantha Deane argues that school uniforms and their foregoing policies assume that confronting strangers--an imperative of living in a democratic polity--is something that requires seeing sameness instead of recognizing difference. Imbuing schooling with a directive…
Regional-scale analysis of extreme precipitation from short and fragmented records
NASA Astrophysics Data System (ADS)
Libertino, Andrea; Allamano, Paola; Laio, Francesco; Claps, Pierluigi
2018-02-01
The rain gauge is the oldest and most accurate instrument for rainfall measurement, able to provide long series of reliable data. However, rain gauge records are often plagued by gaps, spatio-temporal discontinuities, and inhomogeneities that can affect their suitability for a statistical assessment of the characteristics of extreme rainfall. Furthermore, the need to discard the shorter series to obtain robust estimates leads to ignoring a significant amount of information that can be essential, especially when large return-period estimates are sought. This work describes a robust statistical framework for dealing with uneven and fragmented rainfall records over a regional spatial domain. The proposed technique, named "patched kriging", allows one to exploit all the information available from the recorded series, independently of their length, to provide extreme rainfall estimates in ungauged areas. The methodology involves the sequential application of the ordinary kriging equations, producing a homogeneous dataset of synthetic series with uniform lengths. In this way, the errors inherent in any regional statistical estimation can be easily represented in the spatial domain and, possibly, corrected. Furthermore, the homogeneity of the obtained series provides robustness toward local artefacts during the parameter-estimation phase. The application to a case study in north-western Italy demonstrates the potential of the methodology and provides a significant basis for discussing its advantages over previous techniques.
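The building block that patched kriging applies sequentially is a single ordinary-kriging estimate; a minimal sketch follows. The gauge coordinates, values, and exponential variogram parameters are all invented.

```python
import numpy as np

# Sketch: one ordinary-kriging estimate at an ungauged target from a
# handful of gauges, solving the standard kriging system.

def variogram(h, sill=1.0, rng_=50.0, nugget=0.05):
    """Exponential semivariogram (illustrative parameters)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_))

xy = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0], [45.0, 35.0]])
z = np.array([42.0, 55.0, 48.0, 61.0])        # annual maxima (arbitrary units)
target = np.array([25.0, 20.0])

n = len(z)
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
A = np.ones((n + 1, n + 1))
A[:n, :n] = variogram(d)
np.fill_diagonal(A[:n, :n], 0.0)              # gamma(0) = 0 by definition
A[n, n] = 0.0                                 # Lagrange-multiplier corner
b = np.ones(n + 1)
b[:n] = variogram(np.linalg.norm(xy - target, axis=1))

sol = np.linalg.solve(A, b)
weights, mu = sol[:n], sol[n]                 # weights sum to 1
z_hat = weights @ z
kriging_var = weights @ b[:n] + mu            # ordinary kriging variance
print(f"estimate = {z_hat:.1f}, kriging variance = {kriging_var:.3f}")
```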
Human Fear Chemosignaling: Evidence from a Meta-Analysis.
de Groot, Jasper H B; Smeets, Monique A M
2017-10-01
Alarm pheromones are widely used in the animal kingdom. Notably, there are 26 published studies (N = 1652) highlighting a human capacity to communicate fear, stress, and anxiety via body odor from one person (66% males) to another (69% females). The question is whether the findings of this literature reflect a true effect, and what the average effect size is. These questions were answered by combining traditional meta-analysis with novel meta-analytical tools, p-curve analysis and p-uniform, techniques that can indicate whether findings are likely to reflect a true effect based on the distribution of P-values. A traditional random-effects meta-analysis yielded a small-to-moderate effect size (Hedges' g: 0.36, 95% CI: 0.31-0.41), p-curve analysis showed evidence diagnostic of a true effect (ps < 0.0001), and there was no evidence for publication bias. This meta-analysis did not assess the internal validity of the included studies; yet, the combined results illustrate the statistical robustness of a field in human olfaction dealing with the human capacity to communicate certain emotions (fear, stress, anxiety) via body odor.
2012-01-01
Background: When outcomes are binary, the c-statistic (equivalent to the area under the receiver operating characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods: An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal, or uniform distribution in the combined sample of those with and without the condition. Results: Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. Conclusions: The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
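A Monte Carlo check of the equal-variance binormal case is easy to write down. The closed form used below, c = Phi(beta * sigma / sqrt(2)), is our reconstruction of the "product of the standard deviation and the log-odds ratio" dependence under the stated assumptions; all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm, rankdata

# Sketch: with X ~ N(mu0, s^2) in those without and N(mu1, s^2) in
# those with the condition, the logistic log-odds ratio is
# beta = (mu1 - mu0) / s^2, and (reconstructed form, assumed)
# c = Phi(beta * s / sqrt(2)).

rng = np.random.default_rng(3)
mu0, mu1, s, n = 0.0, 1.0, 1.5, 100_000
x0 = rng.normal(mu0, s, n)          # condition absent
x1 = rng.normal(mu1, s, n)          # condition present

# empirical c-statistic = P(X1 > X0) via the Mann-Whitney rank form
ranks = rankdata(np.concatenate([x0, x1]))
auc_emp = (ranks[n:].sum() - n * (n + 1) / 2) / (n * n)

beta = (mu1 - mu0) / s**2                     # log-odds ratio
auc_theory = norm.cdf(beta * s / np.sqrt(2))  # binormal prediction
print(f"empirical c = {auc_emp:.4f}, predicted c = {auc_theory:.4f}")
```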
The Galactic Isotropic γ-ray Background and Implications for Dark Matter
NASA Astrophysics Data System (ADS)
Campbell, Sheldon S.; Kwa, Anna; Kaplinghat, Manoj
2018-06-01
We present an analysis of the radial angular profile of the galacto-isotropic (GI) γ-ray flux: the statistically uniform flux in angular annuli centred on the Galactic centre. Two different approaches are used to measure the GI flux profile in 85 months of Fermi-LAT data: the BDS statistical method, which identifies spatial correlations, and a new Poisson ordered-pixel method, which identifies non-Poisson contributions. Both methods produce similar GI flux profiles. The GI flux profile is well described by an existing model of bremsstrahlung, π0 production, inverse Compton scattering, and the isotropic background. Discrepancies with data in our full-sky model are not present in the GI component, and are therefore due to mis-modelling of the non-GI emission. Dark matter annihilation constraints based solely on the observed GI profile are close to the thermal WIMP cross section below 100 GeV, for fixed models of the dark matter density profile and astrophysical γ-ray foregrounds. Refined measurements of the GI profile are expected to improve these constraints by a factor of a few.
Improving Non-Destructive Concrete Strength Tests Using Support Vector Machines
Shih, Yi-Fan; Wang, Yu-Ren; Lin, Kuo-Liang; Chen, Chin-Wen
2015-01-01
Non-destructive testing (NDT) methods are important alternatives when destructive tests are not feasible to examine the in situ concrete properties without damaging the structure. The rebound hammer test and the ultrasonic pulse velocity test are two popular NDT methods to examine the properties of concrete. The rebound of the hammer depends on the hardness of the test specimen and ultrasonic pulse travelling speed is related to density, uniformity, and homogeneity of the specimen. Both of these two methods have been adopted to estimate the concrete compressive strength. Statistical analysis has been implemented to establish the relationship between hammer rebound values/ultrasonic pulse velocities and concrete compressive strength. However, the estimated results can be unreliable. As a result, this research proposes an Artificial Intelligence model using support vector machines (SVMs) for the estimation. Data from 95 cylinder concrete samples are collected to develop and validate the model. The results show that combined NDT methods (also known as SonReb method) yield better estimations than single NDT methods. The results also show that the SVMs model is more accurate than the statistical regression model. PMID:28793627
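A hedged sketch of the combined (SonReb-style) estimation using scikit-learn's SVR follows. The paper's 95 cylinder samples are not public, so the synthetic data merely mimic the qualitative trends, and the hyperparameters are illustrative rather than tuned values from the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Sketch: predict compressive strength from rebound number and
# ultrasonic pulse velocity with an RBF-kernel support vector machine.

rng = np.random.default_rng(4)
n = 95
rebound = rng.uniform(20, 50, n)                 # rebound hammer number
upv = rng.uniform(3.5, 5.0, n)                   # pulse velocity (km/s)
strength = 0.8 * rebound + 12.0 * upv - 40.0 + rng.normal(0, 2.5, n)  # MPa

X = np.column_stack([rebound, upv])              # combined (SonReb) features
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
scores = cross_val_score(model, X, strength, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```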
Statistical analysis of texture in trunk images for biometric identification of tree species.
Bressane, Adriano; Roveda, José A F; Martins, Antônio C G
2015-04-01
The identification of tree species is a key step for sustainable management plans of forest resources, as well as for several other applications based on such surveys. However, currently available techniques depend on the presence of tree structures, such as flowers, fruits, and leaves, limiting the identification process to certain periods of the year. Therefore, this article introduces a study on the application of statistical parameters for texture classification of tree trunk images. For that, 540 samples from five Brazilian native deciduous species were acquired, and measures of entropy, uniformity, smoothness, asymmetry (third moment), mean, and standard deviation were obtained from the presented textures. Using a decision tree, a biometric species identification system was constructed, yielding a 0.84 average precision rate for species classification, with 0.83 accuracy and 0.79 agreement. Thus, the use of texture present in trunk images can represent an important advance in tree identification, since the limitations of the current techniques can be overcome.
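The six descriptors named above are the standard first-order (histogram-based) texture measures; a sketch with their usual definitions follows, run on a simulated image in place of a real bark photograph.

```python
import numpy as np

# Sketch: histogram-based texture descriptors (standard definitions):
# mean, standard deviation, smoothness, third moment, uniformity, entropy.

rng = np.random.default_rng(5)
img = np.clip(rng.normal(120, 30, (256, 256)), 0, 255).astype(np.uint8)

p = np.bincount(img.ravel(), minlength=256) / img.size   # gray-level pmf
levels = np.arange(256, dtype=float)

mean = (levels * p).sum()
var = ((levels - mean) ** 2 * p).sum()
descriptors = {
    "mean": mean,
    "std": np.sqrt(var),
    "smoothness": 1.0 - 1.0 / (1.0 + var / 255.0**2),   # normalized variance
    "third_moment": (((levels - mean) / 255.0) ** 3 * p).sum(),
    "uniformity": (p ** 2).sum(),
    "entropy": -(p[p > 0] * np.log2(p[p > 0])).sum(),
}
for name, value in descriptors.items():
    print(f"{name:12s} {value:10.4f}")
```

These six numbers per image would then feed the decision-tree classifier described in the abstract.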
ERIC Educational Resources Information Center
Casoli-Reardon, Michele; Rappaport, Nancy; Kulick, Deborah; Reinfeld, Sarah
2012-01-01
School truancy--defined by a student's refusal to attend part or all of the school day, along with a defined number of unexcused absences--is an increasingly frustrating and complex problem for teachers and school administrators. Although statistics on the prevalence of truancy in the United States do not exist due to lack of uniformity among…
20 CFR 435.53 - Retention and access requirements for records.
Code of Federal Regulations, 2013 CFR
2013-04-01
.... 435.53 Section 435.53 Employees' Benefits SOCIAL SECURITY ADMINISTRATION UNIFORM ADMINISTRATIVE..., statistical records, and all other records pertinent to an award must be retained for a period of three years... computations of the rate at which a particular group of costs is chargeable (such as computer usage chargeback...
7 CFR 1124.62 - Announcement of producer prices.
Code of Federal Regulations, 2014 CFR
2014-01-01
... publicly the following prices and information: (a) The producer price differential; (b) The protein price... butterfat, protein, nonfat solids, and other solids content of producer milk; and (g) The statistical uniform price for milk containing 3.5 percent butterfat, computed by combining the Class III price and the...
Content Analysis of Measures for Identification of Elder Abuse.
ERIC Educational Resources Information Center
Sengstock, Mary C.; And Others
Measures designed to detect elder abuse lack uniformity as a result of having been designed in isolation. To develop and test a uniform index for the identification of elder abuse victims, an analysis of existing abuse identification instruments was conducted. Initially, seven elder abuse identification measures were content analyzed, resulting in…
Aneurysm permeability following coil embolization: packing density and coil distribution
Chueh, Ju-Yu; Vedantham, Srinivasan; Wakhloo, Ajay K; Carniato, Sarena L; Puri, Ajit S; Bzura, Conrad; Coffin, Spencer; Bogdanov, Alexei A; Gounis, Matthew J
2015-01-01
Background Rates of durable aneurysm occlusion following coil embolization vary widely, and a better understanding of coil mass mechanics is desired. The goal of this study is to evaluate the impact of packing density and coil uniformity on aneurysm permeability. Methods Aneurysm models were coiled using either Guglielmi detachable coils or Target coils. The permeability was assessed by taking the ratio of microspheres passing through the coil mass to those in the working fluid. Aneurysms containing coil masses were sectioned for image analysis to determine surface area fraction and coil uniformity. Results All aneurysms were coiled to a packing density of at least 27%. Packing density, surface area fraction of the dome and neck, and uniformity of the dome were significantly correlated (p<0.05). Hence, multivariate principal components-based partial least squares regression models were used to predict permeability. Similar loading vectors were obtained for packing and uniformity measures. Coil mass permeability was modeled better with the inclusion of packing and uniformity measures of the dome (r2=0.73) than with packing density alone (r2=0.45). The analysis indicates the importance of including a uniformity measure for coil distribution in the dome along with packing measures. Conclusions A densely packed aneurysm with a high degree of coil mass uniformity will reduce permeability. PMID:25031179
Oliva, Alexis; Fariña, José B; Llabrés, Matías
2013-10-15
A simple and reproducible UPLC method was developed and validated for the quantitative analysis of finasteride in low-dose drug products. Method validation demonstrated the reliability and consistency of the analytical results. Given the regulatory requirements of pharmaceutical analysis in particular, evaluation of robustness is vital to predict how small variations in operating conditions affect the responses. Response surface methodology was used as an optimization technique to evaluate robustness. For this, a central composite design was implemented around the nominal conditions. Statistical treatment of the responses (retention factor and drug concentration expressed as percentage of label claim) showed that the methanol content of the mobile phase and the flow rate were the most influential factors. In the optimization process, the compromise decision support problem (cDSP) strategy was used. Construction of the robust domain from the response surfaces provided tolerance windows for the factors affecting the effectiveness of the method. The limits specified by the USP uniformity of dosage units assay (98.5-101.5%) and the purely experimental variation based on the repeatability test for center points (repetitions at nominal conditions) were used as criteria to establish the tolerance windows, which allowed definition of the design space (DS) of the analytical method. Thus, the acceptance value (AV) proposed by the USP uniformity assay depends only on the sampling error. If the variation in the responses is approximately twice the repeatability standard deviation, individual values of percentage label claim (%LC) may lie outside the specified limits; this implies that the data are not centered between the specified limits, and this term, in addition to the sampling error, affects the AV. To avoid this, the limits specified by the uniformity of dosage units assay (i.e., 98.5-101.5%) must be taken into consideration when fixing the tolerance windows for each factor. All these results were verified by Monte Carlo simulation. In conclusion, the level of variability for the different factors must be calculated for each case, not set in an arbitrary way, whenever a variation larger than the repeatability for center points is found; moreover, the %LC response must lie inside the specified limits, i.e., 98.5-101.5%. If not, the UPLC method must be re-developed.
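The Monte Carlo argument can be sketched with the USP <905> acceptance value, AV = |M - xbar| + k*s (k = 2.4 for n = 10, limit L1 = 15.0), simplified here by assuming a 100% label-claim target; the analytical error model is an assumption for illustration only.

```python
import numpy as np

# Sketch: how total %LC variability drives the USP <905> acceptance
# value. Simplified: target T = 100%, so M = xbar clipped to 98.5-101.5.

rng = np.random.default_rng(6)
k, n_units, n_trials = 2.4, 10, 10_000

def acceptance_value(x):
    xbar, s = x.mean(), x.std(ddof=1)
    m = np.clip(xbar, 98.5, 101.5)      # reference value M (simplified)
    return abs(m - xbar) + k * s

for sigma in (0.5, 1.0, 2.0):           # total %LC standard deviation
    av = np.array([acceptance_value(rng.normal(100.0, sigma, n_units))
                   for _ in range(n_trials)])
    print(f"sigma={sigma:.1f}%  mean AV={av.mean():5.2f}  "
          f"P(AV > 15) = {(av > 15.0).mean():.4f}")
```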
Air-flow distortion and turbulence statistics near an animal facility
NASA Astrophysics Data System (ADS)
Prueger, J. H.; Eichinger, W. E.; Hipps, L. E.; Hatfield, J. L.; Cooper, D. I.
The emission and dispersion of particulates and gases from concentrated animal feeding operations (CAFO) at local to regional scales is a current issue in science and society. The transport of particulates, odors, and toxic chemical species from the source into the local, and eventually regional, atmosphere is largely determined by turbulence. Any model that attempts to simulate the dispersion of particles must either specify or assume various statistical properties of the turbulence field. Statistical properties of turbulence are well documented for idealized boundary layers above uniform surfaces. However, an animal production facility is a complex surface with structures that act as bluff bodies and distort the turbulence intensity near the buildings. As a result, the initial release and subsequent dispersion of effluents in the region near a facility are affected by the complex nature of the surface. Previous lidar studies of plume dispersion over the facility used in this study indicated that plumes move in complex yet organized patterns that cannot be explained by the properties of turbulence generally assumed in models. The objective of this study was to characterize the near-surface turbulence statistics in the flow field around an array of animal confinement buildings. Eddy covariance towers were erected in the upwind, within-array, and downwind regions of the flow field. Substantial changes in turbulence intensity statistics and turbulence kinetic energy (TKE) were observed as the mean wind flow encountered the building structures. Spectral analysis demonstrated a distinctive distribution of spectral energy in the vertical profile above the buildings.
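The basic statistics named here (turbulence intensities and TKE) follow directly from Reynolds decomposition of the measured wind components; a sketch on synthetic stand-in sonic-anemometer series follows.

```python
import numpy as np

# Sketch: turbulence statistics from eddy-covariance wind components.
# The 20 Hz series are synthetic stand-ins for sonic anemometer data.

rng = np.random.default_rng(7)
n = 20 * 60 * 30                       # 30 min at 20 Hz
u = 4.0 + rng.normal(0, 0.8, n)        # streamwise (m/s)
v = 0.0 + rng.normal(0, 0.6, n)        # lateral (m/s)
w = 0.0 + rng.normal(0, 0.4, n)        # vertical (m/s)

U = np.sqrt(u.mean()**2 + v.mean()**2 + w.mean()**2)   # mean wind speed
up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()  # fluctuations

tke = 0.5 * (up.var() + vp.var() + wp.var())           # m^2 s^-2
iu, iv, iw = up.std() / U, vp.std() / U, wp.std() / U  # turbulence intensities
print(f"TKE = {tke:.3f} m^2/s^2, I_u={iu:.2f}, I_v={iv:.2f}, I_w={iw:.2f}")
```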
Smelter, Andrey; Rouchka, Eric C; Moseley, Hunter N B
2017-08-01
Peak lists derived from nuclear magnetic resonance (NMR) spectra are commonly used as input data for a variety of computer-assisted and automated analyses. These include automated protein resonance assignment and protein structure calculation software tools. Prior to these analyses, peak lists must be aligned to each other and sets of related peaks must be grouped based on common chemical shift dimensions. Even when programs can perform peak grouping, they require the user to provide uniform match tolerances or use default values. However, peak grouping is further complicated by multiple sources of variance in peak position, limiting the effectiveness of grouping methods that utilize uniform match tolerances. In addition, no method currently exists for deriving peak positional variances from single peak lists for grouping peaks into spin systems, i.e., spin system grouping within a single peak list. Therefore, we developed a complementary pair of peak list registration analysis and spin system grouping algorithms designed to overcome these limitations. We have implemented these algorithms into an approach that can identify multiple dimension-specific positional variances that exist in a single peak list and group peaks from a single peak list into spin systems. The resulting software tools generate a variety of useful statistics on both a single peak list and a pairwise peak list alignment, especially for quality assessment of peak list datasets. We used a range of low- and high-quality experimental solution NMR and solid-state NMR peak lists to assess the performance of our registration analysis and grouping algorithms. Analyses show that an algorithm using a single iteration with uniform match tolerances is able to recover only 50 to 80% of the spin systems, due to the presence of multiple sources of variance. Our algorithm recovers additional spin systems by reevaluating match tolerances in multiple iterations. To facilitate evaluation of the algorithms, we developed a peak list simulator within our nmrstarlib package that generates user-defined assigned peak lists from a given BMRB entry or database of entries. In addition, over 100,000 simulated peak lists with one or two sources of variance were generated to evaluate the performance and robustness of these new registration analysis and peak grouping algorithms.
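A toy one-dimensional version of the idea, loosely mirroring the multi-iteration tolerance re-estimation described above (the published algorithms are considerably more sophisticated; everything below, including the re-estimation rule, is a simplifying assumption):

```python
import numpy as np

# Toy sketch: group peaks in one list by a shared chemical-shift
# dimension, then re-estimate the match tolerance from within-group
# spread and iterate.

rng = np.random.default_rng(8)
true_shifts = rng.uniform(6.5, 9.5, 40)               # 1H shifts, 40 systems
peaks = np.sort(np.concatenate(
    [rng.normal(s, 0.004, 3) for s in true_shifts]))  # 3 peaks per system

tol = 0.02                                            # initial uniform tolerance
for iteration in range(3):
    groups, current = [], [peaks[0]]
    for p in peaks[1:]:
        if p - current[-1] <= tol:
            current.append(p)                         # extend current group
        else:
            groups.append(current)
            current = [p]
    groups.append(current)
    spreads = [np.ptp(g) for g in groups if len(g) > 1]
    tol = 3.0 * np.median(spreads)                    # re-estimated tolerance
    print(f"iter {iteration}: {len(groups)} groups, tol -> {tol:.4f} ppm")
```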
2015-06-01
... Unmanned Combat Aerial Vehicles (UCAVs) may enhance Turkey's ability to counter active terrorists in that region. In this research, Map Aware Non-uniform Automata (MANA) is used to...
NASA Astrophysics Data System (ADS)
Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram
2017-09-01
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
Contact thermal shock test of ceramics
NASA Technical Reports Server (NTRS)
Rogers, W. P.; Emery, A. F.
1992-01-01
A novel quantitative thermal shock test of ceramics is described. The technique employs contact between a metal-cooling rod and hot disk-shaped specimen. In contrast with traditional techniques, the well-defined thermal boundary condition allows for accurate analyses of heat transfer, stress, and fracture. Uniform equibiaxial tensile stresses are induced in the center of the test specimen. Transient specimen temperature and acoustic emission are monitored continuously during the thermal stress cycle. The technique is demonstrated with soda-lime glass specimens. Experimental results are compared with theoretical predictions based on a finite-element method thermal stress analysis combined with a statistical model of fracture. Material strength parameters are determined using concentric ring flexure tests. Good agreement is found between experimental results and theoretical predictions of failure probability as a function of time and initial specimen temperature.
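The "statistical model of fracture" referenced here is of the weakest-link (Weibull) class; a sketch of that class of calculation follows, with a made-up element stress field standing in for the finite-element results and illustrative Weibull parameters, not the authors' exact implementation.

```python
import numpy as np

# Sketch: two-parameter Weibull weakest-link failure probability from
# element-wise tensile stresses and volumes (all values invented).

m = 7.8          # Weibull modulus (would come from flexure tests)
sigma_0 = 70.0   # characteristic strength (MPa) for reference volume v_0
v_0 = 1.0        # reference volume (mm^3)

stress = np.array([35.0, 42.0, 48.0, 51.0, 55.0])   # principal stresses (MPa)
volume = np.array([2.0, 1.5, 1.0, 0.8, 0.5])        # element volumes (mm^3)

# risk integral: sum over elements of (V/V0) * (sigma/sigma0)^m,
# counting only tensile stresses
risk = np.sum(volume / v_0 * (np.clip(stress, 0, None) / sigma_0) ** m)
p_fail = 1.0 - np.exp(-risk)
print(f"weakest-link failure probability = {p_fail:.3f}")
```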
Liorni, I; Parazzini, M; Fiocchi, S; Guadagnin, V; Ravazzani, P
2014-01-01
Polynomial Chaos (PC) is a decomposition method used to build a meta-model that approximates the unknown response of a model. In this paper the PC method is applied to stochastic dosimetry to assess the variability of human exposure due to changes in the orientation of the B-field vector with respect to the human body. In detail, the analysis of pregnant woman exposure at 7 months of gestational age is carried out to build a statistical meta-model of the induced electric field for each fetal tissue and for the fetal whole body by means of the PC expansion, as a function of the B-field orientation, considering uniform exposure at 50 Hz.
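A one-dimensional sketch of such a PC surrogate follows: the induced-field response (a stand-in analytic function here, not the paper's dosimetry model) is expanded in Legendre polynomials of the uniformly distributed orientation angle, and the output mean and variance are read off the coefficients.

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch: 1-D Legendre polynomial-chaos surrogate for a response that
# depends on a uniform orientation angle. The "model" is a stand-in.

rng = np.random.default_rng(9)
def model(theta):                        # hypothetical induced E-field (V/m)
    return 1.0 + 0.4 * np.cos(theta) + 0.1 * np.sin(2 * theta)

theta = rng.uniform(0, 2 * np.pi, 200)   # training designs
xi = theta / np.pi - 1.0                 # map to [-1, 1] for Legendre basis
y = model(theta)

deg = 10
coef = legendre.legfit(xi, y, deg)       # least-squares PC coefficients

# with a uniform input, E[P_n] = 0 for n >= 1 and E[P_n^2] = 1/(2n+1)
mean_pc = coef[0]
var_pc = sum(coef[n] ** 2 / (2 * n + 1) for n in range(1, deg + 1))
print(f"PC mean = {mean_pc:.4f}, PC variance = {var_pc:.4f}")
print(f"MC check: mean = {y.mean():.4f}, variance = {y.var():.4f}")
```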
Characterization of turbulent coherent structures in square duct flow
NASA Astrophysics Data System (ADS)
Atzori, Marco; Vinuesa, Ricardo; Lozano-Durán, Adrián; Schlatter, Philipp
2018-04-01
This work is aimed at a first characterization of coherent structures in turbulent square duct flows. Coherent structures are defined as connected components in the domain identified as places where a quantity of interest (such as Reynolds stress or vorticity) is larger than a prescribed non-uniform threshold. Firstly, we qualitatively discuss how a percolation analysis can be used to assess the effectiveness of the threshold function, and how it can be affected by statistical uncertainty. Secondly, various physical quantities that are expected to play an important role in the dynamics of the secondary flow of Prandtl’s second kind are studied. Furthermore, a characterization of intense Reynolds-stress events in square duct flow, together with a comparison of their shape for analogous events in channel flow at the same Reynolds number, is presented.
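The percolation analysis mentioned above amounts to sweeping the threshold and tracking how the largest connected structure grows; a sketch on a random smooth field (standing in for an instantaneous Reynolds-stress field) follows.

```python
import numpy as np
from scipy import ndimage

# Sketch: percolation diagnostic for threshold selection. For each
# threshold alpha, label connected regions where |field| exceeds
# alpha * rms and track the largest structure's share of the volume.

rng = np.random.default_rng(10)
field = ndimage.gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=2)
rms = field.std()

for alpha in (0.25, 0.5, 1.0, 1.5, 2.0):
    mask = np.abs(field) > alpha * rms
    labels, n = ndimage.label(mask)          # 3-D connected components
    if n == 0:
        print(f"alpha={alpha:4.2f}: no structures")
        continue
    sizes = np.bincount(labels.ravel())[1:]  # voxels per structure
    ratio = sizes.max() / sizes.sum()        # largest / total volume
    print(f"alpha={alpha:4.2f}: {n:5d} structures, Vmax/Vtotal = {ratio:.3f}")
```

The sharp drop of Vmax/Vtotal as alpha increases marks the percolation transition that guides the choice of a non-uniform threshold.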
Turbulence Statistics of a Buoyant Jet in a Stratified Environment
NASA Astrophysics Data System (ADS)
McCleney, Amy Brooke
Using non-intrusive optical diagnostics, turbulence statistics for a round, incompressible, buoyant, vertical jet discharging freely into a stably linearly stratified environment are studied and compared to a reference case of a neutrally buoyant jet in a uniform environment. This is part of a validation campaign for computational fluid dynamics (CFD). Buoyancy forces are known to significantly affect the jet evolution in a stratified environment. Although such jets are ubiquitous in natural and man-made flows, available data are limited, which constrains our understanding of the underlying physical processes. In particular, there is a dearth of velocity field data, which makes it challenging to validate numerical codes currently used for modeling these important flows. Herein, jet near- and far-field behaviors are obtained with a combination of planar laser-induced fluorescence (PLIF) and multi-scale time-resolved particle image velocimetry (TR-PIV) for Reynolds numbers up to 20,000. Deploying non-intrusive optical diagnostics in a variable-density environment is challenging in liquids. The refractive index is strongly affected by the density, which introduces optical aberrations and occlusions that prevent the resolution of the flow. One solution consists of using index-matched fluids with different densities. Here, a pair of aqueous solutions (isopropanol and NaCl) is identified that satisfies these requirements. In fact, they provide a density difference of up to 5%, the largest reported for such fluid pairs. Additionally, by design, the kinematic viscosities of the solutions are identical. This greatly simplifies the analysis and subsequent simulations of the data. The spectral and temperature dependences of the solutions are fully characterized. In the near-field, shear layer roll-up is analyzed and characterized as a function of the initial velocity profile. In the far-field, turbulence statistics are reported for two different scales: one capturing the entire jet at near Taylor-microscale resolution, and the other, thanks to the careful refractive index matching of the liquids, resolving the Taylor scale at near Kolmogorov-scale resolution. This is accomplished using a combination of TR-PIV and long-distance micro-PIV. Turbulence statistics at various downstream locations and magnifications are obtained for density differences of 0%, 1%, and 3%. To validate the experimental methodology and provide a reference case, the effect of the initial velocity profile on the neutrally buoyant jet in the self-preserving regime is studied at Reynolds numbers of 10,000 and 20,000. For the neutrally buoyant jet, it is found that, independent of initial conditions, the jet follows self-similar behavior in the far-field; however, the spreading rate is strongly dependent on the initial velocity profile. High-magnification analysis at the small turbulent length scales shows a flow field whose mean statistics compare well with the larger field-of-view case. Investigation of the near-field shows the jet is strongly influenced by buoyancy, with an increase in vortex-ring formation frequency and in the number of pairings. The buoyant jet with a 1% density difference shows an alteration of the centerline velocity decay, but the radial distribution of the mean axial velocity collapses well at all measurement locations.
Jet formation changes dramatically for a buoyant jet with a 3% density difference, where the jet reaches a terminal height and spreads out horizontally at its neutral buoyancy location. Analysis of both the mean axial velocity and the strain rates shows the jet is no longer self-similar; for example, the mean centerline velocity does not decay uniformly as the jet develops. The centerline strain rates at this density difference also show trends that are strongly influenced by the altered centerline velocity. The overall centerline analysis shows that turbulence suppression occurs as a result of the stratification for both the 1% and 3% density differences. Analysis of the kinetic energy budget shows that the mean convection, production, transport, and dissipation of energy are altered by stratification. High resolution data of the jet enable flow structures to be captured in the neutrally buoyant region of the flow. Vortices of different sizes are identified. Longer data sets are necessary to perform a statistical analysis of their distribution and to compare them to the homogeneous environment case. This multi-scale analysis shows potential for studying energy transfer between length scales.
Stakeholder Perceptions of Cyberbullying Cases: Application of the Uniform Definition of Bullying.
Moreno, Megan A; Suthamjariya, Nina; Selkie, Ellen
2018-04-01
The Uniform Definition of Bullying was developed to address bullying and cyberbullying, and to promote consistency in measurement and policy. The purpose of this study was to understand community stakeholder perceptions of typical cyberbullying cases, and to evaluate how these case descriptions align with the Uniform Definition. In this qualitative case analysis we recruited stakeholders commonly involved in cyberbullying. We used purposeful sampling to identify and recruit adolescents and young adults, parents, and professionals representing education and health care. Participants were asked to write a typical case of cyberbullying and descriptors in the context of a group discussion. We applied content analysis to case excerpts using inductive and deductive approaches, and chi-squared tests for mixed methods analyses. A total of 68 participants contributed; participants included 73% adults and 27% adolescents and young adults. A total of 650 excerpts were coded from participants' example cases and 362 (55.6%) were consistent with components of the Uniform Definition. The most frequently mentioned component of the Uniform Definition was Aggressive Behavior (n = 218 excerpts), whereas Repeated was mentioned infrequently (n = 19). Most participants included two to three components of the Uniform Definition within an example case; none of the example cases included all components of the Uniform Definition. We found that most participants described cyberbullying cases using few components of the Uniform Definition. Findings can be applied toward considering refinement of the Uniform Definition to ensure stakeholders find it applicable to cyberbullying. Copyright © 2017 The Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
O'Connor, Michael; Lee, Caroline; Ellens, Harma; Bentz, Joe
2015-02-01
Current USFDA and EMA guidance for drug transporter interactions is dependent on IC50 measurements as these are utilized in determining whether a clinical interaction study is warranted. It is therefore important not only to standardize transport inhibition assay systems but also to develop uniform statistical criteria with associated probability statements for generation of robust IC50 values, which can be easily adopted across the industry. The current work provides a quantitative examination of critical factors affecting the quality of IC50 fits for P-gp inhibition through simulations of perfect data with randomly added error as commonly observed in the large data set collected by the P-gp IC50 initiative. The types of errors simulated were (1) variability in replicate measures of transport activity; (2) transformations of error-contaminated transport activity data prior to IC50 fitting (such as performed when determining an IC50 for inhibition of P-gp based on efflux ratio); and (3) the lack of well defined "no inhibition" and "complete inhibition" plateaus. The effect of the algorithm used in fitting the inhibition curve (e.g., two or three parameter fits) was also investigated. These simulations provide strong quantitative support for the recommendations provided in Bentz et al. (2013) for the determination of IC50 values for P-gp and demonstrate the adverse effect of data transformation prior to fitting. Furthermore, the simulations validate uniform statistical criteria for robust IC50 fits in general, which can be easily implemented across the industry. A calibration of the t-statistic is provided through calculation of confidence intervals associated with the t-statistic.
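A toy version of the simulation idea described above, under assumed parameter values (true IC50 of 1 uM, Hill slope of 1, Gaussian replicate error): generate error-contaminated activity data from a perfect inhibition curve, then compare two- and three-parameter fits, with the t-statistic formed as estimate over standard error:

```python
# Illustrative re-creation of the simulation idea: "perfect" data from a
# Hill curve plus replicate noise, fit with two- and three-parameter
# models. All parameter values are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def hill2(c, ic50, hill):              # plateaus fixed at 100% and 0%
    return 100.0 / (1.0 + (c / ic50) ** hill)

def hill3(c, top, ic50, hill):         # "no inhibition" plateau also fitted
    return top / (1.0 + (c / ic50) ** hill)

rng = np.random.default_rng(1)
conc = np.logspace(-2, 2, 9)           # inhibitor concentrations (uM)
truth = hill2(conc, ic50=1.0, hill=1.0)
activity = truth + rng.normal(0.0, 5.0, size=conc.shape)  # replicate error

p2, cov2 = curve_fit(hill2, conc, activity, p0=[1.0, 1.0])
p3, cov3 = curve_fit(hill3, conc, activity, p0=[100.0, 1.0, 1.0])
for name, p, cov in [("2-par", p2, cov2), ("3-par", p3, cov3)]:
    se = np.sqrt(np.diag(cov))
    print(name, "IC50 =", p[-2], "+/- SE", se[-2])  # t = estimate / SE
```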
Beam uniformity analysis of infrared laser illuminators
NASA Astrophysics Data System (ADS)
Allik, Toomas H.; Dixon, Roberta E.; Proffitt, R. Patrick; Fung, Susan; Ramboyong, Len; Soyka, Thomas J.
2015-02-01
Uniform near-infrared (NIR) and short-wave infrared (SWIR) illuminators are desired for military detection, recognition, and identification applications under low ambient light. Factors that contribute to laser illumination image degradation are high-frequency coherent laser speckle and low-frequency nonuniformities created by the laser or external laser cavity optics. Laser speckle analysis and beam uniformity improvements have been studied independently by numerous authors, but an analysis that separates these two effects from a single measurement technique has not been published. In this study, profiles of compact diode laser NIR and SWIR illuminators were measured and evaluated. Digital 12-bit images were recorded with a flat-field calibrated InGaAs camera, with measurements at F/1.4 and F/16. Beam uniformity components were separated from laser speckle, approximately, by filtering the original image. The goal of this paper is to identify and quantify the beam quality variation of illumination prototypes, draw awareness to its impact on range performance modeling, and develop measurement techniques and methodologies for military, industry, and vendors of active sources.
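The separation-by-filtering step can be sketched as follows; the Gaussian kernel width is an assumption for illustration, not the authors' filter:

```python
# Hedged sketch of the separation idea: low-pass filter an illuminator
# image to estimate the slow beam non-uniformity, leaving speckle in the
# residual. The kernel width sigma_px is an assumed, illustrative value.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_uniformity_speckle(img, sigma_px=25):
    low = gaussian_filter(img.astype(float), sigma_px)  # beam envelope
    residual = img - low                                # mostly speckle
    speckle_contrast = residual.std() / low.mean()      # C = sigma / <I>
    nonuniformity = (low.max() - low.min()) / (low.max() + low.min())
    return nonuniformity, speckle_contrast
```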
Dyess AFB, Texas. Revised Uniform Summary of Surface Weather Observations (RUSSWO). Parts A-F.
1988-01-01
Observations (RUSSWO); Dyess AFB TX; Texas; Abilene TX; Army Airfield Abilene TX; USTX722665. Abstract: A six-part statistical data summary of surface weather observations, including means and standard deviations (percentages do not include incomplete months; four or more months are needed to compute these statistics) and the percentage frequency of occurrence of surface wind direction versus wind speed.
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K
2018-02-01
In a recent study Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001, principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogeneously Gaussian sACF. Here we propose a method based on the assumption that heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA) and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on the assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent and identically distributed across time (which is a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual-subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
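A minimal sketch of the residual clean-up idea, assuming the first-level residuals are arranged as a time-by-voxel matrix; the number of components removed is an illustrative choice, not the paper's criterion:

```python
# Minimal sketch: remove the leading principal components of first-level
# GLM residuals, treating them as structured (non-Gaussian) noise.
import numpy as np

def remove_leading_pcs(residuals, n_remove=5):
    """residuals: (n_timepoints, n_voxels) array of first-level residuals."""
    r = residuals - residuals.mean(axis=0)          # center each voxel
    u, s, vt = np.linalg.svd(r, full_matrices=False)
    s_cleaned = s.copy()
    s_cleaned[:n_remove] = 0.0                      # drop structured-noise PCs
    return u @ np.diag(s_cleaned) @ vt
```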
An analytical poroelastic model for ultrasound elastography imaging of tumors
NASA Astrophysics Data System (ADS)
Tauhidul Islam, Md; Chaudhry, Anuj; Unnikrishnan, Ginu; Reddy, J. N.; Righetti, Raffaella
2018-01-01
The mechanical behavior of biological tissues has been studied using a number of mechanical models. Due to their relatively high fluid content and mobility, many biological tissues have been modeled as poroelastic materials. Diseases such as cancers are known to alter the poroelastic response of a tissue. Tissue poroelastic properties such as compressibility, interstitial permeability and fluid pressure also play a key role in the assessment of cancer treatments and in improved therapies. At the present time, however, a limited number of poroelastic models for soft tissues are retrievable in the literature, and those available are not directly applicable to tumors as they typically refer to uniform tissues. In this paper, we report an analytical poroelastic model for a non-uniform tissue under stress relaxation. Displacement, strain and fluid pressure fields in a cylindrical poroelastic sample containing a cylindrical inclusion during stress relaxation are computed. Finite element simulations are then used to validate the proposed theoretical model. Statistical analysis demonstrates that the proposed analytical model matches the finite element results with less than 0.5% error. The availability of the analytical model and solutions presented in this paper may be useful to estimate diagnostically relevant poroelastic parameters such as interstitial permeability and fluid pressure, and, in general, for a better interpretation of clinically-relevant ultrasound elastography results.
Catalog of Oroville, California, earthquakes; June 7, 1975 to July 31, 1976
Mantis, Constance; Lindh, Allan; Savage, William; Marks, Shirley
1979-01-01
On August 1, 1975, at 2020 GMT a magnitude 5.7 (ML) earthquake occurred 15 km south of Oroville, California, in the western foothills of the Sierra Nevada. It was preceded by 61 foreshocks that began on June 7, 1975, and was followed by thousands of aftershocks. Several studies have reported locations or analyses of various subsets of the Oroville sequence, including Morrison and others (1975), Savage and others (1975), Lester and others (1975), Toppozada and others (1975), Ryall and others (1975), Bufe and others (1976), Morrison and others (1976), and Lahr and others (1976). In this report arrival time data have been compiled from the original records at several institutions to produce a single catalog of the Oroville sequence from June 7, 1975, through July 31, 1976. This study has four objectives: to compile a list of earthquakes in the Oroville sequence that is as complete as possible above the minimum magnitude threshold of approximately 1.0; to determine accurate and uniform hypocentral coordinates for the earthquakes; to determine reliable and consistent magnitude values for the sequence; and to provide a statistically uniform basis for further investigation of the physical processes involved in the Oroville sequence as revealed by the parameters of the foreshocks and aftershocks. The basis and procedures for the data analysis are described in this report.
Almasi, Sepideh; Ben-Zvi, Ayal; Lacoste, Baptiste; Gu, Chenghua; Miller, Eric L; Xu, Xiaoyin
2017-03-01
To simultaneously overcome the challenges imposed by the nature of optical imaging, characterized by a range of artifacts including space-varying signal to noise ratio (SNR), scattered light, and non-uniform illumination, we developed a novel method that segments the 3-D vasculature directly from original fluorescence microscopy images, eliminating the need to employ pre- and post-processing steps such as noise removal and segmentation refinement as used with the majority of segmentation techniques. Our method comprises two stages: initialization, and constrained recovery and enhancement. The initialization approach is fully automated using features derived from bi-scale statistical measures and produces seed points robust to non-uniform illumination, low SNR, and local structural variations. This algorithm achieves the goal of segmentation via design of an iterative approach that extracts the structure through voting of feature vectors formed by distance, local intensity gradient, and median measures. Qualitative and quantitative analysis of the experimental results obtained from synthetic and real data prove the efficacy of this method in comparison to the state-of-the-art enhancing-segmenting methods. The algorithmic simplicity, freedom from having a priori probabilistic information about the noise, and structural definition give this algorithm a wide potential range of applications where, for example, structural complexity significantly complicates the segmentation problem.
Dimensional Analysis and Electric Potential Due to a Uniformly Charged Sheet
ERIC Educational Resources Information Center
Aghamohammadi, Amir
2011-01-01
Dimensional analysis, superposition principle, and continuity of electric potential are used to study the electric potential of a uniformly charged square sheet on its plane. It is shown that knowing the electric potential on the diagonal and inside the square sheet is equivalent to knowing it everywhere on the plane of the square sheet. The…
In-plane stability analysis of non-uniform cross-sectioned curved beams
NASA Astrophysics Data System (ADS)
Öztürk, Hasan; Yeşilyurt, İsa; Sabuncu, Mustafa
2006-09-01
In this study, in-plane stability analysis of non-uniform cross-sectioned thin curved beams under uniformly distributed dynamic loads is investigated by using the Finite Element Method. The first and second unstable regions are examined for dynamic stability. In-plane vibration and in-plane buckling are also studied. Two different finite element models, representing variations of cross-section, are developed by using simple strain functions in the analysis. The results obtained from this study are compared with the results of other investigators in the existing literature for the fundamental natural frequency and critical buckling load. The effects of opening angle, variations of cross-section, and static and dynamic load parameters on the stability regions are shown in graphs.
King, Christopher R
2016-11-01
To date, neither the optimal radiotherapy dose nor the existence of a dose-response has been established for salvage RT (SRT). A systematic review from 1996 to 2015 and meta-analysis was performed to identify the pathologic, clinical and treatment factors associated with relapse-free survival (RFS) after SRT (uniformly defined as a PSA >0.2 ng/mL or rising above the post-SRT nadir). A sigmoidal dose-response curve was objectively fitted and a non-parametric statistical test used to determine significance. 71 studies (10,034 patients) satisfied the meta-analysis criteria. SRT dose (p = 0.0001), PSA prior to SRT (p = 0.0009), ECE+ (p = 0.039) and SV+ (p = 0.046) had significant associations with RFS. Statistical analyses confirmed the independence of the SRT dose-response. Omission of series with ADT did not alter the results. The dose-response is well fit by a sigmoidal curve (p = 0.0001) with a TCD50 of 65.8 Gy, with a dose of 70 Gy achieving 58.4% RFS vs. 38.5% for 60 Gy. A 2.0% [95% CI 1.1-3.2] improvement in RFS is achieved for each Gy. The SRT dose-response remarkably parallels that for definitive RT of localized disease. This study provides level 2a evidence for dose-escalated SRT >70 Gy. The presence of an SRT dose-response for microscopic disease supports the hypothesis that prostate cancer is inherently radio-resistant. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
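The quoted numbers are mutually consistent with a simple logistic dose-response; the exact functional form used in the paper is not stated, so the logistic shape below is an assumption used only as a consistency check:

```python
# Consistency check: a logistic curve with TCD50 = 65.8 Gy and a
# midpoint slope of ~2% RFS per Gy (logistic slope at midpoint is k/4,
# so k ~= 0.08 per Gy) reproduces the abstract's 70 Gy and 60 Gy values.
# The logistic form itself is an assumption; the paper states only
# that a sigmoidal curve was fitted.
import math

TCD50, k = 65.8, 4 * 0.02          # Gy, 1/Gy

def rfs(dose_gy):
    return 1.0 / (1.0 + math.exp(-k * (dose_gy - TCD50)))

print(f"RFS(70 Gy) = {rfs(70):.1%}")   # ~58.3%, close to the quoted 58.4%
print(f"RFS(60 Gy) = {rfs(60):.1%}")   # ~38.6%, close to the quoted 38.5%
```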
Selectable light-sheet uniformity using tuned axial scanning
Duocastella, Martí; Arnold, Craig B.; Puchalla, Jason
2016-01-01
Light-sheet fluorescence microscopy (LSFM) is an optical sectioning technique capable of rapid three-dimensional (3D) imaging of a wide range of specimens with reduced phototoxicity and superior background rejection. However, traditional light-sheet generation approaches based on elliptical or circular Gaussian beams suffer an inherent trade-off between light-sheet thickness and area over which this thickness is preserved. Recently, an increase in light-sheet uniformity was demonstrated using rapid biaxial Gaussian beam scanning along the lateral and beam propagation directions. Here we apply a similar scanning concept to an elliptical beam generated by a cylindrical lens. In this case, only z-scanning of the elliptical beam is required and hence experimental implementation of the setup can be simplified. We introduce a simple dimensionless uniformity statistic to better characterize scanned light-sheets and experimentally demonstrate custom tailored uniformities up to a factor of 5 higher than those of un-scanned elliptical beams. This technique offers a straightforward way to generate and characterize a custom illumination profile that provides enhanced utilization of the detector dynamic range and field of view, opening the door to faster and more efficient 2D and 3D imaging. PMID:28132409
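The paper's specific statistic is not reproduced here; the sketch below is a generic stand-in showing the kind of dimensionless measure involved, built from the coefficient of variation of sheet thickness sampled along the propagation axis:

```python
# Generic stand-in for a dimensionless light-sheet uniformity statistic
# (not the paper's definition): one minus the coefficient of variation
# of the sheet thickness along the propagation direction.
import numpy as np

def uniformity(thickness_profile):
    w = np.asarray(thickness_profile, float)
    return 1.0 - w.std() / w.mean()     # 1.0 = perfectly uniform sheet

print(uniformity(np.array([5.0, 5.2, 5.1, 5.4, 6.0])))  # ~0.93
```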
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bamberger, Judith A.; Enderlin, Carl W.
Million-gallon double-shell tanks at Hanford are used to store transuranic, high-level, and low-level radioactive wastes. These wastes consist of a large volume of salt-laden solution covering a smaller volume of settled sludge primarily containing metal hydroxides. These wastes will be retrieved and processed into immobile waste forms suitable for permanent disposal. Retrieval is an important step in implementing these disposal scenarios. The retrieval concept evaluated is to use submerged dual-nozzle jet mixer pumps with horizontally oriented nozzles located near the tank floor that produce horizontal jets of fluid to mobilize the settled solids. The mixer pumps are oscillated through 180° about a vertical axis so the high velocity fluid jets sweep across the floor of the tank. After the solids are mobilized, the pumps will continue to operate at a reduced flow rate producing lower velocity jets sufficient to maintain the particles in a uniform suspension (concentration uniformity). Several types of waste and tank configurations exist at Hanford. The jet mixer pump systems and operating conditions required to mobilize sludge and maintain slurry uniformity will be a function of the waste type and tank configuration. The focus of this work was to conduct a 1/12-scale experiment to develop an analytical model relating slurry uniformity to tank and mixer pump configurations, operating conditions, and sludge properties. This experimental study evaluated concentration uniformity while varying the Reynolds number (Re), Froude number (Fr), and gravitational settling parameter (Gs) space. Simulant physical properties were chosen to obtain the required Re and Gs, where Re and Gs were varied by adjusting the kinematic viscosity and mean particle diameter, respectively. Test conditions were achieved by scaling the jet nozzle exit velocity in a 75-in. diameter tank using a mock-up of a centrally located dual-opposed jet mixer pump located just above the tank floor. Concentration measurements at sampling locations throughout the tank were used to assess the degree of uniformity achieved during each test. Concentration data were obtained using a real-time in-situ ultrasonic attenuation probe and post-test analysis of discrete batch samples. The undissolved solids concentration at these locations was analyzed to determine whether the tank contents were uniform (≤ ±10% variation about the mean) or nonuniform (> ±10% variation about the mean) in concentration. Concentration inhomogeneity was modeled as a function of dimensionless parameters. The parameters that best describe the maximum solids volume fraction that can be suspended were found to be 1) the Fr based on nozzle average discharge velocity and tank contents level and 2) the dimensionless particle size based on nozzle diameter. The dependence on the jet Re does not appear to be statistically significant.
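The ±10% uniformity criterion and the jet Froude number can be sketched as below; the reduced-gravity form of Fr is an assumption, since the report defines Fr only as based on nozzle discharge velocity and tank contents level:

```python
# Sketch of the concentration-uniformity criterion and a jet Froude
# number. The reduced-gravity Fr form is an assumption for illustration.
import numpy as np

def is_uniform(concentrations, tol=0.10):
    """Tank contents 'uniform' if every sample is within +/-10% of the mean."""
    c = np.asarray(concentrations, float)
    return bool(np.all(np.abs(c - c.mean()) <= tol * c.mean()))

def froude(u_nozzle, g_reduced, level):
    return u_nozzle / np.sqrt(g_reduced * level)

print(is_uniform([1.02, 0.97, 1.05, 0.99]))   # True: all within +/-10% of mean
```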
Cost Finding Principles and Procedures. Preliminary Field Review Edition. Technical Report 26.
ERIC Educational Resources Information Center
Ziemer, Gordon; And Others
This report is part of the Larger Cost Finding Principles Project designed to develop a uniform set of standards, definitions, and alternative procedures that will use accounting and statistical data to find the full cost of resources utilized in the process of producing institutional outputs. This technical report describes preliminary procedures…
Children and Their Families; A Statistical Profile.
ERIC Educational Resources Information Center
Ahmed, Naim; And Others
The purpose of this book is to provide a uniform and current (1977) data base for persons involved in planning and administering human services programs in South Carolina. General statewide demographic data are given first in a brief introduction. Then, each of the state's counties is profiled in terms of general demographic characteristics,…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-02
... of the United States ("HTSUS") statistical reporting numbers 7306.19.1010, 7306.19.1050, 7306.19... uniform except for the existence of "introductory" rates in certain zones. Because Al Jazeera has been located in Sohar Industrial Estate beyond any "introductory" period in the other industrial estates, it...
UniEnt: uniform entropy model for the dynamics of a neuronal population
NASA Astrophysics Data System (ADS)
Hernandez Lahme, Damian; Nemenman, Ilya
Sensory information and motor responses are encoded in the brain in a collective spiking activity of a large number of neurons. Understanding the neural code requires inferring statistical properties of such collective dynamics from multicellular neurophysiological recordings. Questions of whether synchronous activity or silence of multiple neurons carries information about the stimuli or the motor responses are especially interesting. Unfortunately, detection of such high order statistical interactions from data is especially challenging due to the exponentially large dimensionality of the state space of neural collectives. Here we present UniEnt, a method for the inference of strengths of multivariate neural interaction patterns. The method is based on the Bayesian prior that makes no assumptions (uniform a priori expectations) about the value of the entropy of the observed multivariate neural activity, in contrast to popular approaches that maximize this entropy. We then study previously published multi-electrode recordings data from salamander retina, exposing the relevance of higher order neural interaction patterns for information encoding in this system. This work was supported in part by Grants JSMF/220020321 and NSF/IOS/1208126.
Two Different Views on the World Around Us: The World of Uniformity versus Diversity
Nayakankuppam, Dhananjay
2016-01-01
We propose that when individuals believe in fixed traits of personality (entity theorists), they are likely to expect a world of “uniformity.” As such, they easily infer a population statistic from a small sample of data with confidence. In contrast, individuals who believe in malleable traits of personality (incremental theorists) are likely to presume a world of “diversity,” such that they “hesitate” to infer a population statistic from a similarly sized sample. In four laboratory experiments, we found that compared to incremental theorists, entity theorists estimated a population mean from a sample with a greater level of confidence (Studies 1a and 1b), expected more homogeneity among the entities within a population (Study 2), and perceived an extreme value to be more indicative of an outlier (Study 3). These results suggest that individuals are likely to use their implicit self-theory orientations (entity theory versus incremental theory) to see a population in general as a constitution either of homogeneous or heterogeneous entities. PMID:27977788
The Ciliate Paramecium Shows Higher Motility in Non-Uniform Chemical Landscapes
Giuffre, Carl; Hinow, Peter; Vogel, Ryan; Ahmed, Tanvir; Stocker, Roman; Consi, Thomas R.; Strickler, J. Rudi
2011-01-01
We study the motility behavior of the unicellular protozoan Paramecium tetraurelia in a microfluidic device that can be prepared with a landscape of attracting or repelling chemicals. We investigate the spatial distribution of the positions of the individuals at different time points with methods from spatial statistics and Poisson random point fields. This makes quantitative the informal notion of “uniform distribution” (or lack thereof). Our device is characterized by the absence of large systematic biases due to gravitation and fluid flow. It has the potential to be applied to the study of other aquatic chemosensitive organisms as well. This may result in better diagnostic devices for environmental pollutants. PMID:21494596
The uniform quantized electron gas revisited
NASA Astrophysics Data System (ADS)
Lomba, Enrique; Høye, Johan S.
2017-11-01
In this article we continue and extend our recent work on the correlation energy of the quantized electron gas of uniform density at temperature T = 0. As before, we utilize the methods, properties, and results obtained by means of classical statistical mechanics. These were extended to quantized systems via the Feynman path integral formalism, which translates the quantum problem into a classical polymer problem in four dimensions. Again, the well-known RPA (random phase approximation) is recovered as a basic result, which we then modify and improve upon. Here we analyze the condition of thermodynamic self-consistency. Our numerical calculations exhibit remarkable agreement with well-known results of a standard parameterization of Monte Carlo correlation energies.
Human thermal sensation and comfort in a non-uniform environment with personalized heating.
Deng, Qihong; Wang, Runhuai; Li, Yuguo; Miao, Yufeng; Zhao, Jinping
2017-02-01
Thermal comfort in traditionally uniform environment is apparent and can be improved by increasing energy expenses. To save energy, non-uniform environment implemented by personalized conditioning system attracts considerable attention, but human response in such environment is unclear. To investigate regional- and whole-body thermal sensation and comfort in a cool environment with personalized heating. In total 36 subjects (17 males and 19 females) including children, adults and the elderly, were involved in our experiment. Each subject was first asked to sit on a seat in an 18°C chamber (uniform environment) for 40min and then sit on a heating seat in a 16°C chamber (non-uniform environment) for another 40min after 10min break. Subjects' regional- and whole-body thermal sensation and comfort were surveyed by questionnaire and their skin temperatures were measured by wireless sensors. We statistically analyzed subjects' thermal sensation and comfort and their skin temperatures in different age and gender groups and compared them between the uniform and non-uniform environments. Overall thermal sensation and comfort votes were respectively neutral and just comfortable in 16°C chamber with personalized heating, which were significantly higher than those in 18°C chamber without heating (p<0.01). The effect of personalized heating on improving thermal sensation and comfort was consistent in subjects of different age and gender. However, adults and the females were more sensitive to the effect of personalized heating and felt cooler and less comfort than children/elderly and the males respectively. Variations of the regional thermal sensation/comfort across human body were consistent with those of skin temperature. Personalized heating significantly improved human thermal sensation and comfort in non-uniform cooler environment, probably due to the fact that it increased skin temperature. However, the link between thermal sensation/comfort and variations of skin temperature is rather complex and warrant further investigation. Copyright © 2016 Elsevier B.V. All rights reserved.
Polarization-color mapping strategies: catching up with color theory
NASA Astrophysics Data System (ADS)
Kruse, Andrew W.; Alenin, Andrey S.; Vaughn, Israel J.; Tyo, J. Scott
2017-09-01
Current visualization techniques for mapping polarization data to color coordinates defined by the Hue, Saturation, Value (HSV) color representation are analyzed in the context of perceptual uniformity. Since HSV is not designed to be perceptually uniform, the extent of non-uniformity should be evaluated by using robust color difference formulae and by comparison to the state-of-the-art uniform color space CAM02-UCS. For mapping just the angle of polarization with HSV hue, the results show clear non-uniformity and implications for how this can misrepresent the data. UCS can be used to create alternative mapping techniques that are perceptually uniform. Implementing variation in lightness may increase shape discrimination within the scene. Future work will be dedicated to measuring the performance of both current and proposed methods using psychophysical analysis.
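A minimal example of the mapping under scrutiny, using Python's standard colorsys module; mapping degree of linear polarization to saturation is one common choice, assumed here for illustration:

```python
# Angle of polarization (0-180 deg) mapped to HSV hue. Equal angular
# steps are *not* equal perceptual steps, which is the paper's point.
# Using DoLP as saturation is an assumed, common convention.
import colorsys

def aop_to_rgb(aop_deg, dolp=1.0):
    hue = (aop_deg % 180.0) / 180.0               # wrap angle onto hue circle
    return colorsys.hsv_to_rgb(hue, dolp, 1.0)    # saturation from DoLP

print(aop_to_rgb(0), aop_to_rgb(45), aop_to_rgb(90))
```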
Scaling analysis of Anderson localizing optical fibers
NASA Astrophysics Data System (ADS)
Abaie, Behnam; Mafi, Arash
2017-02-01
Anderson localizing optical fibers (ALOF) enable a novel optical waveguiding mechanism; if a narrow beam is scanned across the input facet of the disordered fiber, the output beam follows the transverse position of the incoming wave. Strong transverse disorder induces several localized modes uniformly spread across the transverse structure of the fiber. Each localized mode acts like a transmission channel which carries a narrow input beam along the fiber without transverse expansion. Here, we investigate the scaling of the transverse size of the localized modes of ALOF with respect to the transverse dimensions of the fiber. The probability density function (PDF) of the mode area is used, and it is shown that the PDF converges to a terminal shape at transverse dimensions considerably smaller than those of previous experimental implementations. Our analysis turns the formidable numerical task of ALOF simulations into a much simpler problem, because the convergence of the mode-area PDF to a terminal shape indicates that a much smaller disordered fiber, compared to previous numerical and experimental implementations, provides all the statistical information required for a precise analysis of the fiber.
Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn
2016-12-01
We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: one allowing for nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis, in which the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple imputation approaches, considering a uniform or the nonparametric maximum likelihood estimation (NPMLE) distribution. Clin Cancer Res; 22(23); 5629-35. ©2016 American Association for Cancer Research (AACR).
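A sketch of the sensitivity-analysis procedure in Python rather than SAS/R, assuming the lifelines package is available for the Kaplan-Meier step; the interval data are toy values:

```python
# Sensitivity analysis for interval-censored progression times: assign
# the event at the midpoint, upper, or lower limit of each censoring
# interval, then run a standard right-censored analysis. `lifelines`
# is an assumed dependency; intervals are (last exam without
# progression, first exam with progression), in months, as toy data.
import numpy as np
from lifelines import KaplanMeierFitter

intervals = np.array([[2.0, 4.1], [3.0, 5.9], [6.2, 8.0]])
rules = {"midpoint": intervals.mean(axis=1),
         "upper": intervals[:, 1],   # standard analysis: first exam with PD
         "lower": intervals[:, 0]}

for name, times in rules.items():
    kmf = KaplanMeierFitter()
    kmf.fit(times, event_observed=np.ones_like(times))
    print(name, float(kmf.median_survival_time_))
```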
Prospects of studies on violence, adolescence and cortisol: a systematic literature review.
Lugarinho, Leonardo Planel; Avanci, Joviana Quintes; Pinto, Liana Wernersbach
2017-04-01
Violence has a negative impact on adolescents and affects their quality of life. It causes stress and requires the victim's adaptive capacity, which can cause psychological and biological changes. Hormone cortisol levels have been used as stress biomarker in several studies. This paper aims to perform a systematic literature review of publications on cortisol and violence involving teenagers from 2000 to 2013. Descriptors "cortisol", "violence" and "adolescent" were used in both English and Portuguese in this review, which included bibliographic databases PubMed/Medline, Lilacs, BVS and SciELO. Twelve papers were analyzed. Most studies involve participants from the United States, of both genders and without a control group. Different types of violence are studied, especially family violence, victimization or testimony. All studies used saliva to measure cortisol and no standard methodology was used for the analysis. Most studies (83.3%) found a statistically significant association between cortisol levels and exposure to violence. Results regarding gender, type of violence, socioeconomic status or cortisol analysis methods are not yet uniform.
The IRAF Fabry-Perot analysis package: Ring fitting
NASA Technical Reports Server (NTRS)
Shopbell, P. L.; Bland-Hawthorn, J.; Cecil, G.
1992-01-01
As introduced at ADASS I, a Fabry-Perot analysis package for IRAF is currently under development as a joint effort of ourselves and Frank Valdes of the IRAF group. Although additional portions of the package were also implemented, we report primarily on the development of a robust ring-fitting task, useful for fitting the calibration rings obtained in Fabry-Perot observations. The general equation of an ellipse is fit to the shape of the rings, providing information on ring center, ellipticity, and position angle. Such parameters provide valuable information on the wavelength response of the etalon and the geometric stability of the system. Appropriate statistical weighting is applied to the pixels to account for their increasing number with radius, the Lorentzian cross-section, and uneven illumination. The major problems of incomplete, non-uniform, and multiple rings are addressed, with the final task capable of fitting rings regardless of center, cross-section, or completion. The task requires only minimal user intervention, allowing large numbers of rings to be fit in an extremely automated manner.
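The core of such a ring-fitting task is a least-squares fit of the general conic equation; a minimal unweighted sketch (the statistical weighting described above is omitted for brevity):

```python
# Least-squares fit of a general conic to ring pixel coordinates, the
# same kind of fit the task performs (weighting omitted).
import numpy as np

def fit_ellipse(x, y):
    """Fit A x^2 + B xy + C y^2 + D x + E y = 1; return (A, B, C, D, E)."""
    M = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    return coef

def ellipse_center(coef):
    A, B, C, D, E = coef
    # The gradient of the conic vanishes at the center.
    return np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])

# Synthetic calibration ring: ellipse plus pixel noise.
t = np.linspace(0, 2 * np.pi, 200)
x = 5.0 + 3.0 * np.cos(t) + 0.01 * np.random.randn(t.size)
y = -2.0 + 2.0 * np.sin(t) + 0.01 * np.random.randn(t.size)
print(ellipse_center(fit_ellipse(x, y)))          # ~ (5, -2)
```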
NASA Astrophysics Data System (ADS)
Giustini, M.
2016-05-01
We present the results of a uniform analysis of 46 XMM-Newton observations of six BAL and seven mini-BAL QSOs belonging to the Palomar-Green Quasar catalogue. Moderate-quality X-ray spectroscopy was performed with the EPIC-pn and allowed us to characterise the general source spectral shape, which is complex and deviates significantly from a power law emission. A simple power law analysis in different energy bands strongly suggests absorption to be more significant than reflection in shaping the spectra. Allowing for the absorbing gas to be either partially covering the continuum emission source or ionised, large column densities of the order of 10²²-10²⁴ cm⁻² are inferred. When the statistics were high enough, virtually every source was found to vary in spectral shape on various time scales, from years to hours. All in all, these observational results are compatible with radiation-driven accretion disk winds shaping the spectra of these intriguing cosmic sources.
Uniform California earthquake rupture forecast, version 2 (UCERF 2)
Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.
2009-01-01
The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program, and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiandra, Christian; Fusella, Marco; Filippi, Andrea Riccardo
2013-08-15
Purpose: Patient-specific quality assurance in volumetric modulated arc therapy (VMAT) brain stereotactic radiosurgery raises specific dosimetric issues, mainly the small radiation fields associated with the lack of lateral electronic equilibrium, the need for small detectors, and the high delivered dose (up to 30 Gy). Gafchromic™ EBT2 and EBT3 films may be considered the dosimeter of choice, and the authors here provide some additional data about uniformity correction for this new generation of radiochromic films. Methods: A new analysis method using the blue channel for marker-dye correction was proposed for uniformity correction of both EBT2 and EBT3 films. Symmetry, flatness, and field width of a reference field were analyzed to provide a high-spatial-resolution evaluation of film uniformity for EBT3. Absolute doses were compared with thermoluminescent dosimeters (TLD) as a baseline. VMAT plans with multiple noncoplanar arcs were generated with a treatment planning system for a selected pool of eleven patients with cranial lesions and then recalculated on a water-equivalent plastic phantom by a Monte Carlo algorithm for patient-specific QA. 2D quantitative dose comparison parameters were calculated for the computed and measured dose distributions and tested for statistically significant differences. Results: Sensitometric curves showed different behavior above doses of 5 Gy for EBT2 and EBT3 films; with the use of the in-house marker-dye correction method, the authors obtained values of 2.5% for flatness, 1.5% for symmetry, and a field width of 4.8 cm for a 5 × 5 cm² reference field. Compared with TLD and selecting a 5% dose tolerance, the percentage of points with ICRU index below 1 was 100% for EBT2 and 83% for EBT3. Patient analysis revealed statistically significant differences (p < 0.05) between EBT2 and EBT3 in the percentage of points with gamma values <1 (p = 0.009 and p = 0.016); the percent difference as well as the mean difference between calculated and measured isodoses (20% and 80%) were found not to be significant (p = 0.074, p = 0.185, and p = 0.57). Conclusions: Excellent performance in terms of dose homogeneity was obtained using a new blue-channel method for marker-dye correction on both EBT2 and EBT3 Gafchromic™ films. In comparison with TLD, the passing rates for the EBT2 film were higher than for EBT3; good agreement with data estimated by the Monte Carlo algorithm was found for both films, with some statistically significant differences again in favor of EBT2. These results suggest that the use of Gafchromic™ EBT2 and EBT3 films is appropriate for dose verification measurements in VMAT stereotactic radiosurgery; taking into account the uncertainty associated with Gafchromic film dosimetry, the use of adequate action levels is strongly advised, in particular for EBT3.
ERIC Educational Resources Information Center
Hoskins, Jo A.
2014-01-01
This study focuses on the analysis of the impact of school uniforms on student self-esteem and self-efficacy. In the past, schools have implemented school uniform policies in order to help improve student achievement as well as strengthen discipline. However, previous research has indicated an association, which is tenuous at best, with regard to…
ERIC Educational Resources Information Center
Ngo, Duc Minh
2009-01-01
Current methodologies used for the inference of thin film stresses through curvatures are strictly restricted to stress and curvature states which are assumed to remain uniform over the entire film/substrate system. In this dissertation, we extend these methodologies to non-uniform stress and curvature states for the single layer of thin film or…
Goerner, Frank L.; Duong, Timothy; Stafford, R. Jason; Clarke, Geoffrey D.
2013-01-01
Purpose: To investigate the utility of five different standard measurement methods for determining image uniformity for partially parallel imaging (PPI) acquisitions in terms of consistency across a variety of pulse sequences and reconstruction strategies. Methods: Images were produced with a phantom using a 12-channel head matrix coil in a 3T MRI system (TIM TRIO, Siemens Medical Solutions, Erlangen, Germany). Images produced using echo-planar, fast spin echo, gradient echo, and balanced steady state free precession pulse sequences were evaluated. Two different PPI reconstruction methods were investigated, generalized autocalibrating partially parallel acquisition algorithm (GRAPPA) and modified sensitivity-encoding (mSENSE) with acceleration factors (R) of 2, 3, and 4. Additionally images were acquired with conventional, two-dimensional Fourier imaging methods (R = 1). Five measurement methods of uniformity, recommended by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) were considered. The methods investigated were (1) an ACR method and a (2) NEMA method for calculating the peak deviation nonuniformity, (3) a modification of a NEMA method used to produce a gray scale uniformity map, (4) determining the normalized absolute average deviation uniformity, and (5) a NEMA method that focused on 17 areas of the image to measure uniformity. Changes in uniformity as a function of reconstruction method at the same R-value were also investigated. Two-way analysis of variance (ANOVA) was used to determine whether R-value or reconstruction method had a greater influence on signal intensity uniformity measurements for partially parallel MRI. Results: Two of the methods studied had consistently negative slopes when signal intensity uniformity was plotted against R-value. The results obtained comparing mSENSE against GRAPPA found no consistent difference between GRAPPA and mSENSE with regard to signal intensity uniformity. The results of the two-way ANOVA analysis suggest that R-value and pulse sequence type produce the largest influences on uniformity and PPI reconstruction method had relatively little effect. Conclusions: Two of the methods of measuring signal intensity uniformity, described by the (NEMA) MRI standards, consistently indicated a decrease in uniformity with an increase in R-value. Other methods investigated did not demonstrate consistent results for evaluating signal uniformity in MR images obtained by partially parallel methods. However, because the spatial distribution of noise affects uniformity, it is recommended that additional uniformity quality metrics be investigated for partially parallel MR images. PMID:23927345
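Of the five measures, the peak-deviation style methods follow a standard form; below is a sketch of the NEMA/ACR percent integral uniformity over a central ROI, with the ROI construction simplified for illustration:

```python
# ACR/NEMA-style peak-deviation uniformity: percent integral uniformity
# (PIU) from the max and min signal inside a large central ROI. The
# rectangular ROI here is a simplified stand-in for the standard's ROI.
import numpy as np

def percent_integral_uniformity(image, roi_fraction=0.75):
    ny, nx = image.shape
    cy, cx = ny // 2, nx // 2
    ry, rx = int(ny * roi_fraction / 2), int(nx * roi_fraction / 2)
    roi = image[cy - ry:cy + ry, cx - rx:cx + rx].astype(float)
    s_max, s_min = roi.max(), roi.min()
    return 100.0 * (1.0 - (s_max - s_min) / (s_max + s_min))
```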
Improving the Crossing-SIBTEST Statistic for Detecting Non-uniform DIF.
Chalmers, R Philip
2018-06-01
This paper demonstrates that, after applying a simple modification to Li and Stout's (Psychometrika 61(4):647-677, 1996) CSIBTEST statistic, an improved variant of the statistic could be realized. It is shown that this modified version of CSIBTEST has a more direct association with the SIBTEST statistic presented by Shealy and Stout (Psychometrika 58(2):159-194, 1993). In particular, the asymptotic sampling distributions and general interpretation of the effect size estimates are the same for SIBTEST and the new CSIBTEST. Given the more natural connection to SIBTEST, it is shown that Li and Stout's hypothesis testing approach is insufficient for CSIBTEST; thus, an improved hypothesis testing procedure is required. Based on the presented arguments, a new chi-squared-based hypothesis testing approach is proposed for the modified CSIBTEST statistic. Positive results from a modest Monte Carlo simulation study strongly suggest the original CSIBTEST procedure and randomization hypothesis testing approach should be replaced by the modified statistic and hypothesis testing method.
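The exact modified CSIBTEST is not reproduced here; the sketch below is a simplified SIBTEST-flavoured statistic (the regression correction is omitted) showing how stratified squared differences accumulate into a chi-squared-type quantity even when the reference-minus-focal difference changes sign across strata, which is the crossing-DIF case:

```python
# Simplified illustration in the spirit of the SIBTEST family -- not
# Li and Stout's CSIBTEST nor the paper's modification. Examinees are
# stratified by matching score; per-stratum squared standardized
# differences on the studied item are summed.
import numpy as np

def sibtest_like(score_ref, item_ref, score_foc, item_foc, strata):
    chi2, df = 0.0, 0
    for k in strata:
        r = item_ref[score_ref == k]
        f = item_foc[score_foc == k]
        if len(r) < 2 or len(f) < 2:
            continue
        d = r.mean() - f.mean()
        var = r.var(ddof=1) / len(r) + f.var(ddof=1) / len(f)
        if var > 0:
            chi2 += d * d / var     # crossing DIF keeps contributing
            df += 1                 # even when d changes sign across k
    return chi2, df

rng = np.random.default_rng(3)
sr, sf = rng.integers(0, 21, 500), rng.integers(0, 21, 500)
ir = (rng.random(500) < 0.6).astype(float)
it_f = (rng.random(500) < 0.5).astype(float)
print(sibtest_like(sr, ir, sf, it_f, strata=range(21)))
```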
Hemilä, Harri
2016-11-01
Analyses in nutritional epidemiology usually assume a uniform effect of a nutrient. Previously, four subgroups of the Alpha-Tocopherol, Beta-Carotene Cancer Prevention (ATBC) Study of Finnish male smokers aged 50-69 years were identified in which vitamin E supplementation either significantly increased or decreased the risk of pneumonia. The purpose of this present study was to quantify the level of true heterogeneity in the effect of vitamin E on pneumonia incidence using the I² statistic. The I² value estimates the percentage of total variation across studies that is explained by true differences in the treatment effect rather than by chance, with a range from 0 to 100%. The I² statistic for the effect of vitamin E supplementation on pneumonia risk for five subgroups of the ATBC population was 89% (95% CI 78, 95%), indicating that essentially all heterogeneity was true variation in the vitamin E effect instead of chance variation. The I² statistic for heterogeneity in vitamin E effects on pneumonia risk was 92% (95% CI 80, 97%) for three other ATBC subgroups defined by smoking level and leisure-time exercise level. Vitamin E decreased pneumonia risk by 69% among participants who had the least exposure to smoking and exercised during leisure time (7.6% of the ATBC participants), and vitamin E increased pneumonia risk by 68% among those who had the highest exposure to smoking and did not exercise (22% of the ATBC participants). These findings refute there being a uniform effect of vitamin E supplementation on the risk of pneumonia.
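The I² computation itself is short; below is a sketch from Cochran's Q, with placeholder effects and standard errors rather than ATBC values:

```python
# Higgins-Thompson I^2 from Cochran's Q: the fraction of between-subgroup
# variation not attributable to chance. Inputs are placeholders.
import numpy as np

def i_squared(effects, ses):
    w = 1.0 / np.asarray(ses, float) ** 2
    e = np.asarray(effects, float)
    pooled = np.sum(w * e) / np.sum(w)        # fixed-effect pooled estimate
    q = np.sum(w * (e - pooled) ** 2)         # Cochran's Q
    df = len(e) - 1
    return 0.0 if q == 0 else max(0.0, (q - df) / q) * 100.0

print(i_squared([-1.2, -0.8, 0.5, 0.9, 1.1], [0.3, 0.25, 0.3, 0.35, 0.3]))
```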
Lesbian classics in Germany? A film historical analysis of Mädchen in Uniform (1931 and 1958).
Mayer, Veronika
2012-01-01
The films Mädchen in Uniform (Leontine Sagan, 1931, Germany; Géza von Radványi, 1958, Germany) both tell the story of a schoolgirl falling in love with her teacher at a Prussian boarding school. Whereas the 1931 version is regarded as a lesbian classic in queer (German) cinema, the 1958 remake is not even considered part of the lesbian genre. The following analysis examines both films within their historical contexts to answer the question of what makes Mädchen in Uniform (1931) a lesbian film and why the remake did not measure up to the original's significance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, L.; Li, Y.
2015-02-03
This paper analyzes the longitudinal space charge impedances of a round uniform beam inside rectangular and parallel-plate chambers using the image charge method. The analysis is valid for arbitrary wavelengths, and the calculations converge rapidly. The research shows that only a few of the image beams are needed to obtain a relative error of less than 0.1%. The beam offset effect is also discussed in the analysis.
Heat Transfer to Longitudinal Laminar Flow Between Cylinders
NASA Technical Reports Server (NTRS)
Sparrow, Ephraim M.; Loeffler, Albert L. Jr.; Hubbard, H. A.
1960-01-01
Consideration is given to the fully developed heat transfer characteristics for longitudinal laminar flow between cylinders arranged in an equilateral triangular array. The analysis is carried out for the condition of uniform heat transfer per unit length. Solutions are obtained for the temperature distribution, and from these, Nusselt numbers are derived for a wide range of spacing-to-diameter ratios. It is found that as the spacing ratio increases, so also does the wall-to-bulk temperature difference for a fixed heat transfer per unit length. Corresponding to a uniform surface temperature around the circumference of a cylinder, the circumferential variation of the local heat flux is computed. For spacing ratios of 1.5 - 2.0 and greater, uniform peripheral wall temperature and uniform peripheral heat flux are simultaneously achieved. A simplified analysis which neglects circumferential variations is also carried out, and the results are compared with those from the more exact formulation.
NASA Astrophysics Data System (ADS)
Wattanasakulpong, Nuttawit; Chaikittiratana, Arisara; Pornpeerakeat, Sacharuck
2018-06-01
In this paper, vibration analysis of functionally graded porous beams is carried out using the third-order shear deformation theory. The beams have uniform and non-uniform porosity distributions across their thickness and both ends are supported by rotational and translational springs. The material properties of the beams such as elastic moduli and mass density can be related to the porosity and mass coefficient utilizing the typical mechanical features of open-cell metal foams. The Chebyshev collocation method is applied to solve the governing equations derived from Hamilton's principle, which is used in order to obtain the accurate natural frequencies for the vibration problem of beams with various general and elastic boundary conditions. Based on the numerical experiments, it is revealed that the natural frequencies of the beams with asymmetric and non-uniform porosity distributions are higher than those of other beams with uniform and symmetric porosity distributions.
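The Chebyshev collocation machinery rests on Gauss-Lobatto points and a differentiation matrix; below is a minimal sketch (Trefethen's classic construction), verified on a function with a known derivative. The governing beam equations themselves are not reproduced:

```python
# Chebyshev collocation building block: Gauss-Lobatto points and the
# first-order differentiation matrix, from which governing equations
# such as the beam's are discretized.
import numpy as np

def cheb(n):
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)          # Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative-sum trick
    return D, x

D, x = cheb(16)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))      # ~1e-12: spectral accuracy
```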
International migration, 1995: some reflections on an exceptional year.
Bedford, R
1996-10-01
"This paper examines the 1995 international migration statistics in the context of New Zealand's immigration policy, and with reference to the impact of migration on population change in 1995. Particular attention is focused on trying to unravel and interpret the statistics relating to net migration. Considerable confusion has arisen in the public debate about immigration because of uniformed and, at times, quite misleading use of information supplied by Statistics New Zealand and the Department of Labour.... This is a reprinted version of an article originally published in the New Zealand Journal of Geography in April 1996. The article has been reprinted because a number of tables in the earlier version were incorrectly reproduced. Any inconvenience caused by this problem is regretted." excerpt
Analysis of Error Propagation Within Hierarchical Air Combat Models
2016-06-01
Master's thesis, Naval Postgraduate School, Monterey, CA. The thesis develops a stochastic, agent-based model of a two-versus-two air engagement between jet fighters in the Map Aware Non-uniform Automata (MANA) simulation environment to study how errors propagate within hierarchical air combat models. Reference: McIntosh, G. C. (2009). MANA-V (Map Aware Non-uniform Automata - Vector) supplementary manual.
Buckling analysis of Big Dee Vacuum Vessel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lightner, S.; Gallix, R.
1983-12-01
A simplified three-dimensional shell buckling analysis of the GA Technologies Inc. Big Dee Vacuum Vessel (V/V) was performed using the finite element program TRICO. A coarse-mesh linear elastic model, which accommodated the support boundary conditions, was used to determine the buckling mode shape under a uniform external pressure. Using this buckling mode shape, refined models were used to calculate the linear buckling load (P_crit) more accurately. Several different designs of the Big Dee V/V were considered in this analysis. The supports for the V/V were equally spaced radial pins at the outer diameter of the mid-plane. For all the cases considered, the buckling mode was axisymmetric in the toroidal direction. Therefore, it was possible to use only a small angular sector of a toric shell for the refined analysis. P_crit for the Big Dee is about 60 atm for a uniform external pressure. Also investigated in this analysis were the effects of geometrical imperfections and non-uniform pressure distributions.
Maggi, Federico; Bosco, Domenico; Galetto, Luciana; Palmano, Sabrina; Marzachì, Cristina
2017-01-01
Analyses of space-time statistical features of a flavescence dorée (FD) epidemic in Vitis vinifera plants are presented. FD spread was surveyed from 2011 to 2015 in a vineyard of 17,500 m² surface area in the Piemonte region, Italy; counts and positions of symptomatic plants were used to test the hypotheses of epidemic Complete Spatial Randomness and isotropy in the space-time static (year-by-year) point pattern measure. Space-time dynamic (year-to-year) point pattern analyses were applied to newly infected and recovered plants to highlight the statistics of FD progression and regression over time. Results highlighted point patterns ranging from dispersed (at small scales) to aggregated (at large scales) over the years, suggesting that the FD epidemic is characterized by multiscale properties that may depend on infection incidence, vector population, and flight behavior. Dynamic analyses showed moderate preferential progression and regression along rows. Nearly uniform distributions of direction and negative exponential distributions of distance of newly symptomatic and recovered plants relative to existing symptomatic plants highlighted features of vector mobility similar to Brownian motion. This evidence indicates that space-time epidemic modeling should include the environmental setting (e.g., vineyard geometry and topography) to capture anisotropy as well as statistical features of vector flight behavior, plant recovery and susceptibility, and plant mortality. PMID:28111581
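Two of the dynamic checks lend themselves to short sketches: a chi-squared test that directions from existing to newly symptomatic plants are uniform, and a moment fit of the negative exponential distance distribution; the arrays below are synthetic placeholders, not survey data:

```python
# (i) chi-squared test for uniform directions; (ii) moment (MLE) fit of
# the exponential rate for nearest symptomatic-plant distances.
import numpy as np
from scipy import stats

def direction_uniformity(angles_rad, n_bins=12):
    counts, _ = np.histogram(angles_rad % (2 * np.pi),
                             bins=n_bins, range=(0, 2 * np.pi))
    return stats.chisquare(counts)            # H0: uniform directions

def exponential_rate(distances_m):
    return 1.0 / np.mean(distances_m)         # MLE of lambda in exp(-lambda d)

rng = np.random.default_rng(2)
print(direction_uniformity(rng.uniform(0, 2 * np.pi, 200)))
print(exponential_rate(rng.exponential(scale=8.0, size=200)))  # ~1/8 per m
```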
2007-12-05
yield record setting carrier lifetime values and very low concentrations of point defects. Epiwafers delivered for fabrication of RF static induction ...boules and on improved furnace uniformity (adding rotation, etc.). Pareto analysis was performed on wafer yield loss at the start of every quarter...100mm PVT process. Work focused on modeling the process for longer (50 mm) boules and on improved furnace uniformity. Pareto analysis was performed
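The Pareto analysis mentioned in the snippet above reduces to ranking loss causes and accumulating their contributions until the dominant few are identified; a toy sketch with hypothetical yield-loss categories:

```python
# hypothetical wafer-loss tallies; real categories come from fab inspection data
losses = {"micropipes": 420, "inclusions": 250, "edge chips": 90,
          "scratches": 60, "other": 30}
total = sum(losses.values())
cum = 0
for cause, count in sorted(losses.items(), key=lambda kv: -kv[1]):
    cum += count
    print(f"{cause:12s} {count:4d}  cumulative {100 * cum / total:5.1f}%")
```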
Higher Education Accounting Manual. Utah Coordinating Council of Higher Education.
ERIC Educational Resources Information Center
Utah State Coordinating Council of Higher Education, Salt Lake City.
Recognition of a critical need for accurate and detailed information to refine the process of budgeting funds for higher education in Utah led to the preparation of this accounting manual for universities and colleges in the state. The manual presents guidelines for the uniform accounting and reporting of financial and statistical data, and is…
Who Are the Owners of Firearms Used in Adolescent Suicides?
ERIC Educational Resources Information Center
Johnson, Renee M.; Barber, Catherine; Azrael, Deborah; Clark, David E.; Hemenway, David
2010-01-01
In this brief report, the source of firearms used in adolescent suicides was examined using data from the National Violent Injury Statistics System, the pilot to the CDC's National Violent Death Reporting System, a uniform reporting system for violent and firearm-related deaths. Data represent the 63 firearm suicides among youth (less than 18 yrs)…
Crib Work--An Evaluation of a Problem-Based Learning Experiment: Preliminary Results
ERIC Educational Resources Information Center
Walsh, Vonda K.; Bush, H. Francis
2013-01-01
Problem-based learning has been proven to be successful in both medical colleges and physics classes, but not uniformly across all disciplines. A college course in probability and statistics was used as a setting to test the effectiveness of problem-based learning when applied to homework. This paper compares the performances of the students from…
ERIC Educational Resources Information Center
Tay, Louis; Vermunt, Jeroen K.; Wang, Chun
2013-01-01
We evaluate the item response theory with covariates (IRT-C) procedure for assessing differential item functioning (DIF) without preknowledge of anchor items (Tay, Newman, & Vermunt, 2011). This procedure begins with a fully constrained baseline model, and candidate items are tested for uniform and/or nonuniform DIF using the Wald statistic.…
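A hedged sketch of the Wald step at the heart of the procedure: the estimated covariate (group) effect on an item parameter is tested against its sampling covariance, yielding a chi-square statistic (the estimate and covariance below are hypothetical placeholders, not output of the actual IRT-C software):

```python
import numpy as np
from scipy.stats import chi2

beta_hat = np.array([0.42])    # hypothetical estimated group effect (uniform DIF)
cov = np.array([[0.02]])       # hypothetical sampling covariance of the estimate
W = float(beta_hat @ np.linalg.solve(cov, beta_hat))  # Wald chi-square
print(W, chi2.sf(W, df=len(beta_hat)))  # flag the item as DIF if p < alpha
```

Adding the item's slope effect to beta_hat (df = 2) would test for uniform and nonuniform DIF jointly.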
Distorted turbulence submitted to frame rotation: RDT and LES results
NASA Technical Reports Server (NTRS)
Godeferd, Fabien S.
1995-01-01
The objective of this effort is to carry the analysis of Lee et al. (1990) to the case of shear with rotation. We apply the RDT approximation to turbulence submitted to frame rotation for the case of a uniformly sheared flow and compare its mean statistics to results of high resolution DNS of a rotating plane channel flow. In the latter, the mean velocity profile is modified by the Coriolis force, and accordingly, different regions in the channel can be identified. The properties of the plane pure strain turbulence submitted to frame rotation are, in addition, investigated in spectral space, which shows the usefulness of the spectral RDT approach. This latter case is investigated here. Among the general class of quadratic flows, this case does not follow the same stability properties as the others since the related mean vorticity is zero.
Reciprocity and depressive symptoms in Belgian workers: a cross-sectional multilevel analysis.
De Clercq, Bart; Clays, Els; Janssens, Heidi; De Bacquer, Dirk; Casini, Annalisa; Kittel, France; Braeckman, Lutgart
2013-07-01
This study examines the multidimensional association between reciprocity at work and depressive symptoms. Data from the Belgian BELSTRESS survey (32 companies; N = 24,402) were analyzed. Multilevel statistical procedures were used to account for company-level associations while controlling for individual-level associations. Different dimensions of individual reciprocity were negatively associated with depressive symptoms. On the company level, only vertical emotional reciprocity was negatively associated (β = -4.660; SE = 1.117) independently from individual reciprocity (β = -0.557; SE = 0.042). Complex interactions were found such that workplace reciprocity (1) may not uniformly benefit individuals and (2) related differently to depressive symptoms, depending on occupational group. This study extends the existing literature with evidence on the multidimensional, contextual, and cross-level interaction associations of reciprocity as a key aspect of social capital on depressive symptoms.
Can Airports be a Green Source of Energy?
NASA Astrophysics Data System (ADS)
Solus, Daniel; Archer, Charysse; Malone, Brandi; Chesterfield, Norrisha; Jackson, Lateria; Erenso, Daniel
2008-04-01
When a Boeing 747 lands, its kinetic energy (896 MJ) is dissipated by friction. Our statistical analysis of commercial aircraft landing at the Nashville International Airport (BNA) has shown that nearly 30 average single-family households could be powered on a monthly basis by the dissipated energy. It may be possible to land an airplane on a frictionless surface and transform its energy into electrical energy. To demonstrate this we have conducted theoretical and experimental studies using a conducting rod attached to a toy car sliding on a U-shaped conducting wire placed in a uniform magnetic field track. The results showed that this technique requires a very strong magnetic field. We then used a cylindrical magnet mounted on toy trucks, set to roll on a track inside a solenoid, and were able to generate an AC voltage (4-10 volts).
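The quoted 896 MJ is consistent with the kinetic energy of a landing 747, which a back-of-envelope check confirms (the landing mass, touchdown speed, and household consumption below are assumed round numbers, not figures from the study):

```python
m, v = 285_000.0, 79.0                  # assumed landing mass (kg) and touchdown speed (m/s)
ke = 0.5 * m * v**2                     # ~0.89 GJ per landing, close to the quoted 896 MJ
household = 900 * 3.6e6                 # ~900 kWh per household-month, in joules
landings = 30 * household / ke          # landings/month needed to power 30 homes
print(f"{ke / 1e6:.0f} MJ per landing; {landings:.0f} landings per month for 30 homes")
```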
NASA Technical Reports Server (NTRS)
Radke, C. R.; Meyer, T. R.
2014-01-01
The spray characteristics of a liquid-liquid double swirl coaxial injector were studied using noninvasive optical, laser, and X-ray diagnostics. A parametric study of injector exit geometry demonstrated that spray breakup time, breakup type, and sheet stability could be controlled with exit geometry. Phase Doppler particle analysis characterized droplet statistics and non-dimensional droplet parameters over a range of inlet conditions and for various fluids, allowing a study of the role of specific fluid properties in atomization. Further, X-ray radiographs allowed sheet thickness and breakup length to be quantified for different recess exits and inlet pressures. Finally, computed tomography scans revealed that the spray cone was distinctly non-uniform and comprised several pockets of increased mass flux.
Aneurysm permeability following coil embolization: packing density and coil distribution.
Chueh, Ju-Yu; Vedantham, Srinivasan; Wakhloo, Ajay K; Carniato, Sarena L; Puri, Ajit S; Bzura, Conrad; Coffin, Spencer; Bogdanov, Alexei A; Gounis, Matthew J
2015-09-01
Rates of durable aneurysm occlusion following coil embolization vary widely, and a better understanding of coil mass mechanics is desired. The goal of this study is to evaluate the impact of packing density and coil uniformity on aneurysm permeability. Aneurysm models were coiled using either Guglielmi detachable coils or Target coils. The permeability was assessed by taking the ratio of microspheres passing through the coil mass to those in the working fluid. Aneurysms containing coil masses were sectioned for image analysis to determine surface area fraction and coil uniformity. All aneurysms were coiled to a packing density of at least 27%. Packing density, surface area fraction of the dome and neck, and uniformity of the dome were significantly correlated (p<0.05). Hence, multivariate principal components-based partial least squares regression models were used to predict permeability. Similar loading vectors were obtained for packing and uniformity measures. Coil mass permeability was modeled better with the inclusion of packing and uniformity measures of the dome (r^2=0.73) than with packing density alone (r^2=0.45). The analysis indicates the importance of including a uniformity measure for coil distribution in the dome along with packing measures. A densely packed aneurysm with a high degree of coil mass uniformity will reduce permeability.
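A minimal sketch of the principal components-based partial least squares approach named above, using scikit-learn's PLSRegression on synthetic stand-ins for the packing and uniformity measures (the predictors, coefficients, and sample size are invented for illustration):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))   # columns: packing density, surface area fraction, dome uniformity
y = 0.5 - 0.3 * X[:, 0] - 0.2 * X[:, 2] + rng.normal(scale=0.05, size=20)  # "permeability"
pls = PLSRegression(n_components=2).fit(X, y)
print(pls.score(X, y))         # r^2 of the fitted two-component model
```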
NASA Astrophysics Data System (ADS)
Placidi, M.; Ganapathisubramani, B.
2018-04-01
Wind-tunnel experiments were carried out on fully-rough boundary layers with large roughness (δ/h ≈ 10, where h is the height of the roughness elements and δ is the boundary-layer thickness). Twelve different surface conditions were created by using LEGO™ bricks of uniform height. Six cases were tested for a fixed plan solidity (λ_P) with variations in frontal density (λ_F), while the other six cases had varying λ_P for fixed λ_F. Particle image velocimetry and floating-element drag-balance measurements were performed. The current results complement those contained in Placidi and Ganapathisubramani (J Fluid Mech 782:541-566, 2015), extending the previous analysis to the turbulence statistics and spatial structure. Results indicate that mean velocity profiles in defect form agree with Townsend's similarity hypothesis for varying λ_F; however, the agreement is worse for cases with varying λ_P. The streamwise and wall-normal turbulent stresses, as well as the Reynolds shear stresses, show a lack of similarity across most examined cases. This suggests that the critical height of the roughness for which outer-layer similarity holds depends not only on the height of the roughness, but also on the local wall morphology. A new criterion based on shelter solidity, defined as the sheltered plan area per unit wall-parallel area, which is similar to the 'effective shelter area' in Raupach and Shaw (Boundary-Layer Meteorol 22:79-90, 1982), is found to capture the departure of the turbulence statistics from outer-layer similarity. Despite this lack of similarity in the turbulence statistics, proper orthogonal decomposition analysis, as well as two-point spatial correlations, shows that some form of universal flow structure is present, as all cases exhibit virtually identical proper orthogonal decomposition mode shapes and correlation fields. Finally, reduced models based on proper orthogonal decomposition reveal that the small scales of the turbulence play a significant role in assessing outer-layer similarity.
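Snapshot proper orthogonal decomposition of the kind used above can be sketched as an SVD of mean-subtracted snapshots; synthetic data stand in for the PIV fields (grid size and snapshot count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
U = rng.normal(size=(500, 100))       # 500 grid points x 100 snapshots, hypothetical
U -= U.mean(axis=1, keepdims=True)    # remove the ensemble mean
phi, s, _ = np.linalg.svd(U, full_matrices=False)
energy = s**2 / np.sum(s**2)          # fraction of turbulent energy per POD mode
print(energy[:5])                     # phi[:, k] is the k-th spatial mode shape
```

Comparing the leading mode shapes across surface conditions is what reveals the "virtually identical" structure reported above.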
Structures in magnetohydrodynamic turbulence: Detection and scaling
NASA Astrophysics Data System (ADS)
Uritsky, V. M.; Pouquet, A.; Rosenberg, D.; Mininni, P. D.; Donovan, E. F.
2010-11-01
We present a systematic analysis of statistical properties of turbulent current and vorticity structures at a given time using cluster analysis. The data stem from numerical simulations of decaying three-dimensional magnetohydrodynamic turbulence in the absence of an imposed uniform magnetic field; the magnetic Prandtl number is taken equal to unity, and we use a periodic box with grids of up to 1536^3 points and with Taylor Reynolds numbers up to 1100. The initial conditions are either an X-point configuration embedded in three dimensions, the so-called Orszag-Tang vortex, or an Arnol'd-Beltrami-Childress configuration with a fully helical velocity and magnetic field. In each case two snapshots are analyzed, separated by one turn-over time, starting just after the peak of dissipation. We show that the algorithm is able to select a large number of structures (in excess of 8000) for each snapshot and that the statistical properties of these clusters are remarkably similar for the two snapshots as well as for the two flows under study in terms of scaling laws for the cluster characteristics, with the structures in the vorticity and in the current behaving in the same way. We also study the effect of Reynolds number on cluster statistics, and we finally analyze the properties of these clusters in terms of their velocity-magnetic-field correlation. Self-organized criticality features have been identified in the dissipative range of scales. A different scaling arises in the inertial range, which cannot be identified for the moment with a known self-organized criticality class consistent with magnetohydrodynamics. We suggest that this range can be governed by turbulence dynamics as opposed to criticality and propose an interpretation of intermittency in terms of propagation of local instabilities.
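A sketch of the cluster-detection step: threshold the field and label connected regions, here with scipy.ndimage on random noise standing in for the simulated current density (the study's actual algorithm and connectivity convention may differ):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
j = np.abs(rng.normal(size=(128, 128, 128)))  # stand-in for |current density| on a grid
mask = j > j.mean() + 2.0 * j.std()           # keep only intense structures
labels, n = ndimage.label(mask)               # default 6-connectivity; pass a structure for 26
sizes = np.bincount(labels.ravel())[1:]       # cluster volumes in grid cells
print(n, np.sort(sizes)[-5:])                 # cluster count and the largest clusters
```

Scaling laws then follow from histogramming quantities such as cluster size, peak amplitude, and integrated dissipation.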
Impact of specimen adequacy on the assessment of renal allograft biopsy specimens.
Cimen, S; Geldenhuys, L; Guler, S; Imamoglu, A; Molinari, M
2016-01-01
The Banff classification was introduced to achieve uniformity in the assessment of renal allograft biopsies. The primary aim of this study was to evaluate the impact of specimen adequacy on the Banff classification. All renal allograft biopsies obtained between July 2010 and June 2012 for suspicion of acute rejection were included. Pre-biopsy clinical data on suspected diagnosis and time from renal transplantation were provided to a nephropathologist who was blinded to the original pathological report. Second pathological readings were compared with the original to assess agreement stratified by specimen adequacy. Cohen's kappa test and Fisher's exact test were used for statistical analyses. Forty-nine specimens were reviewed. Among these specimens, 81.6% were classified as adequate, 6.12% as minimal, and 12.24% as unsatisfactory. The agreement analysis among the first and second readings revealed a kappa value of 0.97. Full agreement between readings was found in 75% of the adequate specimens, and in 66.7 and 50% of minimal and unsatisfactory specimens, respectively. There was no agreement between readings in 5% of the adequate specimens and 16.7% of the unsatisfactory specimens. For the entire sample, full agreement was found in 71.4%, partial agreement in 20.4%, and no agreement in 8.2% of the specimens. Statistical analysis using Fisher's exact test yielded a P value above 0.25, showing that, probably owing to the small sample size, the results were not statistically significant. Specimen adequacy may be a determinant of diagnostic agreement in renal allograft specimen assessment. While additional studies including larger case numbers are required to further delineate the impact of specimen adequacy on the reliability of histopathological assessments, specimen quality must be considered during clinical decision making when dealing with biopsy reports based on minimal or unsatisfactory specimens.
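Cohen's kappa, the agreement statistic used above, is available off the shelf; a toy sketch with hypothetical paired readings:

```python
from sklearn.metrics import cohen_kappa_score

first  = ["rejection", "no rejection", "rejection", "borderline", "rejection"]
second = ["rejection", "no rejection", "rejection", "rejection",  "rejection"]
print(cohen_kappa_score(first, second))  # 1 = perfect agreement, 0 = chance level
```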
A Design Method and an Application for Contrarotating Propellers
1990-01-01
force generated by the contrarotating propeller to be balanced by the drag... restricted to uniform flow, it showed that the analysis of CR propellers... uniform flow at the operating point of the propeller for a typical high-speed surface ship. Force measurements for the CR propeller in... experimental thrust coefficient, torque coefficient, and efficiency for the CR propellers... Bronze. Since this propeller set is designed for uniform flow
Yura, Harold T; Fields, Renny A
2011-06-20
Level crossing statistics is applied to the complex problem of atmospheric turbulence-induced beam wander for laser propagation from ground to space. A comprehensive estimate of the single-axis wander angle temporal autocorrelation function and the corresponding power spectrum is used to develop, for the first time to our knowledge, analytic expressions for the mean angular level crossing rate and the mean duration of such crossings. These results are based on an extension and generalization of a previous seminal analysis of the beam wander variance by Klyatskin and Kon. In the geometrical optics limit, we obtain an expression for the beam wander variance that is valid for both an arbitrarily shaped initial beam profile and transmitting aperture. It is shown that beam wander can disrupt bidirectional ground-to-space laser communication systems whose small apertures do not require adaptive optics to deliver uniform beams at their intended target receivers in space. The magnitude and rate of beam wander are estimated for turbulence profiles enveloping some practical laser communication deployment options, suggesting what level of beam wander effects must be mitigated to demonstrate effective bidirectional laser communication systems.
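The paper's full expressions are not reproduced here, but analyses of this kind rest on the classical Rice level-crossing results: for a stationary, zero-mean Gaussian wander angle \theta(t) with variance \sigma_\theta^2 and derivative variance \sigma_{\dot\theta}^2,

\nu^{+}(\theta_0) = \frac{1}{2\pi}\,\frac{\sigma_{\dot\theta}}{\sigma_\theta}\,\exp\!\left(-\frac{\theta_0^2}{2\sigma_\theta^2}\right), \qquad \langle \tau(\theta_0) \rangle = \frac{\Pr(\theta > \theta_0)}{\nu^{+}(\theta_0)},

where \nu^{+} is the mean rate of up-crossings of the level \theta_0 and \langle \tau \rangle is the mean duration of an excursion above it; both are set by the autocorrelation function the authors estimate.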
Quantile Regression for Recurrent Gap Time Data
Luo, Xianghua; Huang, Chiung-Yu; Wang, Lan
2014-01-01
Summary Evaluating covariate effects on gap times between successive recurrent events is of interest in many medical and public health studies. While most existing methods for recurrent gap time analysis focus on modeling the hazard function of gap times, a direct interpretation of the covariate effects on the gap times is not available through these methods. In this article, we consider quantile regression that can provide direct assessment of covariate effects on the quantiles of the gap time distribution. Following the spirit of the weighted risk-set method by Luo and Huang (2011, Statistics in Medicine 30, 301–311), we extend the martingale-based estimating equation method considered by Peng and Huang (2008, Journal of the American Statistical Association 103, 637–649) for univariate survival data to analyze recurrent gap time data. The proposed estimation procedure can be easily implemented in existing software for univariate censored quantile regression. Uniform consistency and weak convergence of the proposed estimators are established. Monte Carlo studies demonstrate the effectiveness of the proposed method. An application to data from the Danish Psychiatric Central Register is presented to illustrate the methods developed in this article. PMID:23489055
Sample size determination in combinatorial chemistry.
Zhao, P L; Zambias, R; Bolognese, J A; Boulton, D; Chapman, K
1995-01-01
Combinatorial chemistry is gaining wide appeal as a technique for generating molecular diversity. Among the many combinatorial protocols, the split/recombine method is quite popular and particularly efficient at generating large libraries of compounds. In this process, polymer beads are equally divided into a series of pools and each pool is treated with a unique fragment; then the beads are recombined, mixed to uniformity, and redivided equally into a new series of pools for the subsequent couplings. The deviation from the ideal equimolar distribution of the final products is assessed by a special overall relative error, which is shown to be related to the Pearson statistic. Although the split/recombine sampling scheme is quite different from those used in analysis of categorical data, the Pearson statistic is shown to still follow a chi-square distribution. This result allows us to derive the required number of beads such that, with 99% confidence, the overall relative error is controlled to be less than a pregiven tolerable limit L1. In this paper, we also discuss another criterion, which determines the required number of beads so that, with 99% confidence, all individual relative errors are controlled to be less than a pregiven tolerable limit L2 (0 < L2 < 1). PMID:11607586
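One plausible reading of the sample-size rule, sketched under the assumption that the squared overall relative error behaves as X^2/N with X^2 chi-square distributed on k-1 degrees of freedom (the pool count k and tolerance L1 below are hypothetical):

```python
from scipy.stats import chi2

k, L1 = 20, 0.10                          # pools and tolerable overall relative error
n_beads = chi2.ppf(0.99, k - 1) / L1**2   # beads so that P(error < L1) = 99%
print(round(n_beads))                     # ~3,600 beads for these illustrative values
```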
Amelogenin in odontogenic cysts and tumors: An immunohistochemical study
Anigol, Praveen; Kamath, Venkatesh V.; Satelur, Krishnanand; Anand, Nagaraja; Yerlagudda, Komali
2014-01-01
Background: Amelogenins are the major enamel proteins that play a major role in the biomineralization and structural organization of enamel. Aberrations of enamel-related proteins are thought to be involved in oncogenesis of odontogenic epithelium. The expression of amelogenin is possibly an indicator of differentiation of epithelial cells in the odontogenic lesions. Aims and Objectives: The present study aimed to observe the expression of amelogenin immunohistochemically in various odontogenic lesions. Materials and Methods: Paraffin sections of 40 odontogenic lesions were stained immunohistochemically with amelogenin antibodies. The positivity, pattern and intensity of expression of the amelogenin antibody were assessed, graded and statistically compared between groups of odontogenic cysts and tumors. Results: Almost all the odontogenic lesions expressed amelogenin in the epithelial component with the exception of an ameloblastic carcinoma. Differing grades of intensity and pattern were seen between the cysts and tumors. Intensity of expression was uniformly prominent in all odontogenic lesions with hard tissue formation. Statistical analysis however did not indicate significant differences between the two groups. Conclusion: The expression of amelogenin antibody is ubiquitous in odontogenic tissues and can be used as a definitive marker for identification of odontogenic epithelium. PMID:25937729
On statistical irregularity of stratospheric warming occurrence during northern winters
NASA Astrophysics Data System (ADS)
Savenkova, Elena N.; Gavrilov, Nikolai M.; Pogoreltsev, Alexander I.
2017-10-01
Statistical analysis of dates of warming events observed during the years 1981-2016 at different stratospheric altitudes reveals their non-uniform distributions during northern winter months, with maxima at the beginning of January, from the end of January to the beginning of February, and at the end of February. Climatology of zonal-mean zonal wind, deviations of temperature from its winter-averaged values, and planetary wave (PW) characteristics at high and middle northern latitudes in the altitude range from the ground up to 60 km is studied using the database of meteorological reanalysis MERRA. Climatological temperature deviations averaged over the 60-90°N latitudinal bands reveal cooler and warmer layers descending due to seasonal changes during the polar night. PW amplitudes and upward Eliassen-Palm fluxes averaged over 36 years have periodical maxima, with the main maximum at the beginning of January at altitudes 40-50 km. During the above-mentioned intervals of more frequent occurrence of stratospheric warming events, maxima of PW amplitudes and Eliassen-Palm fluxes, as well as minima of eastward winds in the high-latitude northern stratosphere, have been found. Climatological intra-seasonal irregularities of stratospheric warming dates could indicate reiterating phases of stratospheric vacillations in different years.
Some aspects of the aeroacoustics of high-speed jets
NASA Technical Reports Server (NTRS)
Lighthill, James
1993-01-01
Some of the background to contemporary jet aeroacoustics is addressed. Then scaling laws for noise generation by low-Mach-number airflows and by turbulence convected at 'not so low' Mach number are reviewed. These laws take into account the influence of Doppler effects associated with the convection of aeroacoustic sources. Next, a uniformly valid Doppler-effect approximation exhibits the transition, with increasing Mach number of convection, from compact-source radiation at low Mach numbers to a statistical assemblage of conical shock waves radiated by eddies convected at supersonic speed. In jets, for example, supersonic eddy convection is typically found for jet exit speeds exceeding twice the atmospheric speed of sound. The lecture continues by describing a new dynamical theory of the nonlinear propagation of such statistically random assemblages of conical shock waves. It is shown, both by a general theoretical analysis and by an illustrative computational study, how their propagation is dominated by a characteristic 'bunching' process. That process, associated with a tendency for shock waves that have already formed unions with other shock waves to acquire an increased proneness to form further unions, acts so as to enhance the high-frequency part of the spectrum of noise emission from jets at these high exit speeds.
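The low-Mach-number scaling reviewed here is Lighthill's eighth-power law, and the Doppler factor supplies the convective amplification mentioned above; in standard form (U is the jet exit speed, D the nozzle diameter, M_c the eddy convection Mach number, \theta the angle from the jet axis),

P \;\propto\; \frac{\rho_0 U^8 D^2}{c_0^5}, \qquad I(r,\theta) \;\propto\; \frac{\rho_0 U^8 D^2}{c_0^5 r^2}\,\bigl|1 - M_c\cos\theta\bigr|^{-5},

with the compact-source picture breaking down once M_c exceeds unity and the radiation becoming the assemblage of conical (Mach) waves described in the lecture.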
Shaping Ability of Single-file Systems with Different Movements: A Micro-computed Tomographic Study.
Santa-Rosa, Joedy; de Sousa-Neto, Manoel Damião; Versiani, Marco Aurelio; Nevares, Giselle; Xavier, Felipe; Romeiro, Kaline; Cassimiro, Marcely; Leoni, Graziela Bianchi; de Menezes, Rebeca Ferraz; Albuquerque, Diana
2016-01-01
This study aimed to perform rigorous sample standardization and to evaluate the preparation of mesiobuccal (MB) root canals of maxillary molars with severe curvatures using two single-file engine-driven systems (WaveOne with reciprocating motion and OneShape with rotary movement), using micro-computed tomography (micro-CT). Ten MB roots with single canals were included, uniformly distributed into two groups (n=5). The samples were prepared with WaveOne or OneShape files. The shaping ability and amount of canal transportation were assessed by a comparison of the pre- and post-instrumentation micro-CT scans. The Kolmogorov-Smirnov and t-tests were used for statistical analysis. The level of significance was set at 0.05. Instrumentation of canals increased their surface area and volume. Canal transportation occurred in the coronal, middle, and apical thirds, and no statistical difference was observed between the two systems (P>0.05). In the apical third, significant differences were found between groups in canal roundness (at the 3 mm level) and perimeter (at the 3 and 4 mm levels) (P<0.05). The WaveOne and OneShape single-file systems were able to shape curved root canals, producing minor changes in the canal curvature.
Failure statistics for commercial lithium ion batteries: A study of 24 pouch cells
NASA Astrophysics Data System (ADS)
Harris, Stephen J.; Harris, David J.; Li, Chen
2017-02-01
There are relatively few publications that assess capacity decline in enough commercial cells to quantify cell-to-cell variation, but those that do show a surprisingly wide variability. Capacity curves cross each other often, a challenge for efforts to measure the state of health and predict the remaining useful life (RUL) of individual cells. We analyze capacity fade statistics for 24 commercial pouch cells, providing an estimate for the time to 5% failure. Our data indicate that RUL predictions based on remaining capacity or internal resistance are accurate only once the cells have already sorted themselves into "better" and "worse" ones. Analysis of our failure data, using maximum likelihood techniques, provides uniformly good fits for a variety of definitions of failure with normal and with 2- and 3-parameter Weibull probability density functions, but we argue against using a 3-parameter Weibull function for our data. Fitting parameters for the probability density functions appear to converge after about 15 failures, although business objectives should ultimately determine whether data from a given number of batteries provide sufficient confidence to end lifecycle testing. Increased efforts to make batteries with more consistent lifetimes should lead to improvements in battery cost and safety.
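A minimal sketch of the maximum-likelihood Weibull fit described above, applied to invented cycles-to-failure data for 24 cells (scipy's weibull_min with the location fixed at zero gives the 2-parameter fit):

```python
import numpy as np
from scipy.stats import weibull_min

cycles = np.array([410, 455, 470, 500, 515, 540, 560, 585, 600, 640, 655, 670,
                   700, 720, 745, 770, 800, 830, 860, 900, 930, 960, 1000, 1050],
                  dtype=float)                       # hypothetical failure times
shape, loc, scale = weibull_min.fit(cycles, floc=0)  # floc=0 -> 2-parameter Weibull
t5 = weibull_min.ppf(0.05, shape, loc=0, scale=scale)
print(shape, scale, t5)                              # t5 ~ time to 5% failures
```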
Design of highly uniform spool and bar horns for ultrasonic bonding.
Kim, Sun-Rak; Lee, Jae Hak; Yoo, Choong D; Song, Jun-Yeob; Lee, Seung S
2011-10-01
Although the groove and slot have been widely utilized for horn design to achieve high uniformity, their effects on uniformity have not been analyzed thoroughly. In this work, spool and bar horns for ultrasonic bonding are designed in a systematic way using the design of experiments (DOE) to achieve high amplitude uniformity of the horn. Three-dimensional modal analysis is conducted to predict the natural frequency, amplitude, and stress of the horns, and the DOE is employed to analyze the effects of the groove and slot on the amplitude uniformity. The design equations are formulated to determine the optimum dimensions of the groove and slot, and the uniformity is found to be influenced most significantly by the groove depth and slot width. Displacements of the spool and bar horns were measured using a laser Doppler vibrometer (LDV), and the predicted results are in good agreement with the experimental data.
Analysis of Fractured Teeth Utilizing Digital Microscopy: A Pilot Study
2016-06-01
Analysis of Fractured Teeth Utilizing Digital Microscopy: A Pilot Study, by Thomas Gene Cooper, D.M.D., M.P.H., Lieutenant Commander, Dental Corps, United States Navy. A thesis submitted to the Faculty of the Endodontic Graduate Program, Naval Postgraduate Dental School, Uniformed Services University of the Health Sciences, Bethesda, Maryland.
Template protection and its implementation in 3D face recognition systems
NASA Astrophysics Data System (ADS)
Zhou, Xuebing
2007-04-01
As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding and biometrics. The key component of the algorithm is to convert biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical character of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance that is similar to the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates of the system, based on an efficient statistical analysis. The algorithm estimates the statistical character of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, the small distinction of robustness and discriminative power between the classification results under the assumption of uniformly distributed templates and the ones under the assumption of Gaussian distributed templates is shown in our tests.
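One simple way to obtain robust, near-uniform bits of the kind the paper calls for is to threshold each real-valued feature at its population median, estimated from the enrollment database; a sketch on synthetic templates (feature count and distributions are hypothetical, and the paper's actual transforms are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(4)
enrollment = rng.normal(size=(1000, 64))    # hypothetical enrolled feature vectors
medians = np.median(enrollment, axis=0)     # per-feature thresholds

def binarize(template, thresholds):
    # each output bit is ~Bernoulli(0.5) by construction, i.e. near-uniformly distributed
    return (template > thresholds).astype(np.uint8)

probe = rng.normal(size=64)
print(binarize(probe, medians))
```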
Argalji, Nina; Silva, Eduardo Moreira da; Cury-Saramago, Adriana; Mattos, Claudia Trindade
2017-08-21
The objective of this study was to compare the coating dimensions and surface characteristics of two different esthetic-coated nickel-titanium orthodontic rectangular archwires, as-received from the manufacturer and after oral exposure. The study was designed for comparative purposes. Both archwires, as-received from the manufacturer, were observed using a stereomicroscope to measure coating thickness and inner metallic dimensions. The wires were also exposed to the oral environment in 11 orthodontic active patients for 21 days. After removing the samples, stereomicroscopy images were captured, coating loss was measured, and its percentage was calculated. Three segments of each wire (one as-received and two after oral exposure) were observed using scanning electron microscopy for a qualitative analysis of the labial surface of the wires. The Lilliefors test and independent t-test were applied to verify normality of data and statistical differences between wires, respectively. The significance level adopted was 0.05. The results showed that the differences between the wires in inner height and thickness were statistically significant (p < 0.0001). On average, the most recently launched wire presented a coating thickness twice that of the control wire, also a statistically significant difference. The coating loss percentage was also statistically different (p = 0.0346) when the most recently launched wire (13.27%) was compared to the control (29.63%). In conclusion, the coating of the most recent wire was thicker and more uniform, whereas the control had a thinner coating on the edges. After oral exposure, both tested wires presented coating loss, but the most recently launched wire exhibited better results.
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Pindera, Marek-Jerzy; Aboudi, Jacob
2003-01-01
This report summarizes the results of a numerical investigation into the spallation mechanism in plasma-sprayed thermal barrier coatings observed under spatially-uniform cyclic thermal loading. The analysis focuses on the evolution of local stress and inelastic strain fields in the vicinity of the rough top/bond coat interface during thermal cycling, and how these fields are influenced by the presence of an oxide film and spatially uniform and graded distributions of alumina particles in the metallic bond coat aimed at reducing the top/bond coat thermal expansion mismatch. The impact of these factors on the potential growth of a local horizontal delamination at the rough interface's crest is included. The analysis is conducted using the Higher-Order Theory for Functionally Graded Materials with creep/relaxation constituent modeling capabilities. For two-phase bond coat microstructures, both the actual and homogenized properties are employed in the analysis. The results reveal the important contributions of both the normal and shear stress components to the delamination growth potential in the presence of an oxide film, and suggest mixed-mode crack propagation. The use of bond coats with uniform or graded microstructures is shown to increase the potential for delamination growth by increasing the magnitude of the crack-tip shear stress component.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arutyunyan, R.V.; Bol`shov, L.A.; Vasil`ev, S.K.
1994-06-01
The objective of this study was to clarify a number of issues related to the spatial distribution of contaminants from the Chernobyl accident. The effects of local statistics were addressed by collecting and analyzing (for cesium-137) soil samples from a number of regions, and it was found that sample activity differed by a factor of 3-5. The effect of local non-uniformity was estimated by modeling the distribution of the average activity of a set of five samples for each of the regions, with the spread in the activities for a ±2σ range being equal to 25%. The statistical characteristics of the distribution of contamination were then analyzed and found to follow a log-normal distribution with the standard deviation being a function of test area. All data for the Bryanskaya Oblast area were analyzed statistically and were adequately described by a log-normal function.
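A quick Monte Carlo sketch of the five-sample averaging step, with a log-normal single-sample activity whose log-standard deviation is chosen (hypothetically) so single samples span roughly the reported factor of 3-5:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = 0.35                                      # assumed log-std of single samples
samples = rng.lognormal(mean=0.0, sigma=sigma, size=(100_000, 5))
means = samples.mean(axis=1)                      # five-sample average activity
lo, hi = np.percentile(means, [2.3, 97.7])        # approximate +/- 2 sigma band
print((hi - lo) / (2 * means.mean()))             # relative spread, cf. the quoted 25%
```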
Analysis of one gravitational slope cycle
NASA Astrophysics Data System (ADS)
Palis, Edouard; Lebourg, Thomas; Vidal, Maurin; Tric, Emmanuel
2015-04-01
Over roughly twenty years of studies on landslides, the role of, and subtle interactions between, structural complexity, mass dynamics, and complex internal fluid circulation have become clear. The La Clapière DSL (Deep-Seated Landslide) is now very well known to the scientific community (volume, impact, challenges, observations...), but this body of knowledge has not yet been compiled or examined through a coupled analysis of its spatial and temporal variability. Since 2007, an effort to share and provide access to uniform data has been led by the Versant Instabilities Multidisciplinary Observatory (OMIV, National Service of French Observation (SNO)). This observatory (with associated laboratories) allowed the installation of permanent and autonomous measuring stations: GPS, meteorology, seismology, and spring-water chemistry. For two years now, a permanent electrical tomography device has been installed at the bottom of the slope to complement the current monitoring system, allowing a deeper understanding of the physical changes in the massif. The analysis of these data allows different dynamic regimes to be observed, as well as different responses to external factors: instantaneous, delayed, and long-term variability. The purpose of this synthesis study is to analyze the temporal and spatial evolution of the electrical resistivity, displacement, and hydrometeors over one yearly cycle (November 2012 to November 2013). Thus, a qualitative and statistical approach using clustering, principal component analysis (PCA), and temporal pseudo-3D imaging of these variables was established. This new statistical study also explains the major role of the fault and the base of the landslide, as well as the chronology of water flow in the massif, allowing a better understanding of the complex and temporally uneven dynamics of this area.
NASA Technical Reports Server (NTRS)
Djorgovski, S. George
1994-01-01
We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology in order to classify the detected objects objectively and uniformly, and to facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complete database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful and produced new and interesting scientific results. Attachments to this report give a detailed account of STATPROG, a package for multivariate statistical analysis of small and moderate-size data sets. The package was tested extensively on a number of real scientific applications and has produced real, published results.
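GID3* itself is not publicly distributed, but the induction step it performs can be sketched with any decision-tree learner; here scikit-learn stands in, with invented image attributes and a toy labeling rule:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 4))   # e.g. magnitude, ellipticity, FWHM, surface brightness
y = (X[:, 2] + 0.5 * X[:, 3] > 0).astype(int)   # toy rule: 1 = galaxy, 0 = star
clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(clf.score(X, y))          # training accuracy of the induced tree
```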
Influence of different operating conditions on irrigation uniformity with microperforated tapes
NASA Astrophysics Data System (ADS)
Moreno Pizani, María Alejandra; Jesús Farías Ramírez, Asdrúbal
2013-04-01
Irrigated agriculture is a safe alternative to meet the growing demand for food. Numerous studies show that proper management of localized irrigation can increase crop yields and reduce soil salinization. Therefore, periodic field assessments of irrigation systems are needed in order to optimize the efficiency of irrigation water use, to increase the agricultural area covered by the same amount of water, and to reduce environmental impact. The behavior of microperforated tapes was assessed under different operating conditions, crops, and regions of Venezuela. Evaluations were made on irrigated areas using Santeno® Type I tape with the following crops: banana (Musa sp.), lettuce (Lactuca sativa L.), carrot (Daucus carota L.), and forage sugar cane (Saccharum officinarum). Santeno® Type II tape was used with papaya (Carica papaya L.) and melon (Cucumis melo L.) crops (the last crop using inverted irrigation tape). The procedures used for sampling and for determining the uniformity indices of the system followed a series of adjustments to the methodology proposed by Keller and Karmeli (1975), Deniculi (1980), and De Santa and De Juan (1993), in order to increase the number of observations as a function of irrigation time. The calculated irrigation uniformity indices were as follows: distribution coefficient (UD), uniformity coefficient (CUC), coefficient of variation of flows (CV), and statistical uniformity coefficient (Us). The indices were characterized according to Merrian and Keller (1978), Bralts (1986), Pizarro (1990), and ASAE (1996), respectively. The results showed that the irrigation uniformity of the evaluated systems varied from excellent to unacceptable, mainly due to lack of maintenance and the absence of manometric connectors. Among the findings is the need for technical support to farmers in the installation, management, and maintenance of irrigation systems. In this sense, it is proposed to establish a simple and reliable procedure to evaluate irrigation uniformity in the field, available to farmers and feasible for researchers.
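The four indices named above can be computed directly from catch-can or emitter flow readings; a sketch with hypothetical flows, where UD is taken as the low-quarter distribution uniformity (an assumption about the authors' exact definition):

```python
import numpy as np

q = np.array([1.02, 0.97, 1.10, 0.88, 1.05, 0.93, 1.01, 0.99])  # emitter flows, L/h
mean = q.mean()
cuc = 100 * (1 - np.abs(q - mean).sum() / (len(q) * mean))      # Christiansen CUC
ud = 100 * np.sort(q)[: max(1, len(q) // 4)].mean() / mean      # low-quarter UD
cv = q.std(ddof=1) / mean                                       # coefficient of variation
us = 100 * (1 - cv)                                             # statistical uniformity
print(cuc, ud, cv, us)
```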
Adapting radiotherapy to hypoxic tumours
NASA Astrophysics Data System (ADS)
Malinen, Eirik; Søvik, Åste; Hristov, Dimitre; Bruland, Øyvind S.; Rune Olsen, Dag
2006-10-01
In the current work, the concepts of biologically adapted radiotherapy of hypoxic tumours in a framework encompassing functional tumour imaging, tumour control predictions, inverse treatment planning and intensity modulated radiotherapy (IMRT) were presented. Dynamic contrast enhanced magnetic resonance imaging (DCEMRI) of a spontaneous sarcoma in the nasal region of a dog was employed. The tracer concentration in the tumour was assumed to be related to the oxygen tension and compared to Eppendorf histograph measurements. Based on the pO2-related images derived from the MR analysis, the tumour was divided into four compartments by a segmentation procedure. DICOM structure sets for IMRT planning could be derived thereof. In order to display the possible advantages of non-uniform tumour doses, dose redistribution among the four tumour compartments was introduced. The dose redistribution was constrained by keeping the average dose to the tumour equal to a conventional target dose. The compartmental doses yielding optimum tumour control probability (TCP) were used as input in an inverse planning system, where the planning basis was the pO2-related tumour images from the MR analysis. Uniform (conventional) and non-uniform IMRT plans were scored both physically and biologically. The consequences of random and systematic errors in the compartmental images were evaluated. The normalized frequency distributions of the tracer concentration and the pO2 Eppendorf measurements were not significantly different. 28% of the tumour had, according to the MR analysis, pO2 values of less than 5 mm Hg. The optimum TCP following a non-uniform dose prescription was about four times higher than that following a uniform dose prescription. The non-uniform IMRT dose distribution resulting from the inverse planning gave a three times higher TCP than that of the uniform distribution. The TCP and the dose-based plan quality depended on IMRT parameters defined in the inverse planning procedure (fields and step-and-shoot intensity levels). Simulated random and systematic errors in the pO2-related images reduced the TCP for the non-uniform dose prescription. In conclusion, improved tumour control of hypoxic tumours by dose redistribution may be expected following hypoxia imaging, tumour control predictions, inverse treatment planning and IMRT.
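Why redistribution helps can be sketched with a Poisson TCP model and an oxygen-enhancement penalty on radiosensitivity in the hypoxic compartment (the two compartments, clonogen numbers, alpha, and OER below are all hypothetical; the study used four compartments and its own dose-response parameters):

```python
import numpy as np

def tcp(doses, clonogens, alpha):
    # Poisson TCP: probability that no clonogen survives
    return np.prod(np.exp(-clonogens * np.exp(-alpha * doses)))

n = np.array([1e6, 1e6])             # clonogens: oxic, hypoxic
a = np.array([0.35, 0.35 / 2.5])     # alpha reduced by an assumed OER of 2.5
print(tcp(np.array([70.0, 70.0]), n, a))   # uniform 70 Gy: hypoxic part dominates failure
print(tcp(np.array([55.0, 85.0]), n, a))   # redistributed at the same mean dose: higher TCP
```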
Statistical Distribution Analysis of Lineated Bands on Europa
NASA Astrophysics Data System (ADS)
Chen, T.; Phillips, C. B.; Pappalardo, R. T.
2016-12-01
Europa's surface is covered with intriguing linear and disrupted features, including lineated bands that range in scale and size. Previous studies have shown the possibility of an icy shell at the surface that may be concealing a liquid ocean with the potential to harbor life (Pappalardo et al., 1999). Utilizing the high-resolution imaging data from the Galileo spacecraft, we examined bands through a morphometric and morphologic approach. Greeley et al. (2000) and Prockter et al. (2002) have defined bands as wide, hummocky to lineated features that have distinctive surface texture and albedo compared to the surrounding terrain. We took morphometric measurements of lineated bands to find correlations in properties such as size, location, and orientation, and to shed light on formation models. We will present our measurements of over 100 bands on Europa, mapped on the USGS Europa Global Mosaic Base Map (2002). We also conducted a statistical analysis to understand the distribution of lineated bands globally, and whether the widths of the bands differ by location. Our preliminary analysis from the statistical distribution evaluation, combined with the morphometric measurements, supports a uniform ice shell thickness for Europa rather than one that varies geographically. References: Greeley, Ronald, et al. "Geologic mapping of Europa." Journal of Geophysical Research: Planets 105.E9 (2000): 22559-22578; Pappalardo, R. T., et al. "Does Europa have a subsurface ocean? Evaluation of the geological evidence." Journal of Geophysical Research: Planets 104.E10 (1999): 24015-24055; Prockter, Louise M., et al. "Morphology of Europan bands at high resolution: A mid-ocean ridge-type rift mechanism." Journal of Geophysical Research: Planets 107.E5 (2002); U.S. Geological Survey, 2002, Controlled photomosaic map of Europa, Je 15M CMN: U.S. Geological Survey Geologic Investigations Series I-2757, available at http://pubs.usgs.gov/imap/i2757/
Cervical and Incisal Marginal Discrepancy in Ceramic Laminate Veneering Materials: A SEM Analysis
Ranganathan, Hemalatha; Ganapathy, Dhanraj M.; Jain, Ashish R.
2017-01-01
Context: Marginal discrepancy influenced by the choice of processing material used for ceramic laminate veneers needs to be explored further for better clinical application. Aims: This study aimed to evaluate the amount of cervical and incisal marginal discrepancy associated with different ceramic laminate veneering materials. Settings and Design: This was an experimental, single-blinded, in vitro trial. Subjects and Methods: Ten central incisors were prepared for laminate veneers with 2 mm uniform reduction and a heavy chamfer finish line. Ceramic laminate veneers fabricated over the prepared teeth using four different processing materials were categorized into four groups: Group I - aluminous porcelain veneers, Group II - lithium disilicate ceramic veneers, Group III - lithium disilicate-leucite-based veneers, Group IV - zirconia-based ceramic veneers. The cervical and incisal marginal discrepancy was measured using a scanning electron microscope. Statistical Analysis Used: ANOVA and post hoc Tukey honest significant difference (HSD) tests were used for statistical analysis. Results: The cervical and incisal marginal discrepancies for the four groups were Group I - 114.6 ± 4.3 μm, 132.5 ± 6.5 μm; Group II - 86.1 ± 6.3 μm, 105.4 ± 5.3 μm; Group III - 71.4 ± 4.4 μm, 91.3 ± 4.7 μm; and Group IV - 123.1 ± 4.1 μm, 142.0 ± 5.4 μm. ANOVA and post hoc Tukey HSD tests revealed a statistically significant difference between the four test specimens with regard to cervical marginal discrepancy. The cervical and incisal marginal discrepancies yielded F = 243.408, P < 0.001 and F = 180.844, P < 0.001, respectively. Conclusion: This study concluded that veneers fabricated using leucite-reinforced lithium disilicate exhibited the least marginal discrepancy, followed by lithium disilicate ceramic, aluminous porcelain, and zirconia-based ceramics. The marginal discrepancy was greater in the incisal region than in the cervical region in all groups. PMID:28839415
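A sketch of the ANOVA-plus-Tukey workflow reported above, on synthetic discrepancy readings centered on the four group means (replicate counts and spreads are invented):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
means = {"aluminous": 114.6, "disilicate": 86.1, "leucite": 71.4, "zirconia": 123.1}
data = {g: rng.normal(mu, 5.0, size=10) for g, mu in means.items()}  # hypothetical replicates
print(f_oneway(*data.values()))                  # one-way ANOVA across the four groups
vals = np.concatenate(list(data.values()))
labs = np.repeat(list(data.keys()), 10)
print(pairwise_tukeyhsd(vals, labs))             # post hoc pairwise comparisons
```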
Sukumaran, Jeet; Economo, Evan P; Lacey Knowles, L
2016-05-01
Current statistical biogeographical analysis methods are limited in the ways ecology can be related to the processes of diversification and geographical range evolution, requiring conflation of geography and ecology, and/or assuming ecologies that are uniform across all lineages and invariant in time. This precludes the possibility of studying a broad class of macroevolutionary biogeographical theories that relate geographical and species histories through lineage-specific ecological and evolutionary dynamics, such as taxon cycle theory. Here we present a new model that generates phylogenies under a complex of superpositioned geographical range evolution, trait evolution, and diversification processes that can communicate with each other. We present a likelihood-free method of inference under our model using discriminant analysis of principal components of summary statistics calculated on phylogenies, with the discriminant functions trained on data generated by simulations under our model. This approach of model selection by classification of empirical data with respect to data generated under training models is shown to be efficient and robust, and it performs well over a broad range of parameter space defined by the relative rates of dispersal, trait evolution, and diversification processes. We apply our method to a case study of the taxon cycle, that is, testing for habitat and trophic-level constraints in the dispersal regimes of the Wallacean avifaunal radiation.
Maritime Tactical Command and Control Analysis of Alternatives
2016-01-01
JIIM joint, interagency, intergovernmental, and multinational LCC life-cycle cost MANA Map Aware Non-Uniform Automata MDA milestone decision authority...Map Aware Non-Uniform Automata (MANA), a combat and C4I, surveillance, and reconnaissance model developed by the New Zealand Defence Technology
Improvement of a plasma uniformity of the 2nd ion source of KSTAR neutral beam injector.
Jeong, S H; Kim, T S; Lee, K W; Chang, D H; In, S R; Bae, Y S
2014-02-01
The 2nd ion source of the KSTAR (Korea Superconducting Tokamak Advanced Research) NBI (Neutral Beam Injector) has been developed and operated since last year. A calorimetric analysis revealed that the heat load on the back plate of this ion source is higher than that of the 1st ion source of the KSTAR NBI, and the spatial plasma uniformity of the ion source is poor. Therefore, we sought to identify factors affecting the uniformity of the plasma density and to improve it. We estimated the effects of the filament current direction and the magnetic field configuration of the plasma generator on the plasma uniformity. We also verified that the operating conditions of an ion source can change the uniformity of its plasma density.
Effects of beam irregularity on uniform scanning
NASA Astrophysics Data System (ADS)
Kim, Chang Hyeuk; Jang, Sea duk; Yang, Tae-Keun
2016-09-01
An active scanning beam delivery method has many advantages in particle beam applications. For the beam to be successfully delivered to the target volume using the active scanning technique, the dose uniformity must be considered and should be within 2.5% in the case of therapy applications. During beam irradiation, many beam parameters affect the 2-dimensional uniformity at the target layer. A basic assumption in the beam irradiation planning stage is that the shape of the beam is symmetric and follows a Gaussian distribution. In this study, a pure Gaussian-shaped beam distribution was distorted by adding a parasitic Gaussian distribution. An appropriate uniform scanning condition was deduced from a quantitative analysis based on the gamma value of the distorted beam and the 2-dimensional uniformity.
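A compact 1D version of the gamma evaluation used above, comparing a reference lateral profile against a distorted one (criteria, profiles, and the 2% ripple are hypothetical; clinical gamma analysis is 2D/3D):

```python
import numpy as np

def gamma_1d(x, d_ref, d_eval, dta=3.0, dd=0.025):
    """Global gamma index: dta in mm, dd as a fraction of the reference maximum."""
    g = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dist2 = ((x - xi) / dta) ** 2
        dose2 = ((d_eval - di) / (dd * d_ref.max())) ** 2
        g[i] = np.sqrt((dist2 + dose2).min())
    return g

x = np.linspace(-50, 50, 201)               # lateral position, mm
ref = np.exp(-(x / 30.0) ** 4)              # idealized flat scanned field
ev = ref * (1 + 0.02 * np.sin(x / 5.0))     # distorted beam, hypothetical ripple
print((gamma_1d(x, ref, ev) <= 1).mean())   # gamma pass rate (1.0 = all points pass)
```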
Simultaneous Use of Multiple Answer Copying Indexes to Improve Detection Rates
ERIC Educational Resources Information Center
Wollack, James A.
2006-01-01
Many of the currently available statistical indexes to detect answer copying lack sufficient power at small [alpha] levels or when the amount of copying is relatively small. Furthermore, there is no one index that is uniformly best. Depending on the type or amount of copying, certain indexes are better than others. The purpose of this article was…
Comparing The Effectiveness of a90/95 Calculations (Preprint)
2006-09-01
Nachtsheim, John Neter, William Li, Applied Linear Statistical Models, 5th ed., McGraw-Hill/Irwin, 2005. 5. Mood, Graybill and Boes, Introduction... curves is based on methods that are only valid for ordinary linear regression. Requirements for a valid ordinary least-squares regression model: There... linear. For example, ... is a linear model; ... is not. 2. Uniform variance (homoscedasticity
Generation of excited coherent states for a charged particle in a uniform magnetic field
NASA Astrophysics Data System (ADS)
Mojaveri, B.; Dehghani, A.
2015-04-01
We introduce excited coherent states, |β, α; n⟩ := a†ⁿ|β, α⟩, where n is an integer and the states |β, α⟩ denote the coherent states of a charged particle in a uniform magnetic field. The states |β, α⟩ minimize the Schrödinger-Robertson uncertainty relation while having nonclassical properties. It is shown that the resolution of identity condition is realized with respect to an appropriate measure on the complex plane. Some of the nonclassical features, such as sub-Poissonian statistics and quadrature squeezing, of these states are investigated. Our results are compared with the similar Agarwal-type photon-added coherent states (PACSs), and it is shown that, while the photon-counting statistics of |β, α; n⟩ are the same as those of PACSs, their squeezing properties are different. It is also shown that for large values of |β|, while they are squeezed, they minimize the uncertainty condition. Additionally, it is demonstrated that by changing the magnitude of the external magnetic field, B_ext, the squeezing effect is transferred from one quadrature component to the other. Finally, a new scheme is proposed to generate the states |β, α; n⟩ in cavities.
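The sub-Poissonian statistics mentioned above are conventionally diagnosed with the Mandel Q parameter,

Q = \frac{\langle n^2 \rangle - \langle n \rangle^2 - \langle n \rangle}{\langle n \rangle},

where Q = 0 for a Poissonian (coherent) state and Q < 0 signals sub-Poissonian photon-counting statistics of the kind reported for these states.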
A synoptic view of the Third Uniform California Earthquake Rupture Forecast (UCERF3)
Field, Edward; Jordan, Thomas H.; Page, Morgan T.; Milner, Kevin R.; Shaw, Bruce E.; Dawson, Timothy E.; Biasi, Glenn; Parsons, Thomas E.; Hardebeck, Jeanne L.; Michael, Andrew J.; Weldon, Ray; Powers, Peter; Johnson, Kaj M.; Zeng, Yuehua; Bird, Peter; Felzer, Karen; van der Elst, Nicholas; Madden, Christopher; Arrowsmith, Ramon; Werner, Maximillan J.; Thatcher, Wayne R.
2017-01-01
Probabilistic forecasting of earthquake‐producing fault ruptures informs all major decisions aimed at reducing seismic risk and improving earthquake resilience. Earthquake forecasting models rely on two scales of hazard evolution: long‐term (decades to centuries) probabilities of fault rupture, constrained by stress renewal statistics, and short‐term (hours to years) probabilities of distributed seismicity, constrained by earthquake‐clustering statistics. Comprehensive datasets on both hazard scales have been integrated into the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3). UCERF3 is the first model to provide self‐consistent rupture probabilities over forecasting intervals from less than an hour to more than a century, and it is the first capable of evaluating the short‐term hazards that result from multievent sequences of complex faulting. This article gives an overview of UCERF3, illustrates the short‐term probabilities with aftershock scenarios, and draws some valuable scientific conclusions from the modeling results. In particular, seismic, geologic, and geodetic data, when combined in the UCERF3 framework, reject two types of fault‐based models: long‐term forecasts constrained to have local Gutenberg–Richter scaling, and short‐term forecasts that lack stress relaxation by elastic rebound.
Stress Analysis of Composite Cylindrical Shells with an Elliptical Cutout
NASA Technical Reports Server (NTRS)
Oterkus, E.; Madenci, E.; Nemeth, M. P.
2007-01-01
A special-purpose, semi-analytical solution method for determining the stress and deformation fields in a thin laminated-composite cylindrical shell with an elliptical cutout is presented. The analysis includes the effects of cutout size, shape, and orientation; non-uniform wall thickness; oval-cross-section eccentricity; and loading conditions. The loading conditions include uniform tension, uniform torsion, and pure bending. The analysis approach is based on the principle of stationary potential energy and uses Lagrange multipliers to relax the kinematic admissibility requirements on the displacement representations through the use of idealized elastic edge restraints. Specifying appropriate stiffness values for the elastic extensional and rotational edge restraints (springs) allows the imposition of the kinematic boundary conditions in an indirect manner, which enables the use of a broader set of functions for representing the displacement fields. Selected results of parametric studies are presented for several geometric parameters that demonstrate that the analysis approach is a powerful means for developing design criteria for laminated-composite shells.
Set statistics in conductive bridge random access memory device with Cu/HfO₂/Pt structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Meiyun; Long, Shibing, E-mail: longshibing@ime.ac.cn; Wang, Guoming
2014-11-10
The switching parameter variation of resistive switching memory is one of the most important challenges in its application. In this letter, we have studied the set statistics of conductive bridge random access memory with a Cu/HfO₂/Pt structure. The experimental distributions of the set parameters in several off-resistance ranges are shown to fit a Weibull model nicely. The Weibull slopes of the set voltage and current increase and decrease logarithmically with off resistance, respectively. This experimental behavior is perfectly captured by a Monte Carlo simulator based on the cell-based set voltage statistics model and the quantum point contact electron transport model. Our work provides indications for the improvement of the switching uniformity.
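The Weibull analysis of set parameters lends itself to a compact numerical illustration. The sketch below is not the paper's procedure: it fits synthetic set-voltage samples (hypothetical shape and scale values) with scipy's weibull_min, both by maximum likelihood and graphically via the linearized Weibull plot, whose slope is the "Weibull slope" referred to above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic set-voltage samples for one off-resistance range (hypothetical
# shape and scale; the paper's data are not reproduced here).
v_set = stats.weibull_min.rvs(c=4.0, scale=0.55, size=200, random_state=rng)

# Maximum-likelihood Weibull fit with the location fixed at zero.
shape, loc, scale = stats.weibull_min.fit(v_set, floc=0.0)
print(f"Weibull slope (shape) = {shape:.2f}, scale = {scale:.3f} V")

# Weibull probability plot: ln(-ln(1-F)) vs ln(V) is linear with slope
# equal to the Weibull shape parameter (median-rank plotting positions).
v_sorted = np.sort(v_set)
F = (np.arange(1, v_sorted.size + 1) - 0.3) / (v_sorted.size + 0.4)
slope, intercept = np.polyfit(np.log(v_sorted), np.log(-np.log(1.0 - F)), 1)
print(f"graphical Weibull slope = {slope:.2f}")
```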
Heterogeneity, histological features and DNA ploidy in oral carcinoma by image-based analysis.
Diwakar, N; Sperandio, M; Sherriff, M; Brown, A; Odell, E W
2005-04-01
Oral squamous carcinomas appear heterogeneous on DNA ploidy analysis. However, this may be partly a result of sample dilution or the detection limit of techniques. The aim of this study was to determine whether oral squamous carcinomas are heterogeneous for ploidy status using image-based ploidy analysis and to determine whether ploidy status correlates with histological parameters. Multiple samples from 42 oral squamous carcinomas were analysed for DNA ploidy using an image-based system and scored for histological parameters. 22 were uniformly aneuploid, 1 uniformly tetraploid and 3 uniformly diploid. 16 appeared heterogeneous but only 8 appeared to be genuinely heterogeneous when minor ploidy histogram peaks were taken into account. Ploidy was closely related to nuclear pleomorphism but not differentiation. Sample variation, detection limits and diagnostic criteria account for much of the ploidy heterogeneity observed. Confident diagnosis of diploid status in an oral squamous cell carcinoma requires a minimum of 5 samples.
The Necessity of the Hippocampus for Statistical Learning
Covington, Natalie V.; Brown-Schmidt, Sarah; Duff, Melissa C.
2018-01-01
Converging evidence points to a role for the hippocampus in statistical learning, but open questions about its necessity remain. Evidence for necessity comes from Schapiro and colleagues, who report that a single patient with damage to the hippocampus and broader medial temporal lobe cortex was unable to discriminate new from old sequences in several statistical learning tasks. The aim of the current study was to replicate these methods in a larger group of patients who have either damage localized to the hippocampus or broader medial temporal lobe damage, to ascertain the necessity of the hippocampus in statistical learning. Patients with hippocampal damage consistently showed less learning overall compared with healthy comparison participants, consistent with an emerging consensus for hippocampal contributions to statistical learning. Interestingly, lesion size did not reliably predict performance. However, patients with hippocampal damage were not uniformly at chance and demonstrated above-chance performance in some task variants. These results suggest that the hippocampus is necessary for the statistical learning levels achieved by most healthy comparison participants, but significant hippocampal pathology alone does not abolish such learning. PMID:29308986
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widesott, Lamberto, E-mail: widesott@yahoo.it; Pierelli, Alessio; Fiorino, Claudio
2011-08-01
Purpose: To compare intensity-modulated proton therapy (IMPT) and helical tomotherapy (HT) treatment plans for high-risk prostate cancer (HRPCa) patients. Methods and Materials: The plans of 8 patients with HRPCa treated with HT were compared with IMPT plans with a two quasi-lateral field setup (−100°; 100°) optimized with the Hyperion treatment planning system. Both techniques were optimized to simultaneously deliver 74.2 Gy/Gy relative biologic effectiveness (RBE) in 28 fractions on planning target volumes (PTVs) 3-4 (P + proximal seminal vesicles), 65.5 Gy/Gy(RBE) on PTV2 (distal seminal vesicles and rectum/prostate overlap), and 51.8 Gy/Gy(RBE) to PTV1 (pelvic lymph nodes). Normal tissue complication probability (NTCP) calculations were performed for the rectum, and the generalized equivalent uniform dose (gEUD) was estimated for the bowel cavity, penile bulb, and bladder. Results: Slightly better PTV coverage and homogeneity of the target dose distribution were found with IMPT: the percentage of PTV volume receiving ≥95% of the prescribed dose (V95%) was on average >97% in HT and >99% in IMPT. The conformity indexes were significantly lower for protons than for photons, and there was a statistically significant reduction of the IMPT dosimetric parameters, up to 50 Gy/Gy(RBE) for the rectum and bowel and 60 Gy/Gy(RBE) for the bladder. The NTCP values for the rectum were higher in HT for all the sets of parameters, but the gain was small and in only a few cases statistically significant. Conclusions: Comparable PTV coverage was observed. Based on NTCP calculation, IMPT is expected to allow a small reduction in rectal toxicity, and a significant dosimetric gain with IMPT, both in the medium-dose and in the low-dose range in all OARs, was observed.
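The gEUD used above reduces to a one-line formula over the dose-volume histogram, gEUD = (Σᵢ vᵢ Dᵢᵃ)^(1/a). A minimal sketch follows, with a hypothetical three-bin DVH and an a-value in the range often quoted for serial organs such as the rectum; neither is taken from this study.

```python
import numpy as np

def geud(dose_bins, volume_fractions, a):
    """Generalized equivalent uniform dose from a differential DVH.

    a = 1 gives mean dose; large positive a approaches maximum dose
    (serial organs); negative a approaches minimum dose (parallel organs).
    """
    d = np.asarray(dose_bins, dtype=float)
    v = np.asarray(volume_fractions, dtype=float)   # must sum to 1
    return np.sum(v * d**a) ** (1.0 / a)

# Hypothetical rectum DVH: 60% of volume at 20 Gy, 30% at 50 Gy, 10% at 70 Gy.
d, v = [20.0, 50.0, 70.0], [0.6, 0.3, 0.1]
print(f"mean dose    = {geud(d, v, 1):.1f} Gy")
print(f"gEUD (a = 8) = {geud(d, v, 8):.1f} Gy")  # a ~ 8: illustrative serial-organ value
```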
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brady, S; Shulkin, B
Purpose: To develop ultra-low dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultra-low doses (10–35 mAs). CT quantitation: noise, low-contrast resolution, and CT numbers for eleven tissue substitutes were analyzed in-phantom. CT quantitation was analyzed down to a 90% reduction in CTDIvol (0.39/3.64 mGy) radiation dose from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation organ dose, as derived from patient exam size-specific dose estimate (SSDE), was converted to effective dose using the standard ICRP report 103 method. Effective dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative patient population dose reduction and noise control. Results: CT numbers were constant to within 10% from the non-dose-reduced CTAC image down to 90% dose reduction. No change was found in SUVbw, background percent uniformity, or spatial resolution for PET images reconstructed with ASiR-based CTAC protocols down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3−0.9/6.2 mSv). Noise magnitude in dose-reduced patient images increased but was not statistically different from pre-dose-reduced patient images. Conclusion: Using ASiR allowed aggressive reduction in CTAC dose with no change in PET reconstructed images while maintaining sufficient image quality for co-localization of hybrid CT anatomy and PET radioisotope uptake.
Type-curve estimation of statistical heterogeneity
NASA Astrophysics Data System (ADS)
Neuman, Shlomo P.; Guadagnini, Alberto; Riva, Monica
2004-04-01
The analysis of pumping tests has traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. We explore numerically the feasibility of using a simple graphical approach (without numerical inversion) to estimate the geometric mean, integral scale, and variance of local log transmissivity on the basis of quasi steady state head data when a randomly heterogeneous confined aquifer is pumped at a constant rate. By local log transmissivity we mean a function varying randomly over horizontal distances that are small in comparison with a characteristic spacing between pumping and observation wells during a test. Experimental evidence and hydrogeologic scaling theory suggest that such a function would tend to exhibit an integral scale well below the maximum well spacing. This is in contrast to equivalent transmissivities derived from pumping tests by treating the aquifer as being locally uniform (on the scale of each test), which tend to exhibit regional-scale spatial correlations. We show that whereas the mean and integral scale of local log transmissivity can be estimated reasonably well based on theoretical ensemble mean variations of head and drawdown with radial distance from a pumping well, estimating the log transmissivity variance is more difficult. We obtain reasonable estimates of the latter based on theoretical variation of the standard deviation of circumferentially averaged drawdown about its mean.
Bite Protection Analysis of Permethrin-Treated US Military Combat Uniforms
USDA-ARS?s Scientific Manuscript database
Historically, casualties from diseases have greatly outnumbered those from combat during military operations. Since 1951, US military combat uniforms have been chemically treated to protect personnel from arthropod attack. In the 1970s and 1980s, permethrin was one of several insecticides evaluate...
Bite protection analysis of permethrin-treated U.S. Military uniforms
USDA-ARS?s Scientific Manuscript database
Historically, combat casualties from diseases have greatly outnumbered battle injuries received from actual combat during military operations. Since 1951, United States military combat uniforms have been treated with insecticides to protect personnel from arthropod attack. In the 1970s and 1980s,...
Qin, Zong; Ji, Chuangang; Wang, Kai; Liu, Sheng
2012-10-08
In this paper, the condition for uniform lighting generated by a light-emitting diode (LED) array was systematically studied. To take the human vision effect into consideration, the contrast sensitivity function (CSF) was adopted as the criterion for uniform lighting in place of the conventionally used Sparrow's Criterion (SC). Through the CSF method, design parameters including system thickness, LED pitch, the LED's spatial radiation distribution, and the viewing condition can be combined analytically. In a specific LED array lighting system (LALS) with a foursquare LED arrangement, different types of LEDs (Lambertian and batwing type), and a given viewing condition, optimum system thicknesses and LED pitches were calculated and compared with those obtained through the SC method. Results show that the CSF method achieves more appropriate optimum parameters than the SC method. Additionally, an anomalous phenomenon, in which uniformity varies non-monotonically with structural parameters in an LALS with non-Lambertian LEDs, was found and analyzed. Based on the analysis, a design method for LALS that brings about better practicability, lower cost, and a more attractive appearance was summarized.
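For intuition about how such a design study is set up, target-plane illuminance for an LED array can be modeled by summing generalized Lambertian contributions, E ∝ h^(m+1)/r^(m+3) per LED. The sketch below uses hypothetical array geometry and evaluates a simple min/max uniformity over the central region; it does not implement the CSF criterion itself.

```python
import numpy as np

def illuminance(gx, gy, led_xy, h, m=1.0):
    """Illuminance on a plane at height h above an LED array.

    Each LED has a generalized Lambertian pattern I(theta) ∝ cos^m(theta);
    its contribution at a target point is h**(m+1) / r**(m+3), up to a
    constant, where r is the LED-to-point distance.
    """
    e = np.zeros_like(gx)
    for x0, y0 in led_xy:
        r2 = (gx - x0) ** 2 + (gy - y0) ** 2 + h**2
        e += h ** (m + 1) / r2 ** ((m + 3) / 2.0)
    return e

p, h = 10.0, 8.0                              # pitch, thickness (mm); illustrative
leds = [(i * p, j * p) for i in range(5) for j in range(5)]  # foursquare 5x5
x = np.linspace(p, 3.0 * p, 200)              # central region of the array
gx, gy = np.meshgrid(x, x)
e = illuminance(gx, gy, leds, h)
print(f"uniformity (min/max) = {e.min() / e.max():.3f}")
```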
Large Eddy Simulations of a Bottom Boundary Layer Under a Shallow Geostrophic Front
NASA Astrophysics Data System (ADS)
Bateman, S. P.; Simeonov, J.; Calantoni, J.
2017-12-01
The unstratified surf zone and the stratified shelf waters are often separated by dynamic fronts that can strongly impact the character of the Ekman bottom boundary layer. Here, we use large eddy simulations to study the turbulent bottom boundary layer associated with a geostrophic current on a stratified shelf of uniform depth. The simulations are initialized with a spatially uniform vertical shear that is in geostrophic balance with a pressure gradient due to a linear horizontal temperature variation. Superposed on the temperature front is a stable vertical temperature gradient. As turbulence develops near the bottom, the turbulence-induced mixing gradually erodes the initial uniform temperature stratification and a well-mixed layer grows in height until the turbulence becomes fully developed. The simulations provide the spatial distribution of the turbulent dissipation and the Reynolds stresses in the fully developed boundary layer. We vary the initial linear stratification and investigate its effect on the height of the bottom boundary layer and the turbulence statistics. The results are compared to previous models and simulations of stratified bottom Ekman layers.
Aligned metal absorbers and the ultraviolet background at the end of reionization
NASA Astrophysics Data System (ADS)
Doughty, Caitlin; Finlator, Kristian; Oppenheimer, Benjamin D.; Davé, Romeel; Zackrisson, Erik
2018-04-01
We use observations of spatially aligned C II, C IV, Si II, Si IV, and O I absorbers to probe the slope and intensity of the ultraviolet background (UVB) at z ∼ 6. We accomplish this by comparing observations with predictions from a cosmological hydrodynamic simulation using three trial UVBs applied in post-processing: a spectrally soft, fluctuating UVB calculated using multifrequency radiative transfer; a soft, spatially uniform UVB; and a hard, spatially uniform `quasars-only' model. When considering our paired high-ionization absorbers (C IV/Si IV), the observed statistics strongly prefer the hard, spatially uniform UVB. This echoes recent findings that cosmological simulations generically underproduce strong C IV absorbers at z > 5. A single low/high-ionization pair (Si II/Si IV), by contrast, shows a preference for the HM12 UVB, whereas two more (C II/C IV and O I/C IV) show no preference for any of the three UVBs. Despite this, future observations of specific absorbers, particularly Si IV/C IV, with next-generation telescopes probing to lower column densities should yield tighter constraints on the UVB.
NASA Astrophysics Data System (ADS)
Ortiz-Jaramillo, B.; Fandiño Toro, H. A.; Benitez-Restrepo, H. D.; Orjuela-Vargas, S. A.; Castellanos-Domínguez, G.; Philips, W.
2012-03-01
Infrared Non-Destructive Testing (INDT) is known as an effective and rapid method for nondestructive inspection. It can detect a broad range of near-surface structural flaws in metallic and composite components. Those flaws are modeled as smooth contours centered at peaks of stored thermal energy, termed Regions of Interest (ROIs), and dedicated methodologies must detect their presence. In this paper, we present a methodology for ROI extraction in INDT tasks based on multi-resolution analysis, which is robust to low ROI contrast and to the non-uniform heating that affects low spatial frequencies and hinders the detection of relevant points in the image. The methodology comprises local correlation, Gaussian scale analysis, and local edge detection. Local correlation between the image and a Gaussian window provides interest points related to ROIs; a Gaussian window is used because thermal behavior is well modeled by Gaussian smooth contours. The Gaussian scale is then used to analyze details in the image through multi-resolution analysis, avoiding low contrast, non-uniform heating, and the need to select the Gaussian window size. Finally, local edge detection provides a good estimate of the ROI boundaries. The resulting methodology performs as well as or better than the dedicated algorithms proposed in the state of the art.
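A rough stand-in for the described pipeline, using scipy.ndimage rather than the authors' implementation: very coarse Gaussian smoothing estimates and removes the non-uniform heating background, Gaussian responses at several scales emulate correlation with a Gaussian window, and local maxima above a threshold become candidate ROI points. All parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage

def detect_rois(thermal_img, scales=(2, 4, 8), thresh=2.0):
    """Multi-scale interest-point detection on a thermogram.

    A very coarse Gaussian estimates the non-uniform heating background;
    Gaussian responses at several scales stand in for correlation with a
    Gaussian window; local maxima exceeding mean + thresh*std survive.
    """
    img = thermal_img.astype(float)
    background = ndimage.gaussian_filter(img, sigma=4 * max(scales))
    flat = img - background                   # suppress low spatial frequencies
    points = np.zeros(flat.shape, dtype=bool)
    for s in scales:
        resp = ndimage.gaussian_filter(flat, sigma=s)
        peaks = resp == ndimage.maximum_filter(resp, size=2 * s + 1)
        points |= peaks & (resp > resp.mean() + thresh * resp.std())
    return np.argwhere(points)

# Synthetic thermogram: heating gradient plus two Gaussian "flaws".
y, x = np.mgrid[0:128, 0:128].astype(float)
img = (0.02 * x + np.exp(-((x - 40) ** 2 + (y - 40) ** 2) / 50.0)
       + np.exp(-((x - 90) ** 2 + (y - 80) ** 2) / 20.0))
print(detect_rois(img))
```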
Poincaré recurrence statistics as an indicator of chaos synchronization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boev, Yaroslav I., E-mail: boev.yaroslav@gmail.com; Vadivasova, Tatiana E., E-mail: vadivasovate@yandex.ru; Anishchenko, Vadim S., E-mail: wadim@info.sgu.ru
The dynamics of the autonomous and non-autonomous Rössler system is studied using the Poincaré recurrence time statistics. It is shown that the probability distribution density of Poincaré recurrences represents a set of equidistant peaks with the distance that is equal to the oscillation period and the envelope obeys an exponential distribution. The dimension of the spatially uniform Rössler attractor is estimated using Poincaré recurrence times. The mean Poincaré recurrence time in the non-autonomous Rössler system is locked by the external frequency, and this enables us to detect the effect of phase-frequency synchronization.
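Poincaré recurrence statistics of this kind can be estimated directly from a simulated trajectory. The sketch below integrates the Rössler system (standard parameters a = b = 0.2, c = 5.7, an assumption since the abstract does not list them) and collects recurrence times to a small ball on the attractor.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, s, a=0.2, b=0.2, c=5.7):
    x, y, z = s
    return [-y - z, x + a * y, b + z * (x - c)]

# Integrate well past the transient and sample densely in time.
t = np.linspace(0.0, 1000.0, 200000)
sol = solve_ivp(rossler, (0.0, 1000.0), [1.0, 1.0, 0.0], t_eval=t, rtol=1e-8)
traj, times = sol.y.T[50000:], t[50000:]

# Recurrence times to an eps-ball around a reference state on the attractor.
ref, eps = traj[0], 1.0
inside = np.linalg.norm(traj - ref, axis=1) < eps
entries = times[np.flatnonzero(~inside[:-1] & inside[1:]) + 1]
recurrence = np.diff(entries)
print(f"{recurrence.size} recurrences, mean = {recurrence.mean():.2f}")
# A histogram of `recurrence` shows peaks spaced by the oscillation period
# (~6 time units here) under a roughly exponential envelope, as described.
```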
On the determination of certain astronomical, selenodesic, and gravitational parameters of the moon
NASA Technical Reports Server (NTRS)
Aleksashin, Y. P.; Ziman, Y. L.; Isavnina, I. V.; Krasikov, V. A.; Nepoklonov, B. V.; Rodionov, B. N.; Tischenko, A. P.
1974-01-01
A method was examined for joint construction of a selenocentric fundamental system, which can be realized by a coordinate catalog of reference contour points uniformly positioned over the entire lunar surface, and determination of the parameters characterizing the gravitational field, rotation, and orbital motion of the moon. Characteristic of the problem formulation is the introduction of a new complex of iconometric measurements which can be made using pictures obtained from an artificial lunar satellite. The proposed method can be used to solve similar problems on any other planet for which surface images can be obtained from a spacecraft. Characteristic of the proposed technique for solving the problem is the joint statistical analysis of all forms of measurements: orbital iconometric, earth-based trajectory, and also a priori information on the parameters in question which is known from earth-based astronomical studies.
A Catalog of Averaged Magnetic Curves
NASA Astrophysics Data System (ADS)
Bychkov, V. D.; Bychkova, L. V.; Madej, J.
2017-06-01
The second version of the catalog contains information about 275 stars of different types. Since the first catalog was created, the situation has fundamentally changed, primarily due to a significant increase in the accuracy of magnetic field (MF) measurements. Up to now, global magnetic fields have been discovered and measured in stars of many types, and their behavior has been partially studied. The magnetic behavior of Ap/Bp stars has been studied most thoroughly; the catalog contains data on 182 such objects. The main goals of the catalog are: 1) to review and summarize our knowledge of the magnetic behavior of stars of different types; 2) to present and process all data uniformly, which will allow statistical analysis of the variability of (longitudinal) stellar magnetic fields; 3) to present the data in the form most convenient for testing different theoretical models; 4) to support the development of observational programs.
Self-organization of cosmic radiation pressure instability
NASA Technical Reports Server (NTRS)
Hogan, Craig J.
1991-01-01
Under some circumstances the absorption of radiation momentum by an absorbing medium opens the possibility of a dynamical instability, sometimes called 'mock gravity'. Here, a simplified abstract model is studied in which the radiation source is assumed to remain spatially uniform, there is no reabsorption or reradiated light, and no forces other than radiative pressure act on the absorbing medium. It is shown that this model displays the unique feature of being not only unstable, but also self-organizing. The structure approaches a statistical dynamical steady state which is almost independent of initial conditions. In this saturated state the absorbers are concentrated in thin walls around empty bubbles; as the instability develops the big bubbles get bigger and the small ones get crushed and disappear. A linear analysis shows that to first order the thin walls are indeed stable structures. It is speculated that this instability may play a role in forming cosmic large-scale structure.
Theoretical and observational analysis of spacecraft fields
NASA Technical Reports Server (NTRS)
Neubauer, F. M.; Schatten, K. H.
1972-01-01
In order to investigate the nondipolar contributions of spacecraft magnetic fields, a simple magnetic field model is proposed. This model consists of randomly oriented dipoles in a given volume. Two sets of formulas are presented which give the rms multipole field components, for isotropic orientations of the dipoles at given positions and for isotropic orientations of the dipoles distributed uniformly throughout a cube or sphere. The statistical results for an 8 cu m cube, together with individual examples computed numerically, show the following features: Beyond about 2 to 3 m distance from the center of the cube, the field is dominated by an equivalent dipole. The magnitude of the magnetic moment of the dipolar part is approximated by an expression for equal magnetic moments or, generally, by the Pythagorean sum of the dipole moments. The radial component is generally greater than either of the transverse components for the dipole portion as well as for the nondipolar field contributions.
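The random-dipole model is straightforward to Monte Carlo. The sketch below uses the standard point-dipole field and illustrative moment magnitudes to estimate the rms field versus distance from an 8 cu m cube of isotropically oriented dipoles; the r⁻³ falloff at a few meters reflects the dominance of the equivalent dipole noted above.

```python
import numpy as np

rng = np.random.default_rng(1)

def dipole_field(m, r):
    """Field of a point dipole m at displacement r (SI; mu0/4pi = 1e-7)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return 1e-7 * (3.0 * np.dot(m, rhat) * rhat - m) / rn**3

n_dipoles, n_trials, moment = 20, 1000, 0.1   # illustrative magnitudes (A m^2)
for dist in (2.0, 3.0, 4.0, 8.0):             # observation distance (m)
    obs = np.array([dist, 0.0, 0.0])
    b2 = 0.0
    for _ in range(n_trials):
        pos = rng.uniform(-1.0, 1.0, (n_dipoles, 3))          # 8 m^3 cube
        mdir = rng.normal(size=(n_dipoles, 3))
        mdir /= np.linalg.norm(mdir, axis=1, keepdims=True)   # isotropic
        b = sum(dipole_field(moment * d, obs - p) for p, d in zip(pos, mdir))
        b2 += np.dot(b, b)
    print(f"r = {dist:3.0f} m: rms |B| = {np.sqrt(b2 / n_trials):.3e} T")
# The rms field falls off close to r^-3: an equivalent dipole dominates.
```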
Re-Evaluation of Event Correlations in Virtual California Using Statistical Analysis
NASA Astrophysics Data System (ADS)
Glasscoe, M. T.; Heflin, M. B.; Granat, R. A.; Yikilmaz, M. B.; Heien, E.; Rundle, J.; Donnellan, A.
2010-12-01
Fusing the results of simulation tools with statistical analysis methods has contributed to our better understanding of the earthquake process. In a previous study, we used a statistical method to investigate emergent phenomena in data produced by the Virtual California earthquake simulator. The analysis indicated that there were some interesting fault interactions and possible triggering and quiescence relationships between events. We have converted the original code from Matlab to python/C++ and are now evaluating data from the most recent version of Virtual California in order to analyze and compare any new behavior exhibited by the model. The Virtual California earthquake simulator can be used to study fault and stress interaction scenarios for realistic California earthquakes. The simulation generates a synthetic earthquake catalog of events with a minimum size of ~M 5.8 that can be evaluated using statistical analysis methods. Virtual California utilizes realistic fault geometries and a simple Amontons-Coulomb stick-slip friction law in order to drive the earthquake process by means of a back-slip model, where loading of each segment occurs due to the accumulation of a slip deficit at the prescribed slip rate of the segment. Like any complex system, Virtual California may generate emergent phenomena unexpected even by its designers. In order to investigate this, we have developed a statistical method that analyzes the interaction between Virtual California fault elements and thereby determines whether events on any given fault elements show correlated behavior. Our method examines events on one fault element and then determines whether there is an associated event within a specified time window on a second fault element. Note that an event in our analysis is defined as any time an element slips, rather than any particular "earthquake" along the entire fault length. Results are tabulated and then differenced with an expected correlation, calculated by assuming a uniform distribution of events in time. We generate a correlation score matrix, which indicates how weakly or strongly correlated each fault element is to every other over the course of the VC simulation. We calculate correlation scores by summing the difference between the actual and expected correlations over all time window lengths and normalizing by the time window size. The correlation score matrix can focus attention on the most interesting areas for more in-depth analysis of event correlation vs. time. The previous study included 59 faults (639 elements) in the model, comprising all the faults save the creeping section of the San Andreas, and the analysis spanned 40,000 yrs of Virtual California-generated earthquake data. The newly revised VC model includes 70 faults, 8720 fault elements, and spans 110,000 years. Due to computational considerations, we will evaluate the elements comprising the southern California region, which our previous study indicated showed interesting fault interaction and event triggering/quiescence relationships.
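A minimal sketch of the windowed correlation score on a synthetic two-element catalog, under the reading of the method given above (observed within-window pair counts minus the uniform-rate expectation, summed over window lengths and normalized by window size); details of the actual Virtual California analysis may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def correlation_score(t_a, t_b, span, windows):
    """Excess of B-events within +/-w of each A-event, summed over window
    lengths w and normalized by w, relative to a uniform-in-time model."""
    score = 0.0
    for w in windows:
        observed = sum(np.sum(np.abs(t_b - t) <= w) for t in t_a)
        expected = len(t_a) * len(t_b) * (2.0 * w / span)   # uniform rate
        score += (observed - expected) / w
    return score

span = 40000.0                                   # catalog length, years
t_a = np.sort(rng.uniform(0.0, span, 300))       # slip times on element A
# Element B: half of its events triggered shortly after A-events.
t_b = np.sort(np.concatenate([
    rng.choice(t_a, 150) + rng.uniform(0.0, 5.0, 150),
    rng.uniform(0.0, span, 150),
]))
windows = [1.0, 2.0, 5.0, 10.0]
t_null = np.sort(rng.uniform(0.0, span, 300))
print(f"A->B score: {correlation_score(t_a, t_b, span, windows):8.1f}")
print(f"null score: {correlation_score(t_a, t_null, span, windows):8.1f}")
```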
NASA Astrophysics Data System (ADS)
Yan, Jin; Song, Xiao; Gong, Guanghong
2016-02-01
We describe a metric, the averaged ratio between complementary profiles, to represent the distortion of map projections and the shape regularity of spherical cells derived from map projections or non-map-projection methods. The properties and statistical characteristics of our metric are investigated. Our metric (1) is numerically equivalent to both the scale component and the angular deformation component of the Tissot indicatrix, and avoids the breakdown of the Tissot indicatrix and its derived differential calculus when evaluating non-map-projection based tessellations for which mathematical formulae do not exist (e.g., direct spherical subdivisions); (2) exhibits simplicity (neither differential nor integral calculus) and uniformity in the form of its calculation; (3) requires low computational cost, while maintaining high correlation with the results of differential calculus; (4) is a quasi-invariant under rotations; and (5) reflects the distortions of map projections, the distortion of spherical cells, and the associated distortions of texels. As an indicator for quantitative evaluation, we investigated typical spherical tessellation methods, some variants of tessellation methods, and map projections. The tessellation methods we evaluated are based on map projections or direct spherical subdivisions, and the evaluation involves commonly used Platonic polyhedrons, Catalan polyhedrons, etc. Quantitative analyses based on our metric of shape regularity and an essential metric of area uniformity imply that (1) Uniform Spherical Grids and their variant show good quality in both area uniformity and shape regularity, and (2) Crusta, the Unicube map, and a variant of the Unicube map exhibit fairly acceptable degrees of area uniformity and shape regularity.
Test plane uniformity analysis for the MSFC solar simulator lamp array
NASA Technical Reports Server (NTRS)
Griner, D. B.
1976-01-01
A preliminary analysis was made on the solar simulator lamp array. It is an array of 405 tungsten halogen lamps with Fresnel lenses to achieve the required spectral distribution and collimation. A computer program was developed to analyze lamp array performance at the test plane. Measurements were made on individual lamp lens combinations to obtain data for the computer analysis. The analysis indicated that the performance of the lamp array was about as expected, except for a need to position the test plane within 2.7 m of the lamp array to achieve the desired 7 percent uniformity of illumination tolerance.
Conducting Meta-Analyses Based on p Values
van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.
2016-01-01
Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466
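A simplified sketch of the p-uniform idea: at the true effect size, the probabilities of the observed effects conditional on statistical significance are uniform, so the sum of their negative logs should equal its expectation. The estimator, selection rule (one-sided z > 1.96), and bracketing interval below are illustrative simplifications, not the published implementation.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)

def p_uniform_estimate(y, se, z_crit=1.96):
    """Simplified p-uniform point estimate.

    y, se: observed effects and standard errors of the *significant* studies.
    At the true effect delta the conditional exceedance probabilities q_i are
    Uniform(0,1), so sum(-ln q_i) has expectation k; solve that for delta.
    """
    y, se = np.asarray(y), np.asarray(se)

    def loss(delta):
        q = stats.norm.sf((y - delta) / se) / stats.norm.sf(z_crit - delta / se)
        return np.sum(-np.log(q)) - y.size

    # Bracket chosen to contain the root for this synthetic example.
    return optimize.brentq(loss, -0.5, 1.5)

# Simulate publication bias: keep only significant studies of true effect 0.3.
delta_true, k = 0.3, 25
se = rng.uniform(0.05, 0.3, 2000)
y = rng.normal(delta_true, se)
keep = y / se > 1.96
y, se = y[keep][:k], se[keep][:k]
print(f"naive mean of significant effects = {y.mean():.3f} (biased upward)")
print(f"p-uniform estimate                = {p_uniform_estimate(y, se):.3f}")
```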
NASA Astrophysics Data System (ADS)
Rao, Zhiming; He, Zhifang; Du, Jianqiang; Zhang, Xinyou; Ai, Guoping; Zhang, Chunqiang; Wu, Tao
2012-03-01
This paper applies numerical simulation of temperature, using the finite element analysis software Ansys, to study a model of laser drilling of sticking plaster. A continuous CO2 laser, undergoing either uniform linear motion or uniform circular motion, irradiated the sticking plaster to the point of vaporization. The sticking plaster material was characterized by its thermal conductivity, heat capacity, and density; at temperatures above 450 °C the sticking plaster vaporizes. Based on a mathematical model of heat transfer, the process of drilling sticking plaster with laser beams was simulated in Ansys. The simulation results show the distribution of temperature at the surface of the sticking plaster over the vaporization time for the CO2 laser in uniform linear motion and in uniform circular motion. Under the same conditions, the temperature of the sticking plaster was higher for the laser in uniform linear motion than for the laser in uniform circular motion.
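As a lightweight stand-in for the Ansys model, the sketch below advances a 2D explicit finite-difference heat equation with a Gaussian source moving in uniform linear motion. All material constants, power, and geometry are hypothetical; only the 450 °C vaporization threshold comes from the abstract.

```python
import numpy as np

# Hypothetical material constants for sticking plaster (illustrative only).
k, rho, cp = 0.2, 1100.0, 1500.0          # W/m/K, kg/m^3, J/kg/K
alpha = k / (rho * cp)                    # thermal diffusivity, m^2/s
P, r_spot = 5.0, 0.3e-3                   # absorbed power (W), spot radius (m)
v, thickness = 5e-3, 1e-3                 # scan speed (m/s), sheet thickness (m)

nx, dx = 200, 0.05e-3                     # 10 mm x 10 mm surface grid
dt = 0.2 * dx**2 / alpha                  # stable explicit time step
T = np.full((nx, nx), 20.0)               # initial temperature, deg C
x = (np.arange(nx) - nx // 2) * dx
X, Y = np.meshgrid(x, x)

t = 0.0
while t < 0.4:
    # Gaussian surface source centered at the spot, uniform linear motion.
    cx = -2e-3 + v * t
    q = (P / (np.pi * r_spot**2)) * np.exp(-((X - cx)**2 + Y**2) / r_spot**2)
    # 5-point Laplacian; np.roll gives periodic boundaries, acceptable here
    # because the heated zone stays far from the edges.
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    T += dt * (alpha * lap + q / (rho * cp * thickness))
    t += dt
print(f"peak temperature after {t:.2f} s: {T.max():.0f} C "
      "(vaporization assumed above 450 C)")
```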
Transcriptome analysis for pork color – the ham halo effect in biceps femoris
USDA-ARS?s Scientific Manuscript database
Pork color is a major indicator of product quality that guides consumer purchasing decisions. For hams, consumers prefer a uniform pink color. Recently, industry has received an increase in consumer complaints about the lightness and non-uniformity of ham color, primarily lighter color in the periph...
School Uniforms: A Qualitative Analysis of Aims and Accomplishments at Two Christian Schools
ERIC Educational Resources Information Center
Firmin, Michael; Smith, Suzanne; Perry, Lynsey
2006-01-01
Employing rigorous qualitative research methodology, we studied the implementation of two schools' uniform policies. Their primary intents were to eliminate competition, teach young people to dress appropriately, decrease nonacademic distractions, and lower the parental clothing costs. The young people differed with adults regarding whether or not…
2012-12-27
[Acronym glossary fragment from the source: Statement of Work; UCC, Uniform Commercial Code; USD(AT&L), Under Secretary of Defense for Acquisition, Technology, and Logistics; WBS, Work Breakdown Structure.] ...intensive career field. The FAR, the DFARS, and other federal agency supplements of the FAR, the Uniform Commercial Code (UCC), installation guidelines...
Statistics of the residual refraction errors in laser ranging data
NASA Technical Reports Server (NTRS)
Gardner, C. S.
1977-01-01
A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.
NASA Astrophysics Data System (ADS)
Kobulnicky, Henry A.; Kiminki, Daniel C.; Lundquist, Michael J.; Burke, Jamison; Chapman, James; Keller, Erica; Lester, Kathryn; Rolen, Emily K.; Topel, Eric; Bhattacharjee, Anirban; Smullen, Rachel A.; Vargas Álvarez, Carlos A.; Runnoe, Jessie C.; Dale, Daniel A.; Brotherton, Michael M.
2014-08-01
We analyze orbital solutions for 48 massive multiple-star systems in the Cygnus OB2 association, 23 of which are newly presented here, to find that the observed distribution of orbital periods is approximately uniform in log P for P < 45 days, but it is not scale-free. Inflections in the cumulative distribution near 6 days, 14 days, and 45 days suggest key physical scales of ≃0.2, ≃0.4, and ≃1 A.U. where yet-to-be-identified phenomena create distinct features. No single power law provides a statistically compelling prescription, but if features are ignored, a power law with exponent β ≃ −0.22 provides a crude approximation over P = 1.4-2000 days, as does a piece-wise linear function with a break near 45 days. The cumulative period distribution flattens at P > 45 days, even after correction for completeness, indicating either a lower binary fraction or a shift toward low-mass companions. A high degree of similarity (91% likelihood) between the Cyg OB2 period distribution and that of other surveys suggests that the binary properties at P ≲ 25 days are determined by local physics of disk/clump fragmentation and are relatively insensitive to environmental and evolutionary factors. Fully 30% of the unbiased parent sample is a binary with period P < 45 days. Completeness corrections imply a binary fraction near 55% for P < 5000 days. The observed distribution of mass ratios 0.2 < q < 1 is consistent with uniform, while the observed distribution of eccentricities 0.1 < e < 0.6 is consistent with uniform plus an excess of e ≃ 0 systems. We identify six stars, all supergiants, that exhibit aperiodic velocity variations of ~30 km s⁻¹ attributed to atmospheric fluctuations.
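Sampling from the crude power-law approximation above is a one-liner with inverse-CDF sampling. The sketch assumes the exponent applies to the density in log P, as in common massive-binary parameterizations, which may not match the paper's exact convention; β = 0 recovers the log-uniform case.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def sample_periods(beta, p_lo, p_hi, n, rng):
    """Draw periods with pdf f(u) ∝ u**beta in u = log10(P/day) (inverse CDF).

    beta = 0 recovers a log-uniform (Opik's law) distribution; beta = -0.22
    is the crude approximation quoted in the abstract.
    """
    a, b = np.log10(p_lo), np.log10(p_hi)
    g = beta + 1.0                                # valid for beta != -1
    u = (a**g + rng.uniform(size=n) * (b**g - a**g)) ** (1.0 / g)
    return 10.0**u

P = sample_periods(-0.22, 1.4, 2000.0, 48, rng)
P_logu = sample_periods(0.0, 1.4, 2000.0, 48, rng)
# Two-sample KS test: with only ~48 systems the two hypotheses are hard to
# distinguish, consistent with no single power law being compelling.
print(stats.ks_2samp(np.log10(P), np.log10(P_logu)))
```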
Shinozaki, Kazuma; Zack, Jason W.; Pylypenko, Svitlana; ...
2015-09-17
Platinum electrocatalysts supported on high-surface-area and Vulcan carbon blacks (Pt/HSC, Pt/V) were characterized in rotating disk electrode (RDE) setups for electrochemical area (ECA) and oxygen reduction reaction (ORR) area specific activity (SA) and mass specific activity (MA) at 0.9 V. Films fabricated using several ink formulations and film-drying techniques were characterized for a statistically significant number of independent samples. The highest quality Pt/HSC films exhibited MA 870 ± 91 mA/mgPt and SA 864 ± 56 μA/cm²Pt, while Pt/V had MA 706 ± 42 mA/mgPt and SA 1120 ± 70 μA/cm²Pt when measured in 0.1 M HClO4, 20 mV/s, 100 kPa O2, and 23 ± 2 °C. An enhancement factor of 2.8 in the measured SA was observable on eliminating Nafion ionomer and employing extremely thin, uniform films (~4.5 μgPt/cm²) of Pt/HSC. The ECA for Pt/HSC (99 ± 7 m²/gPt) and Pt/V (65 ± 5 m²/gPt) were statistically invariant and insensitive to film uniformity/thickness/fabrication technique; accordingly, enhancements in MA are wholly attributable to increases in SA. Impedance measurements coupled with scanning electron microscopy were used to de-convolute the losses within the catalyst layer, ascribed to the catalyst layer resistance, oxygen diffusion, and sulfonate anion adsorption/blocking. The ramifications of these results for proton exchange membrane fuel cells have also been examined.
Micro-mechanics of hydro-mechanical coupled processes during hydraulic fracturing in sandstone
NASA Astrophysics Data System (ADS)
Caulk, R.; Tomac, I.
2017-12-01
This contribution presents a micro-mechanical study of hydraulic fracture initiation and propagation in sandstone. The Discrete Element Method (DEM) Yade software is used to model the fully coupled hydro-mechanical behavior of saturated sandstone under pressures typical of deep geo-reservoirs. Heterogeneity of the sandstone tensile and shear strength parameters is introduced using a statistical representation of cathodoluminescence (CL) images of the rock. A Weibull distribution of parameter values was found to best match the CL scans of sandstone grains and of the cement between grains. Results of hydraulic fracturing stimulation from the well bore indicate a significant difference between models with bond strengths informed by the CL scans and models with a uniform, homogeneous representation of sandstone parameters. Micro-mechanical insight reveals that the hydraulic fracture formed is typical of mode I, or tensile, cracking in both cases. However, shear micro-cracks are abundant in the CL-informed model while they are absent in the standard model with uniform strength distribution. Most of the mode II cracks, or shear micro-cracks, are not part of the main hydraulic fracture and occur in the near-tip and near-fracture areas. The position and occurrence of the shear micro-cracks are characterized as a secondary effect that dissipates hydraulic fracturing energy. Additionally, the shear micro-crack locations qualitatively resemble the acoustic emission clouds of shear cracks frequently observed in hydraulic fracturing, sometimes interpreted as re-activation of existing fractures; notably, our model contains no pre-existing cracks and is continuous prior to fracturing. This novel observation is quantified in the paper. The shear particle contact force field reveals significant relaxation compared to the model with uniform strength distribution.
Doddridge, Greg D; Shi, Zhenqi
2015-01-01
Since near infrared spectroscopy (NIRS) was introduced to the pharmaceutical industry, effort has been devoted to leveraging the power of chemometrics to extract the best possible signal correlating with the analyte of interest. In contrast, only a few studies have addressed the potential impact of instrument parameters, such as resolution and co-adds (i.e., the number of averaged replicate spectra), on method performance in terms of error statistics. In this study, a holistic approach was used to evaluate the effect of the instrument parameters of an FT-NIR spectrometer on the performance of a content uniformity method with respect to a list of figures of merit. The figures of merit included error statistics, signal-to-noise ratio (S/N), sensitivity, analytical sensitivity, effective resolution, selectivity, limit of detection (LOD), and noise. A Bruker MPA FT-NIR spectrometer was used to investigate an experimental design in terms of resolution (4 cm⁻¹ and 32 cm⁻¹) and co-adds (256 and 16), plus a center point at 8 cm⁻¹ and 32 co-adds. Given the balance among underlying chemistry, instrument parameters, chemometrics, and measurement time, 8 cm⁻¹ and 32 co-adds, in combination with appropriate 2nd derivative preprocessing, was found to fit best for the intended purpose as a content uniformity method. Considerations for optimizing both instrument parameters and chemometrics are proposed and discussed in order to maximize method performance for future NIRS method development in R&D.
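Second-derivative preprocessing of the kind selected above is commonly done with a Savitzky-Golay filter. The sketch below applies it to a hypothetical NIR spectrum on an 8 cm⁻¹ grid, showing how the derivative suppresses a sloping baseline; the window width is an illustrative choice that trades effective resolution against noise.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(5)

# Hypothetical NIR spectrum: two overlapping bands on a sloping baseline
# plus noise, sampled on an 8 cm-1 wavenumber grid.
wn = np.arange(4000.0, 10000.0, 8.0)
spectrum = (0.6 * np.exp(-((wn - 5200.0) / 150.0) ** 2)
            + 0.4 * np.exp(-((wn - 5350.0) / 120.0) ** 2)
            + 1e-5 * wn                         # sloping baseline
            + rng.normal(0.0, 0.002, wn.size))  # instrument noise

# 2nd-derivative preprocessing removes offset and linear baseline and
# sharpens the overlapping bands.
d2 = savgol_filter(spectrum, window_length=15, polyorder=2, deriv=2, delta=8.0)
print(f"mean |2nd derivative| in the band-free region: "
      f"{np.abs(d2[wn > 8000.0]).mean():.2e}")
```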
Comparative genomics of defense systems in archaea and bacteria
Makarova, Kira S.; Wolf, Yuri I.; Koonin, Eugene V.
2013-01-01
Our knowledge of prokaryotic defense systems has vastly expanded as the result of comparative genomic analysis, followed by experimental validation. This expansion is both quantitative, including the discovery of diverse new examples of known types of defense systems, such as restriction-modification or toxin-antitoxin systems, and qualitative, including the discovery of fundamentally new defense mechanisms, such as the CRISPR-Cas immunity system. Large-scale statistical analysis reveals that the distribution of different defense systems in bacterial and archaeal taxa is non-uniform, with four groups of organisms distinguishable with respect to the overall abundance and the balance between specific types of defense systems. The genes encoding defense system components in bacteria and archaea typically cluster in defense islands. In addition to genes encoding known defense systems, these islands contain numerous uncharacterized genes, which are candidates for new types of defense systems. The tight association of the genes encoding immunity systems and dormancy- or cell death-inducing defense systems in prokaryotic genomes suggests that these two major types of defense are functionally coupled, providing for effective protection at the population level. PMID:23470997
[Trend analysis of acquired syphilis in Mexico from 2003 to 2013].
Herrera-Ortiz, Antonia; Uribe-Salas, Felipe J; Olamendi-Portugal, Ma Leonidez; García-Cisneros, Santa; Conde-Glez, Carlos Jesús; Sánchez-Alemán, Miguel A
2015-01-01
The aim was to identify the population group in which the increase in syphilis was concentrated. The information was collected from the Mexican health statistical yearbooks. Information disaggregated by sex, age group, and state for the period 2003 to 2013 was used to form different databases. Linear regression analysis with 95% confidence intervals was used to evaluate changes over time in different population groups. An increase of 0.67 cases per 100,000 population (95% CI 0.30-1.04) in men was detected from 2010. The increase was concentrated in the 20-24 and 25-44 year age groups. The highest incidences of acquired syphilis were reported in the last two years, 2012 and 2013; incidence in 2013 was 1.85 times that reported in 2003. Aguascalientes, Distrito Federal, Durango, Mexico, Oaxaca, Puebla, Quintana Roo, Yucatan, and Zacatecas reported increases in syphilis during the study period. Acquired syphilis may be reemerging in our country among young men; this increase is not uniform across the country, and it is necessary to focus intervention measures for this sexually transmitted infection.
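The trend analysis is an ordinary least-squares regression of annual incidence on year with a 95% confidence interval on the slope. The sketch below uses statsmodels on made-up incidence numbers, not the paper's data.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical annual incidence per 100,000 men, 2003-2013 (illustrative only).
years = np.arange(2003, 2014)
incidence = np.array([2.1, 2.0, 2.2, 2.1, 2.3, 2.2, 2.4, 2.6, 3.1, 3.7, 4.0])

X = sm.add_constant(years - years[0])        # intercept + years since 2003
fit = sm.OLS(incidence, X).fit()
slope = fit.params[1]
lo, hi = fit.conf_int(alpha=0.05)[1]
print(f"trend = {slope:.2f} cases/100,000 per year (95% CI {lo:.2f} to {hi:.2f})")
```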
NASA Astrophysics Data System (ADS)
Zhang, Dong-Hai; Chen, Yan-Ling; Wang, Guo-Rong; Li, Wang-Dong; Wang, Qing; Yao, Ji-Jie; Zhou, Jian-Guo; Zheng, Su-Hua; Xu, Li-Ling; Miao, Hui-Feng; Wang, Peng
2014-07-01
Multiplicity fluctuation of the target evaporated fragments emitted in 290 MeV/u 12C-AgBr, 400 MeV/u 12C-AgBr, 400 MeV/u 20Ne-AgBr and 500 MeV/u 56Fe-AgBr interactions is investigated using the scaled factorial moment method in two-dimensional normal phase space and cumulative variable space, respectively. It is found that in normal phase space the scaled factorial moment (ln
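The scaled factorial moment method named above has a standard definition, F_q = ⟨n(n−1)⋯(n−q+1)⟩/⟨n⟩^q, computed over progressively finer phase-space bins. A minimal sketch with synthetic Poissonian events (for which F_q ≈ 1 at every bin count) follows; intermittency would appear as ln F_q growing with ln M.

```python
import numpy as np

rng = np.random.default_rng(11)

def scaled_factorial_moment(counts, q):
    """F_q = <n(n-1)...(n-q+1)> / <n>**q over events and bins.

    counts: integer array of shape (events, bins). Intermittency appears as
    ln F_q rising linearly with ln M as the bin count M grows.
    """
    n = counts.astype(float)
    fact = np.ones_like(n)
    for j in range(q):
        fact *= np.clip(n - j, 0.0, None)
    return fact.mean() / n.mean() ** q

# Synthetic sample: 500 events, mean multiplicity 20, split over M bins.
# Independent Poisson emission gives F_q ~ 1 at every M (no intermittency).
for M in (2, 4, 8, 16, 32):
    counts = rng.poisson(20.0 / M, size=(500, M))
    print(f"M = {M:2d}: F2 = {scaled_factorial_moment(counts, 2):.3f}")
```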
NASA Astrophysics Data System (ADS)
Abd Kadir, N.; Aminanda, Y.; Ibrahim, M. S.; Mokhtar, H.
2016-10-01
A statistical analysis was performed to evaluate the effects of several factors and to obtain the optimum configuration of Kraft paper honeycomb. The factors considered in this study were the density of the paper, the thickness of the paper, and the cell size of the honeycomb. Based on a three-level factorial design, a two-factor interaction (2FI) model was developed to correlate the factors with specific energy absorption and specific compression strength. From the analysis of variance (ANOVA), the most influential factors on the responses were identified, along with the optimum configuration. Kraft paper honeycomb with the optimum configuration was then used to fabricate foam-filled paper honeycomb with five different densities of polyurethane foam as filler (31.8, 32.7, 44.5, 45.7, and 52 kg/m3). The foam-filled paper honeycomb was subjected to quasi-static compression loading, and its failure mechanism was identified, analyzed, and compared with that of the unfilled paper honeycomb. The peak force and energy absorption capability of the foam-filled paper honeycomb increased by up to 32% and 30%, respectively, compared to the summation of the individual components.
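The factorial-design/ANOVA workflow described above can be reproduced in outline with statsmodels: fit a two-factor-interaction model over coded factor levels, then read factor significance from the ANOVA table. The response coefficients below are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)

# Three-level factorial in coded units (-1, 0, 1); the invented response
# mimics specific energy absorption with a density x thickness interaction.
levels = [-1, 0, 1]
rows = [(d, t, c) for d in levels for t in levels for c in levels]
df = pd.DataFrame(rows, columns=["density", "thickness", "cellsize"])
df["sea"] = (5.0 + 1.2 * df.density + 0.8 * df.thickness - 0.5 * df.cellsize
             + 0.6 * df.density * df.thickness
             + rng.normal(0.0, 0.2, len(df)))

# Two-factor-interaction (2FI) model: main effects plus pairwise interactions.
model = smf.ols("sea ~ (density + thickness + cellsize) ** 2", data=df).fit()
print(anova_lm(model, typ=2))    # ranks the factors by influence
```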
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Roux, J. A.
Earlier work based on nonlinear guiding center (NLGC) theory suggested that perpendicular cosmic-ray transport is diffusive when cosmic rays encounter random three-dimensional magnetohydrodynamic turbulence dominated by uniform two-dimensional (2D) turbulence with a minor uniform slab turbulence component. In this approach large-scale perpendicular cosmic-ray transport is due to cosmic rays microscopically diffusing along the meandering magnetic field dominated by 2D turbulence because of gyroresonant interactions with slab turbulence. However, turbulence in the solar wind is intermittent, and it has been suggested that intermittent turbulence might be responsible for the observation of 'dropout' events in solar energetic particle fluxes on small scales. In a previous paper le Roux et al. suggested, using NLGC theory as a basis, that if gyro-scale slab turbulence is intermittent, large-scale perpendicular cosmic-ray transport in weak uniform 2D turbulence will be superdiffusive or subdiffusive depending on the statistical characteristics of the intermittent slab turbulence. In this paper we expand and refine our previous work further by investigating how both parallel and perpendicular transport are affected by intermittent slab turbulence for weak as well as strong uniform 2D turbulence. The main new finding is that both parallel and perpendicular transport are the net effect of an interplay between diffusive and nondiffusive (superdiffusive or subdiffusive) transport effects as a consequence of this intermittency.
Just, Sarah; Toschkoff, Gregor; Funke, Adrian; Djuric, Dejan; Scharrer, Georg; Khinast, Johannes; Knop, Klaus; Kleinebudde, Peter
2013-11-30
The objective of this study was to enhance the inter-tablet coating uniformity in an active coating process at lab and pilot scale by statistical design of experiments. The API candesartan cilexetil was applied onto gastrointestinal therapeutic systems containing the API nifedipine to obtain fixed dose combinations of these two drugs with different release profiles. At lab scale, the parameters pan load, pan speed, spray rate and number of spray nozzles were examined. At pilot scale, the parameters pan load, pan speed, spray rate, spray time, and spray pressure were investigated. A low spray rate and a high pan speed improved the coating uniformity at both scales. The number of spray nozzles was identified as the most influential variable at lab scale. With four spray nozzles, the highest CV value was equal to 6.4%, compared to 13.4% obtained with two spray nozzles. The lowest CV of 4.5% obtained with two spray nozzles was further reduced to 2.3% when using four spray nozzles. At pilot scale, CV values between 2.7% and 11.1% were achieved. Since the test of uniformity of dosage units accepts CV values of up to 6.25%, this active coating process is well suited to comply with the pharmacopoeial requirements.
NASA Astrophysics Data System (ADS)
Kachenko, Anthony G.; Siegele, Rainer; Bhatia, Naveen P.; Singh, Balwant; Ionescu, Mihail
2008-04-01
Hybanthus floribundus subsp. floribundus, a rare Australian Ni-hyperaccumulating shrub, and Pityrogramma calomelanos var. austroamericana, an Australian naturalized As-hyperaccumulating fern, are promising species for use in phytoremediation of contaminated sites. Micro-proton-induced X-ray emission (μ-PIXE) spectroscopy was used to map the elemental distribution of the accumulated metal(loid)s, Ca and K in leaf or pinnule tissues of the two plant species. Samples were prepared by two contrasting specimen preparation techniques: freeze-substitution in tetrahydrofuran (THF) and freeze-drying. The specimens were analysed to compare the suitability of each technique in preserving (i) the spatial elemental distribution and (ii) the tissue structure of the specimens. Further, the μ-PIXE results were compared with the concentrations of elements in the bulk tissue obtained by ICP-AES analysis. In H. floribundus subsp. floribundus, μ-PIXE analysis revealed that Ni, Ca and K concentrations in freeze-dried leaf tissues were on par with bulk tissue concentrations. Elemental distribution maps illustrated that Ni was preferentially localised in the adaxial epidermal tissues (1% DW), with the lowest concentration found in spongy mesophyll tissues (0.53% DW). Conversely, elemental distribution maps of THF freeze-substituted tissues indicated significantly lower Ni, Ca and K concentrations than freeze-dried specimens and bulk tissue concentrations. Moreover, Ni concentrations were uniform across the whole specimen and no localisation was observed. In P. calomelanos var. austroamericana freeze-dried pinnule tissues, μ-PIXE revealed statistically similar As, Ca and K concentrations compared to bulk tissue concentrations. Elemental distribution maps showed that As localisation was relatively uniform across the whole specimen. Once again, THF freeze-substituted tissues revealed a significant loss of As compared to freeze-dried specimens and the concentrations obtained by bulk tissue analysis. The results demonstrate that freeze-drying is a suitable sample preparation technique to study elemental distribution of ions in H. floribundus and P. calomelanos plant tissues using μ-PIXE spectroscopy. Furthermore, cellular structure was preserved in samples prepared using this technique.
Understanding the origins of uncertainty in landscape-scale variations of emissions of nitrous oxide
NASA Astrophysics Data System (ADS)
Milne, Alice; Haskard, Kathy; Webster, Colin; Truan, Imogen; Goulding, Keith
2014-05-01
Nitrous oxide is a potent greenhouse gas which is over 300 times more radiatively effective than carbon dioxide. In the UK, the agricultural sector is estimated to be responsible for over 80% of nitrous oxide emissions, with these emissions resulting from livestock and farmers adding nitrogen fertilizer to soils. For the purposes of reporting emissions to the IPCC, the estimates are calculated using simple models whereby readily-available national or international statistics are combined with IPCC default emission factors. The IPCC emission factor for direct emissions of nitrous oxide from soils has a very large uncertainty. This is primarily because the variability of nitrous oxide emissions in space is large and this results in uncertainty that may be regarded as sample noise. To both reduce uncertainty through improved modelling, and to communicate an understanding of this uncertainty, we must understand the origins of the variation. We analysed data on nitrous oxide emission rate and some other soil properties collected from a 7.5-km transect across contrasting land uses and parent materials in eastern England. We investigated the scale-dependence and spatial uniformity of the correlations between soil properties and emission rates from farm to landscape scale using wavelet analysis. The analysis revealed a complex pattern of scale-dependence. Emission rates were strongly correlated with a process-specific function of the water-filled pore space at the coarsest scale and nitrate at intermediate and coarsest scales. We also found significant correlations between pH and emission rates at the intermediate scales. The wavelet analysis showed that these correlations were not spatially uniform and that at certain scales changes in parent material coincided with significant changes in correlation. Our results indicate that, at the landscape scale, nitrate content and water-filled pore space are key soil properties for predicting nitrous oxide emissions and should therefore be incorporated into process models and emission factors for inventory calculations.
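Scale-dependent correlation of the kind computed with wavelets above can be sketched with PyWavelets: an undecimated (stationary) wavelet transform keeps one detail coefficient per location, so coefficients from two co-located series can be correlated scale by scale. The transect data below are synthetic stand-ins for the field measurements.

```python
import numpy as np
import pywt

rng = np.random.default_rng(9)

# Synthetic 512-point transect: nitrate and N2O emission share a coarse
# trend but are uncorrelated at fine scales (illustrative only).
n = 512
coarse = np.cumsum(rng.normal(size=n))
coarse -= coarse.mean()
nitrate = coarse + rng.normal(0.0, 1.0, n)
n2o = 0.8 * coarse + rng.normal(0.0, 1.0, n)

# Stationary wavelet transform: one detail coefficient per location, so
# the two series can be correlated scale by scale (coarsest level first).
levels = 5
sw_a = pywt.swt(nitrate, "db4", level=levels)
sw_b = pywt.swt(n2o, "db4", level=levels)
for lev, ((_, da), (_, db)) in enumerate(zip(sw_a, sw_b)):
    r = np.corrcoef(da, db)[0, 1]
    print(f"scale ~2^{levels - lev} samples: r = {r:+.2f}")
```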
Computerized Design and Analysis of Face-Milled, Uniform Tooth Height Spiral Bevel Gear Drives
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Wang, Anngwo; Handschuh, R. F.
1996-01-01
Face-milled spiral bevel gears with uniform tooth height are considered. An approach is proposed for the design of low noise and localized bearing contact of such gears. The approach is based on the mismatch of contacting surfaces and permits two types of bearing contact either directed longitudinally or across the surface to be obtained. A Tooth Contact Analysis (TCA) computer program was developed. This analysis was used to determine the influence of misalignment on meshing and contact of the spiral bevel gears. A numerical example that illustrates the developed theory is provided.
Zhu, Yuying; Wang, Jianmin; Wang, Cunfang
2018-05-01
Taking fresh goat milk as the raw material and applying filtering, centrifuging, hollow-fiber ultrafiltration, formula allocation, value detection, and preparation processing, a set of 10 mixed goat milk standard substances was prepared on a one-factor-at-a-time basis using a uniform design method, and its accuracy, uniformity, and stability were evaluated by paired t-test and the F-test of one-way analysis of variance. The results showed that the three milk composition contents of these standard products were mutually independent, and that preparation using the quasi-level design method without emulsifier was the best scheme. Compared with values obtained by calibrating the rapid analyzer with cow milk standards, calibration with the mixed goat milk standard was more applicable to rapid detection of goat milk composition: detected values were more accurate and deviations showed less error. Single-factor analysis of variance showed that the uniformity and stability of the mixed standard substance were good; it could be stored for 15 days at 4 °C. The within-unit and between-unit uniformity and stability met the requirements for the preparation of national standard substances.
Flexural torsional buckling of uniformly compressed beam-like structures
NASA Astrophysics Data System (ADS)
Ferretti, M.
2018-02-01
A Timoshenko beam model embedded in 3D space is introduced for buckling analysis of multi-storey buildings, made of rigid floors connected by elastic columns. The beam model is developed via a direct approach, and the constitutive law, accounting for prestress forces, is deduced via a suitable homogenization procedure. The bifurcation analysis for the case of uniformly compressed buildings is then addressed, and numerical results for the Timoshenko model are compared with 3D finite element analyses. Finally, some conclusions and perspectives are drawn.
Image contrast of diffraction-limited telescopes for circular incoherent sources of uniform radiance
NASA Technical Reports Server (NTRS)
Shackleford, W. L.
1980-01-01
A simple approximate formula is derived for the background intensity beyond the edge of the image of a uniform incoherent circular light source relative to the irradiance near the center of the image. The analysis applies to diffraction-limited telescopes with or without central beam obscuration due to a secondary mirror. Scattering off optical surfaces is neglected. The analysis is expected to be most applicable to spaceborne IR telescopes, for which diffraction can be the major source of off-axis response.
Functional status predicts acute care readmission in the traumatic spinal cord injury population.
Huang, Donna; Slocum, Chloe; Silver, Julie K; Morgan, James W; Goldstein, Richard; Zafonte, Ross; Schneider, Jeffrey C
2018-03-29
Context/objective: Acute care readmission has been identified as an important marker of healthcare quality. Most previous models assessing risk prediction of readmission incorporate variables for medical comorbidity. We hypothesized that functional status is a more robust predictor of readmission in the spinal cord injury population than medical comorbidities. Design: Retrospective cross-sectional analysis. Setting: Inpatient rehabilitation facilities; Uniform Data System for Medical Rehabilitation data from 2002 to 2012. Participants: Traumatic spinal cord injury patients. Outcome measures: A logistic regression model for predicting acute care readmission based on demographic variables and functional status (Functional Model) was compared with models incorporating demographics, functional status, and medical comorbidities (Functional-Plus) or models including demographics and medical comorbidities (Demographic-Comorbidity). The primary outcomes were 3- and 30-day readmission, and the primary measure of model performance was the c-statistic. Results: There were a total of 68,395 patients with 1,469 (2.15%) readmitted at 3 days and 7,081 (10.35%) readmitted at 30 days. The c-statistics for the Functional Model were 0.703 and 0.654 for 3 and 30 days. The Functional Model outperformed Demographic-Comorbidity models at 3 days (c-statistic difference: 0.066-0.096) and outperformed two of the three Demographic-Comorbidity models at 30 days (c-statistic difference: 0.029-0.056). The Functional-Plus models exhibited negligible improvements (0.002-0.010) in model performance compared to the Functional models. Conclusion: Readmissions are used as a marker of hospital performance. Function-based readmission models in the spinal cord injury population outperform models incorporating medical comorbidities. Readmission risk models for this population would benefit from the inclusion of functional status.
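The model comparison above hinges on differences in the c-statistic (the area under the ROC curve) between logistic regression models built from different predictor sets. A minimal sketch of that comparison in Python, using synthetic stand-in data (the feature matrices, sample size, and outcome below are hypothetical, not the study's):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def c_statistic(X, y):
        # In-sample AUC for brevity; a proper comparison would use
        # held-out data or cross-validation.
        model = LogisticRegression(max_iter=1000).fit(X, y)
        return roc_auc_score(y, model.predict_proba(X)[:, 1])

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=1000)              # 1 = readmitted (synthetic)
    X_functional = rng.normal(size=(1000, 6))      # demographics + functional status
    X_demo_comorbid = rng.normal(size=(1000, 8))   # demographics + comorbidities

    delta = c_statistic(X_functional, y) - c_statistic(X_demo_comorbid, y)
    print(f"c-statistic difference: {delta:+.3f}")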
NASA Astrophysics Data System (ADS)
Khoshgoftar, M. J.; Mirzaali, M. J.; Rahimi, G. H.
2015-11-01
Applications of functionally graded materials (FGMs), composites whose composition and microstructure vary spatially, have recently attracted a great deal of interest. Such composites with varying thickness under non-uniform pressure can be used in aerospace engineering, so their analysis is of high importance in engineering problems. Here, the thermoelastic analysis of a functionally graded cylinder with variable thickness under non-uniform pressure is considered. First-order shear deformation theory and the total potential energy approach are applied to obtain the governing equations of the non-homogeneous cylinder. The governing equations are solved by perturbation series with inner and outer expansions: the outer solution holds away from the boundaries, while the inner solution captures the rapidly varying behavior at the boundaries. Matching the inner and outer solutions near and far from the boundaries yields a highly accurate displacement field. The main aim of this paper is to demonstrate the capability of the matched asymptotic solution for non-homogeneous cylinders with different shapes and different non-uniform pressures. The results can be used to design the optimum thickness of the cylinder and to exploit properties, such as high-temperature resistance, obtained by applying non-homogeneous materials.
Design of light guide sleeve on hyperspectral imaging system for skin diagnosis
NASA Astrophysics Data System (ADS)
Yan, Yung-Jhe; Chang, Chao-Hsin; Huang, Ting-Wei; Chiang, Hou-Chi; Wu, Jeng-Fu; Ou-Yang, Mang
2017-08-01
A hyperspectral imaging system is proposed for an early study of skin diagnosis. Stable, high-quality hyperspectral images are important for analysis. Therefore, a light guide sleeve (LGS) was designed to be embedded in the hyperspectral imaging system. It provides a uniform light source on the object plane at a fixed working distance and also shields ambient light that would otherwise enter the system and add noise. To produce a uniform light source, the LGS was designed as a symmetrical double-layered structure, with light-cut features that adjust the distribution of rays between the two layers and a Lambertian surface at the front end to promote output uniformity. In simulation, the illuminance uniformity of the design was about 91.7%; measurements of the fabricated sleeve gave about 92.5%.
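The abstract reports illuminance uniformity figures without stating the exact metric; one common convention is the minimum-to-maximum ratio over the illuminated field. A minimal sketch under that assumption (the measurement grid is synthetic):

    import numpy as np

    def illuminance_uniformity(E):
        # Min/max uniformity of an illuminance map E (2-D array of lux values);
        # the paper's exact definition may differ (e.g. min/average).
        E = np.asarray(E, dtype=float)
        return E.min() / E.max()

    E = 100.0 + 4.0 * np.random.rand(64, 64)   # hypothetical near-flat field
    print(f"uniformity = {illuminance_uniformity(E):.1%}")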
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sivasankaran, S., E-mail: sivasankarangs1979@gmail.com; Sivaprasad, K., E-mail: ksp@nitt.edu; Narayanasamy, R., E-mail: narayan@nitt.edu
2011-07-15
Nanocrystalline AA 6061 alloy reinforced with alumina (0, 4, 8, and 12 wt.%) in an amorphized state was synthesized as a composite powder by mechanical alloying and consolidated by a conventional powder metallurgy route. The as-milled and as-sintered (573 K and 673 K) nanocomposites were characterized by X-ray diffraction (XRD) and transmission electron microscopy (TEM). Peaks corresponding to the fine alumina were not observed in the XRD patterns because of amorphization; high-resolution transmission electron microscopy confirmed the presence of amorphized alumina within the Al lattice fringes. The crystallite size, lattice strain, deformation stress, and strain energy density of the AA 6061 matrix were determined from the first five most intense XRD reflections using simple Williamson-Hall models: the uniform deformation model, the uniform stress deformation model, and the uniform energy density deformation model. Among these, the uniform energy density deformation model was observed to be the best-fitting and most realistic model for the mechanically alloyed powders, and it evidenced the strongly anisotropic nature of the ball-milled powders. The XRD peaks of the as-milled powder samples showed considerable broadening with increasing reinforcement content, due to grain refinement and lattice distortion at the same milling time (40 h). The as-sintered (673 K) unreinforced AA 6061 matrix crystallite size from the well-fitted uniform energy density deformation model was 98 nm. The as-milled and as-sintered (673 K) matrix crystallite sizes for 12 wt.% Al2O3, also from the uniform energy density deformation model, were 38 nm and 77 nm respectively, indicating that the fine Al2O3 pinned the matrix grain boundaries and prevented grain growth during sintering. Finally, the lattice parameter of the Al matrix in the as-milled and as-sintered conditions was also investigated. Research highlights: Integral breadth methods using various Williamson-Hall models were investigated for line profile analysis. The uniform energy density deformation model was observed to be the best realistic model. The present analysis is useful for understanding the stress and strain present in the nanocomposites.
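The uniform deformation model named above is the simplest Williamson-Hall variant: it fits beta*cos(theta) = K*lambda/D + 4*eps*sin(theta) to the broadening of several reflections, giving the crystallite size D from the intercept and the microstrain eps from the slope. A sketch with hypothetical peak data (the angles roughly match the first five Al reflections under Cu K-alpha; the widths are invented and assumed already corrected for instrumental broadening):

    import numpy as np

    two_theta = np.array([38.5, 44.7, 65.1, 78.2, 82.4])        # degrees (hypothetical)
    beta = np.array([4.0e-3, 4.3e-3, 5.1e-3, 5.8e-3, 6.0e-3])   # FWHM, radians (hypothetical)
    theta = np.deg2rad(two_theta / 2)

    K, lam = 0.9, 0.15406   # Scherrer constant; Cu K-alpha wavelength in nm

    # Uniform deformation model: beta*cos(theta) = K*lam/D + 4*eps*sin(theta)
    x, y = 4 * np.sin(theta), beta * np.cos(theta)
    eps, intercept = np.polyfit(x, y, 1)
    D = K * lam / intercept
    print(f"crystallite size D = {D:.0f} nm, microstrain eps = {eps:.2e}")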
Nurses’ uniforms: How many bacteria do they carry after one shift?
Sanon, Marie-Anne; Watkins, Sally
2013-01-01
This pilot study investigated the pathogens that nurses are potentially bringing into the public and their home when they wear work uniforms outside of the work environment. To achieve this, sterilized uniforms were distributed to 10 nurses at a local hospital in Washington State at the beginning of their shift. Worn uniforms were collected at the end of the shifts and sent to a laboratory for analysis. Four tests were conducted: 1) a heterotrophic growth plate count, 2) methicillin-resistant Staphylococcus aureus (MRSA) growth, 3) vancomycin-resistant Enterococci (VRE), and 4) identification of the heterotrophic plate counts. Each participant completed a questionnaire and a survey. The results showed that the average bacteria colony growth per square inch was 1,246 and 5,795 for day and night shift, respectively. After 48 h, MRSA positives were present on 4 of the day shift and 3 of the night shift uniforms. Additional bacteria identified include: Bacillus sp., Micrococcus luteus, Staphylococcus aureus, Staphylococcus epidermidis, and Micrococcus roseus. The significant presence of bacteria on the uniforms 48 h after the shift ended necessitates further study, discussions and policy consideration regarding wearing health care uniforms outside of the work environment. PMID:25285235
Analysis of the numerical differentiation formulas of functions with large gradients
NASA Astrophysics Data System (ADS)
Tikhovskaya, S. V.
2017-10-01
The solution of a singularly perturbed problem corresponds to a function with large gradients, so the question of interpolating and numerically differentiating such functions is relevant. Interpolation based on Lagrange polynomials on a uniform mesh is widely applied. However, it is known that using such interpolation for functions with large gradients leads to estimates that are not uniform with respect to the perturbation parameter, and therefore to errors of order O(1). To obtain estimates that are uniform with respect to the perturbation parameter, one can use polynomial interpolation on a fitted mesh, such as the piecewise-uniform Shishkin mesh, or construct, on a uniform mesh, interpolation formulas that are exact on the boundary layer components. In this paper, numerical differentiation formulas for functions with large gradients, based on the interpolation formulas on a uniform mesh proposed by A.I. Zadorin, are investigated. Formulas for the first and second derivatives of the function with two or three interpolation nodes are considered. Error estimates that are uniform with respect to the perturbation parameter are obtained in particular cases. Numerical results validating the theoretical estimates are discussed.
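A piecewise-uniform Shishkin mesh of the kind mentioned above concentrates half of the mesh points in an O(eps ln N) neighborhood of the boundary layer. A minimal construction sketch (the transition-point constant sigma and the layer location at x = 0 are conventions that vary by author):

    import numpy as np

    def shishkin_mesh(N, eps, sigma=2.0):
        # Piecewise-uniform mesh on [0, 1] with N intervals (N even),
        # resolving a boundary layer of width O(eps) at x = 0.
        tau = min(0.5, sigma * eps * np.log(N))      # transition point
        fine = np.linspace(0.0, tau, N // 2 + 1)     # N/2 cells inside the layer
        coarse = np.linspace(tau, 1.0, N // 2 + 1)   # N/2 cells outside
        return np.concatenate([fine, coarse[1:]])

    x = shishkin_mesh(N=32, eps=1e-3)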
Harmonic elastic inclusions in the presence of point moment
NASA Astrophysics Data System (ADS)
Wang, Xu; Schiavone, Peter
2017-12-01
We employ conformal mapping techniques to design harmonic elastic inclusions when the surrounding matrix is simultaneously subjected to remote uniform stresses and a point moment located at an arbitrary position in the matrix. Our analysis indicates that the uniform and hydrostatic stress field inside the inclusion as well as the constant hoop stress along the entire inclusion-matrix interface (on the matrix side) are independent of the action of the point moment. In contrast, the non-elliptical shape of the harmonic inclusion depends on both the remote uniform stresses and the point moment.
Piezoelectric effect in non-uniform strained carbon nanotubes
NASA Astrophysics Data System (ADS)
Ilina, M. V.; Blinov, Yu F.; Ilin, O. I.; Rudyk, N. N.; Ageev, O. A.
2017-10-01
The piezoelectric effect in non-uniformly strained carbon nanotubes (CNTs) has been studied. It is shown that the magnitude of the strained CNT surface potential depends on the strain value. It is also established that the resistance of a CNT depends on the strain and the internal electric field, which leads to hysteresis in the current-voltage characteristics. Analysis of experimental studies of a non-uniformly strained CNT with a diameter of 92 nm and a height of 2.1 μm allowed us to estimate the piezoelectric coefficient as 0.107 ± 0.032 C/m².
NASA Astrophysics Data System (ADS)
Bachmann, M.; Besse, P. A.; Melchior, H.
1995-10-01
Overlapping-image multimode interference (MMI) couplers, a new class of devices, permit uniform and nonuniform power splitting. A theoretical description directly relates coupler geometry to image intensities, positions, and phases. Among the many possibilities for nonuniform power splitting, examples of 1 × 2 couplers with ratios of 15:85 and 28:72 are given. An analysis of uniform power splitters includes the well-known 2 × N and 1 × N MMI couplers. Applications of MMI couplers include mode filters, mode splitter-combiners, and mode converters.
Implementing Assessment Engineering in the Uniform Certified Public Accountant (CPA) Examination
ERIC Educational Resources Information Center
Burke, Matthew; Devore, Richard; Stopek, Josh
2013-01-01
This paper describes efforts to bring principled assessment design to a large-scale, high-stakes licensure examination by employing the frameworks of Assessment Engineering (AE), the Revised Bloom's Taxonomy (RBT), and Cognitive Task Analysis (CTA). The Uniform CPA Examination is practice-oriented and focuses on the skills of accounting. In…
Communication Network Integration and Group Uniformity in a Complex Organization.
ERIC Educational Resources Information Center
Danowski, James A.; Farace, Richard V.
This paper contains a discussion of the limitations of research on group processes in complex organizations and the manner in which a procedure for network analysis in on-going systems can reduce problems. The research literature on group uniformity processes and on theoretical models of these processes from an information processing perspective…
Linear Instability Analysis of non-uniform Bubbly Mixing layer with Two-Fluid model
NASA Astrophysics Data System (ADS)
Sharma, Subash; Chetty, Krishna; Lopez de Bertodano, Martin
We examine the inviscid instability of a non-uniform adiabatic bubbly shear layer with a two-fluid model. The two-fluid model is made well-posed with closure relations for the interfacial forces. First, a characteristic analysis is carried out to study the well-posedness of the model over a range of void fractions, with interfacial forces for virtual mass, interfacial drag, and interfacial pressure. A dispersion analysis then allows us to obtain the growth rate and wavelength of the instability. Finally, the well-posed two-fluid model is solved using CFD to validate the results obtained from the linear stability analysis, and the effect of the void fraction and its distribution profile on stability is analyzed.
Partl, Richard; Fastner, Gerd; Kaiser, Julia; Kronhuber, Elisabeth; Cetin-Strohmer, Klaudia; Steffal, Claudia; Böhmer-Breitfelder, Barbara; Mayer, Johannes; Avian, Alexander; Berghold, Andrea
2016-02-01
Low Karnofsky performance status (KPS) and elevated lactate dehydrogenase (LDH), a surrogate marker for tumor load and cell turnover, may identify patients with a very short life expectancy. To validate this finding and compare it to other indices, namely the recursive partitioning analysis (RPA) and the diagnosis-specific graded prognostic assessment (DS-GPA), a multicenter analysis was undertaken. A retrospective analysis of 234 metastatic melanoma patients uniformly treated with palliative whole brain radiotherapy (WBRT) was performed. Univariate and multivariate analyses were used to determine the impact of patient-, tumor-, and treatment-related parameters on overall survival (OS). KPS and LDH emerged as independent factors predicting OS. By combining KPS and LDH values (the KPS/LDH index), groups of patients with statistically significant differences in median OS (days; 95% CI) after onset of WBRT were identified: group 1 (KPS ≥ 70/normal LDH) 234 (96-372), group 2 (KPS ≥ 70/elevated LDH) 112 (69-155), group 3 (KPS < 70/normal LDH) 43 (12-74), and group 4 (KPS < 70/elevated LDH) 29 (17-41). Statistically significant differences were observed between all four groups. The RPA and DS-GPA indices failed to distinguish significantly between good and moderate prognosis and were inferior in predicting a very unfavorable prognosis. The parameters KPS and LDH independently impacted OS, and their combination (the KPS/LDH index) identified patients with a very short life expectancy, who might be better served by recommending best supportive care instead of WBRT. The KPS/LDH index is simple and effective in terms of time and cost as compared to other prognostic indices.
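The KPS/LDH index described above reduces to a simple four-way stratification. A sketch of the group assignment, with the median survivals quoted from the abstract as comments:

    def kps_ldh_group(kps, ldh_elevated):
        """Prognostic group per the KPS/LDH index:
        1: KPS >= 70, normal LDH   (median OS 234 days)
        2: KPS >= 70, elevated LDH (median OS 112 days)
        3: KPS <  70, normal LDH   (median OS  43 days)
        4: KPS <  70, elevated LDH (median OS  29 days)"""
        if kps >= 70:
            return 2 if ldh_elevated else 1
        return 4 if ldh_elevated else 3

    print(kps_ldh_group(80, ldh_elevated=True))   # -> 2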
NASA Astrophysics Data System (ADS)
Fujita, S.; Yamamoto, T.; Yoshida, M.; Onai, M.; Kojima, A.; Hatayama, A.; Kashiwagi, M.
2017-08-01
In order to improve the uniformity of negative ion production, the KEIO-MARC code has been applied to QST's JT-60SA negative ion source in three different magnetic configurations: (i) MC-PGMF (Multi-Cusp and PG Magnetic Filter), (ii) TNT-MF (TeNT Magnetic Filter), and (iii) MTNT-MF (Modified TeNT Magnetic Filter). From the results, we have confirmed that electron rotation inside the negative ion source is essential for obtaining uniform production of negative ions. By adding extra tent magnets on the longitudinal sides, the electron rotation is enhanced and uniform production of negative ions is realized.
Pollitz, Fred F.; Thatcher, Wayne R.
2010-01-01
Most models of lower crust/mantle viscosity inferred from postearthquake relaxation assume one or two uniform-viscosity layers. A few existing models possess apparently significant radially variable viscosity structure in the shallow mantle (e.g., the upper 200 km), but the resolution of such variations is not clear. We use a geophysical inverse procedure to address the resolving power of inferred shallow mantle viscosity structure using postearthquake relaxation data. We apply this methodology to 9 years of GPS-constrained crustal motions after the 16 October 1999 M = 7.1 Hector Mine earthquake. After application of a differencing method to isolate the postearthquake signal from the “background” crustal velocity field, we find that surface velocities diminish from ∼20 mm/yr in the first few months to ≲2 mm/yr after 2 years. Viscoelastic relaxation of the mantle, with a time-dependent effective viscosity prescribed by a Burgers body, provides a good explanation for the postseismic crustal deformation, capturing both the spatial and temporal pattern. In the context of the Burgers body model (which involves a transient viscosity and steady state viscosity), a resolution analysis based on the singular value decomposition reveals that at most, two constraints on depth-dependent steady state mantle viscosity are provided by the present data set. Uppermost mantle viscosity (depth ≲ 60 km) is moderately resolved, but deeper viscosity structure is poorly resolved. The simplest model that explains the data better than that of uniform steady state mantle viscosity involves a linear gradient in logarithmic viscosity with depth, with a small increase from the Moho to 220 km depth. However, the viscosity increase is not statistically significant. This suggests that the depth-dependent steady state viscosity is not resolvably different from uniformity in the uppermost mantle.
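The resolution argument above rests on a singular value decomposition of the sensitivity of the data to the viscosity model. A schematic sketch of counting resolvable constraints (the sensitivity matrix, its dimensions, and the noise scale below are placeholders, not the study's operator):

    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.normal(size=(200, 10))   # hypothetical: 200 GPS data, 10 viscosity layers
    U, s, Vt = np.linalg.svd(G, full_matrices=False)

    noise_level = 10.0               # assumed data-error scale (same units as s)
    resolved = int(np.sum(s > noise_level))
    print(f"{resolved} constraint(s) on depth-dependent viscosity are resolvable")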
Otolaryngology Residency Program Research Resources and Scholarly Productivity.
Villwock, Jennifer A; Hamill, Chelsea S; Nicholas, Brian D; Ryan, Jesse T
2017-06-01
Objective: To delineate research resources available to otolaryngology residents and their impact on scholarly productivity. Study Design: Survey of current otolaryngology program directors. Setting: Otolaryngology residency programs. Subjects and Methods: An anonymous web-based survey was sent to 98 allopathic otolaryngology training program directors. Fisher exact tests and nonparametric correlations were used to determine statistically significant differences among various strata of programs. Results: Thirty-nine percent (n = 38) of queried programs responded. Fourteen (37%) programs had 11 to 15 full-time, academic faculty associated with the residency program. Twenty (53%) programs have a dedicated research coordinator. Basic science lab space and financial resources for statistical work were present at 22 programs (58%). Funding is uniformly provided for presentation of research at conferences; a minority of programs (13%) only funded podium presentations. Twenty-four (63%) have resident research requirements beyond the Accreditation Council for Graduate Medical Education (ACGME) mandate of preparing a "manuscript suitable for publication" prior to graduation. Twenty-five (67%) programs have residents with 2 to 3 active research projects at any given time. None of the investigated resources were significantly associated with increased scholarly output. There was no uniformity to research curricula. Conclusions: Otolaryngology residency programs value research, evidenced by the financial support provided and requirements beyond the ACGME minimum. Additional resources were not statistically related to an increase in resident research productivity, although they may contribute positively to the overall research experience during training. Potential future areas to examine include research curricula best practices, how to develop meaningful mentorship and resource allocation that inspires continued research interest, and intellectual stimulation.
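Fisher exact tests of the kind used above compare program strata on small 2 x 2 contingency tables. A minimal sketch (the counts are invented for illustration):

    from scipy.stats import fisher_exact

    # Hypothetical table: research coordinator (yes/no) versus
    # above/below-median scholarly output.
    table = [[12, 8],
             [9, 9]]
    odds_ratio, p = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")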
Testing Modeling Assumptions in the West Africa Ebola Outbreak
NASA Astrophysics Data System (ADS)
Burghardt, Keith; Verzijl, Christopher; Huang, Junming; Ingram, Matthew; Song, Binyang; Hasne, Marie-Pierre
2016-10-01
The Ebola virus in West Africa has infected almost 30,000 and killed over 11,000 people. Recent models of Ebola Virus Disease (EVD) have often made assumptions about how the disease spreads, such as uniform transmissibility and homogeneous mixing within a population. In this paper, we test whether these assumptions are necessarily correct, and offer simple solutions that may improve disease model accuracy. First, we use data and models of West African migration to show that EVD does not homogeneously mix, but spreads in a predictable manner. Next, we estimate the initial growth rate of EVD within country administrative divisions and find that it significantly decreases with population density. Finally, we test whether EVD strains have uniform transmissibility through a novel statistical test, and find that certain strains appear more often than expected by chance.
The Statistical Assessment of Latent Trait Dimensionality in Psychological Testing
1984-06-01
[Equation-heavy passage garbled in extraction; the recoverable argument reads:] Thus, it suffices to show that, uniformly in n+1 ≤ i, i′ ≤ n+M, (JK)^{1/2} cov(P_i(θ₁), P_{i′}(θ₁) | B₁) → 0. Hence, it suffices to show that (JK)^{1/2} V(P_i(θ₁) | B₁) → 0 as n → ∞, uniformly in n+1 ≤ i ≤ n+M.
Statistical mechanics of monatomic liquids
NASA Astrophysics Data System (ADS)
Wallace, Duane C.
1997-10-01
Two key experimental properties of elemental liquids, together with an analysis of the condensed-system potential-energy surface, lead us logically to the dynamical theory of monatomic liquids. Experimentally, the ion motional specific heat is approximately 3Nk for N ions, implying the normal modes of motion are approximately 3N independent harmonic oscillators. This implies the potential surface contains nearly harmonic valleys. The equilibrium configuration at the bottom of each valley is a ``structure.'' Structures are crystalline or amorphous, and amorphous structures can have a remnant of local crystal symmetry, or can be random. The random structures are by far the most numerous, and hence dominate the statistical mechanics of the liquid state, and their macroscopic properties are uniform over the structure class, for large-N systems. The Hamiltonian for any structural valley is the static structure potential, a sum of harmonic normal modes, and an anharmonic correction. Again from experiment, the constant-density entropy of melting contains a universal disordering contribution of NkΔ, suggesting the random structural valleys are of universal number wN, where lnw=Δ. Our experimental estimate for Δ is 0.80. In quasiharmonic approximation, the liquid theory for entropy agrees with experiment, for all currently analyzable experimental data at elevated temperatures, to within 1-2% of the total entropy. Further testable predictions of the theory are mentioned.
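The universal disordering entropy quoted above follows directly from counting the random structural valleys. In the abstract's notation (w^N valleys, ln w = Δ), the configurational contribution is

    \[
      S_{\mathrm{disorder}} \;=\; k \ln\bigl(w^{N}\bigr) \;=\; N k \ln w \;=\; N k \Delta ,
      \qquad \Delta \approx 0.80 .
    \]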
Deriving Color-Color Transformations for VRI Photometry
NASA Astrophysics Data System (ADS)
Taylor, B. J.; Joner, M. D.
2006-12-01
In this paper, transformations between Cousins R-I and other indices are considered. New transformations to Cousins V-R and Johnson V-K are derived, a published transformation involving T1-T2 on the Washington system is rederived, and the basis for a transformation involving b-y is considered. In addition, a statistically rigorous procedure for deriving such transformations is presented and discussed in detail. Highlights of the discussion include (1) the need for statistical analysis when least-squares relations are determined and interpreted, (2) the permitted forms and best forms for such relations, (3) the essential role played by accidental errors, (4) the decision process for selecting terms to appear in the relations, (5) the use of plots of residuals, (6) detection of influential data, (7) a protocol for assessing systematic effects from absorption features and other sources, (8) the reasons for avoiding extrapolation of the relations, (9) a protocol for ensuring uniformity in data used to determine the relations, and (10) the derivation and testing of the accidental errors of those data. To put the last of these subjects in perspective, it is shown that rms errors for VRI photometry have been as small as 6 mmag for more than three decades and that standard errors for quantities derived from such photometry can be as small as 1 mmag or less.
Gaeta, M; Campanella, F; Gentile, L; Schifino, G M; Capasso, L; Bandera, F; Banfi, G; Arpesella, M; Ricci, C
2017-01-01
Circulatory diseases, in particular ischemic heart disease and stroke, represent the main causes of death worldwide, in high-income as well as middle- and low-income countries. Our aim is to provide a comprehensive report depicting circulatory disease mortality in Europe over the last 30 years and to address the sources of heterogeneity among different countries. Our study was performed using the WHO statistical information system mortality database and was restricted to the 28 countries belonging to the European Union (EU-28). We evaluated gender- and age-specific time series of mortality from all circulatory diseases, ischemic heart diseases, cerebrovascular diseases, and pulmonary and other circulatory diseases, and then produced forecasts for 2016. Mortality heterogeneity across countries was evaluated using Cochran's Q statistic and the I-squared index. Between 1985 and 2011 the standardized death rate (SDR) attributable to all circulatory system diseases decreased from 440.9 to 212.0 per 100,000 in the EU-28, and a clear uniform reduction was observed. Heterogeneity among countries was found to be considerable, so separate analyses were carried out by geographical area. We forecast a further reduction in European cardiovascular mortality. Heterogeneity among countries could only in part be explained by geographical and health-expenditure factors.
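Cochran's Q and the I-squared index used above quantify between-country heterogeneity from country-level estimates and their variances. A minimal sketch of the standard fixed-effect computation (the input values are placeholders):

    import numpy as np

    def cochran_q_i2(effects, variances):
        # Fixed-effect weights are inverse variances.
        w = 1.0 / np.asarray(variances, dtype=float)
        theta = np.asarray(effects, dtype=float)
        pooled = np.sum(w * theta) / np.sum(w)
        Q = np.sum(w * (theta - pooled) ** 2)
        df = len(theta) - 1
        I2 = max(0.0, (Q - df) / Q) if Q > 0 else 0.0
        return Q, 100.0 * I2

    Q, I2 = cochran_q_i2(effects=[212.0, 180.5, 260.3], variances=[4.0, 5.5, 6.1])
    print(f"Q = {Q:.1f}, I2 = {I2:.0f}%")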
A comparison of methods for assessing power output in non-uniform onshore wind farms
Staid, Andrea; VerHulst, Claire; Guikema, Seth D.
2017-10-02
Wind resource assessments are used to estimate a wind farm's power production during the planning process. It is important that these estimates are accurate, as they can impact financing agreements, transmission planning, and environmental targets. Here, we analyze the challenges in wind power estimation for onshore farms. Turbine wake effects are a strong determinant of farm power production. For given input wind conditions, wake losses typically cause downstream turbines to produce significantly less power than upstream turbines. These losses have been modeled extensively and are well understood under certain conditions; most notably, validation of different model types has favored offshore farms. Models that capture the dynamics of offshore wind conditions do not necessarily perform equally well for onshore wind farms. We analyze the capabilities of several different methods for estimating wind farm power production in 2 onshore farms with non-uniform layouts. We compare the Jensen model to a number of statistical models, to meteorological downscaling techniques, and to using no model at all. In conclusion, we show that the complexities of some onshore farms result in wind conditions that are not accurately modeled by the Jensen wake decay techniques and that statistical methods have some strong advantages in practice.
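The Jensen model referenced above predicts a top-hat wake whose velocity deficit decays with downstream distance. A minimal single-wake sketch (the parameter values are illustrative defaults, not the study's):

    import numpy as np

    def jensen_wake_speed(x, U=8.0, Ct=0.8, r0=40.0, k=0.075):
        # Hub-height speed a distance x (m) directly downstream of a turbine:
        # U free-stream speed (m/s), Ct thrust coefficient, r0 rotor radius (m),
        # k wake decay constant (~0.075 is a common onshore choice).
        return U * (1.0 - (1.0 - np.sqrt(1.0 - Ct)) * (r0 / (r0 + k * x)) ** 2)

    print(jensen_wake_speed(x=400.0))   # speed 400 m behind the rotor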
Brain age and other bodily 'ages': implications for neuropsychiatry.
Cole, James H; Marioni, Riccardo E; Harris, Sarah E; Deary, Ian J
2018-06-11
As our brains age, we tend to experience cognitive decline and are at greater risk of neurodegenerative disease and dementia. Symptoms of chronic neuropsychiatric diseases are also exacerbated during ageing. However, the ageing process does not affect people uniformly; nor, in fact, does the ageing process appear to be uniform even within an individual. Here, we outline recent neuroimaging research into brain ageing and the use of other bodily ageing biomarkers, including telomere length, the epigenetic clock, and grip strength. Some of these techniques, using statistical approaches, have the ability to predict chronological age in healthy people. Moreover, they are now being applied to neurological and psychiatric disease groups to provide insights into how these diseases interact with the ageing process and to deliver individualised predictions about future brain and body health. We discuss the importance of integrating different types of biological measurements, from both the brain and the rest of the body, to build more comprehensive models of the biological ageing process. Finally, we propose seven steps for the field of brain-ageing research to take in coming years. This will help us reach the long-term goal of developing clinically applicable statistical models of biological processes to measure, track and predict brain and body health in ageing and disease.
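Brain-age prediction of the kind described above regresses chronological age on imaging (or other biological) features and examines the gap between predicted and actual age. A schematic sketch with synthetic data (the features, sample size, model, and penalty are placeholders):

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 100))           # hypothetical imaging features
    age = rng.uniform(20, 90, size=500)       # chronological ages

    predicted = cross_val_predict(Ridge(alpha=1.0), X, age, cv=10)
    brain_age_gap = predicted - age           # positive gap: an "older"-looking brain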
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the first of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, various aspects of the modelling effort are examined. In particular, the need to save on core memory causes one to use only specific realizations that have certain initial characteristics; in effect, these transport simulations are conditioned by these characteristics. Also, the need to independently estimate length scales for the generated fields is discussed. The statistical uniformity of the flow field is investigated by plotting the variance of the seepage velocity for vector components in the x, y, and z directions. Finally, specific features of the velocity field itself are illuminated in this first paper. In particular, these data give one the opportunity to investigate the effective hydraulic conductivity in a flow field which is approximately statistically uniform; comparisons are made with first- and second-order perturbation analyses. The mean cloud velocity is examined to ascertain whether it is identical to the mean seepage velocity of the model. Finally, the variance in the cloud centroid velocity is examined for the effect of source size and differing strengths of local transverse dispersion.
Deterministic Impulsive Vacuum Foundations for Quantum-Mechanical Wavefunctions
NASA Astrophysics Data System (ADS)
Valentine, John S.
2013-09-01
By assuming that a fermion de-constitutes immediately at source, that its constituents, as bosons, propagate uniformly as scalar vacuum terms with phase (radial) symmetry, and that fermions are unique solutions for specific phase conditions, we find a model that self-quantizes matter from continuous waves, unifying boson and fermion ontologies in a single basis, in a constitution-invariant process. Vacuum energy has a wavefunction context, as a mass-energy term that enables wave collapse and increases its amplitude, with the gravitational field as the gradient of the flux density. Gravitational and charge-based force effects emerge as statistics without special treatment. Confinement, entanglement, vacuum statistics, forces, and wavefunction terms emerge from the model's deterministic foundations.
Testing block subdivision algorithms on block designs
NASA Astrophysics Data System (ADS)
Wiseman, Natalie; Patterson, Zachary
2016-01-01
Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.
Uniform functional structure across spatial scales in an intertidal benthic assemblage.
Barnes, R S K; Hamylton, Sarah
2015-05-01
To investigate the causes of the remarkable similarity of emergent assemblage properties that has been demonstrated across disparate intertidal seagrass sites and assemblages, this study examined whether their emergent functional-group metrics are scale related by testing the null hypothesis that functional diversity and the suite of dominant functional groups in seagrass-associated macrofauna are robust structural features of such assemblages and do not vary spatially across nested scales within a 0.4 ha area. This was carried out via a lattice of 64 spatially referenced stations. Although densities of individual components were patchily dispersed across the locality, rank orders of importance of the 14 functional groups present, their overall functional diversity and evenness, and the proportions of the total individuals contained within each showed, in contrast, statistically significant spatial uniformity, even at areal scales <2 m(2). Analysis of the proportional importance of the functional groups in their geospatial context also revealed weaker than expected levels of spatial autocorrelation, and then only at the smaller scales and amongst the most dominant groups, and only a small number of negative correlations occurred between the proportional importances of the individual groups. In effect, such patterning was a surface veneer overlying remarkable stability of assemblage functional composition across all spatial scales. Although assemblage species composition is known to be homogeneous in some soft-sediment marine systems over equivalent scales, this combination of patchy individual components yet basically constant functional-group structure seems as yet unreported. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Aruga, Yasuhiro; Kozuka, Masaya; Takaki, Yasuo; Sato, Tatsuo
2014-12-01
Temporal changes in the number density, size distribution, and chemical composition of clusters formed during natural aging at room temperature and pre-aging at 363 K (90 °C) in an Al-0.62Mg-0.93Si (mass pct) alloy were evaluated using atom probe tomography. More than 10 million atoms were examined in the cluster analysis, yielding about 1000 clusters for each material after the various aging treatments. These statistically robust data show that both the number density and the average radius of clusters in pre-aged materials are larger than in naturally aged materials. It was revealed that the fraction of clusters with a low Mg/Si ratio after short natural aging is higher than with other aging treatments, regardless of cluster size. This indicates that Si-rich clusters form more easily during short-period natural aging, and that Mg atoms can diffuse into the clusters, or possibly form another type of Mg-Si cluster, after prolonged natural aging. The formation of large clusters with a uniform Mg/Si ratio is encouraged by pre-aging. It can be concluded that an increase of small clusters with various Mg/Si ratios does not promote the bake-hardening (BH) response, whereas large clusters with a uniform Mg/Si ratio play an important role in hardening during the BH treatment at 443 K (170 °C).
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity, which results from dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance while keeping the number of stored coefficients minimal. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short-wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis of polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
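A per-pixel polynomial NUC of the kind evaluated above fits each detector's response at several calibration irradiance levels and inverts it at run time. A minimal sketch (array shapes and the calibration protocol are assumptions; real pipelines vectorize the fit and handle bad pixels separately):

    import numpy as np

    def fit_nuc_coeffs(raw_stack, targets, order=3):
        # raw_stack: (K, H, W) raw responses at K calibration levels (K > order);
        # targets: (K,) desired uniform output at each level.
        K, H, W = raw_stack.shape
        coeffs = np.empty((order + 1, H, W))
        for i in range(H):                     # clear but slow; vectorize in practice
            for j in range(W):
                coeffs[:, i, j] = np.polyfit(raw_stack[:, i, j], targets, order)
        return coeffs

    def apply_nuc(frame, coeffs):
        # Horner evaluation of the per-pixel polynomials (highest order first).
        out = np.zeros_like(frame, dtype=float)
        for c in coeffs:
            out = out * frame + c
        return out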
NASA Astrophysics Data System (ADS)
Chen, Ho-Wen; Chen, Wei-Yea; Chang, Cheng-Nan; Chuang, Yen-Hsun; Lin, Yu-Hao
2016-06-01
The recently developed Central Taiwan Science Park (CTSP) in central Taiwan is home to an optoelectronic and semiconductor industrial cluster, so exploring the elemental compositions and size distributions of airborne particles emitted from the CTSP would help to prevent pollution. This study analyzed size-fractionated metal-rich particle samples collected in upwind and downwind areas of the CTSP during January and October 2013 using a micro-orifice uniform deposit impactor (MOUDI). Correlation analysis, hierarchical cluster analysis, and particle mass-size distribution analysis were performed to identify the sources of metal-rich particles near the CTSP. Analyses of the elemental compositions and particle size distributions revealed that the CTSP emits some metals (V, As, In, Ga, Cd and Cu) in the ultrafine particles (< 1 μm). The statistical analyses combined with the particle mass-size distribution analysis provide useful source identification information. In airborne particles with a size of 0.32 μm, Ga could be a useful pollution index for optoelectronic and semiconductor emissions from the CTSP. Meanwhile, the As/Ga concentration ratios at the particle size of 0.32 μm demonstrate that humans near the CTSP could be exposed to GaAs ultrafine particles. That is, metals such as Ga and As, and other metals that are not regulated in Taiwan, are potentially harmful to human health.
Modeling of the Geosocial Process using GIS «Disasters»
NASA Astrophysics Data System (ADS)
Vikulina, Marina; Turchaninova, Alla; Dolgaya, Anna; Vikulin, Alexandr; Petrova, Elena
2016-04-01
Natural and social disasters generate great stress in the world community. Most studies searching for relationships between different catastrophic events consider limited sets of disasters and do not take their size into account, which casts doubt on the completeness and statistical significance of such an approach. The next indispensable step is therefore to move from narrowly framed studies of disasters to more comprehensive ones. In order to study the relationships between Nature and Society, a database of natural disasters and dreadful social events occurring during the last 36 centuries of human history, weighted by magnitude, was created; it forms the core of the GIS «Disasters» (ArcGIS 10.0). At present the database includes more than 2500 of the most socially significant ("strong") catastrophic natural events (earthquakes, fires, floods, droughts, climatic anomalies, other natural disasters) as well as social events (wars, revolts, genocide, epidemics, fires caused by humans, other social disasters). So far, each event is represented as a point feature located at the center of the affected region on the world map; if an event affected several countries, it is placed at the approximate center of the affected area. Every event refers to the country or group of countries located in its zone of influence today. A grade J (I, II or III) is assigned to each event according to the disaster force assessment scale developed by the authors. A GIS with such a detailed database of disastrous events, weighted by magnitude over a long period of time, is compiled for the first time and creates a fairly complete and statistically representative basis for studying the distribution of natural and social disasters and their relationship. The statistical analysis of the database, performed both for each aggregate (natural disasters and catastrophic social phenomena) and for particular statistically representative types of events, has so far led to the following conclusions: natural disasters and dreadful social events appear to be closely related to each other despite their apparently different natures; the numbers of events of different magnitude follow a logarithmic law (the bigger the event, the less likely it is to happen); and for each type of event and each aggregate, periodicities with periods of 280 ± 60 years were established. The identified properties of cyclicity, grouping and interaction create a basis for modeling an essentially unified Geosocial Process at a high statistical level and support the existence of a uniform planetary Geosocial Process. The evidence of interaction between "lifeless" Nature and Society is fundamental and provides a new approach to forecasting demographic crises that takes into account both natural disasters and social phenomena. The idea of the interaction of Nature and Society through the «exchange» of disasters as a uniform planetary Geosocial Process is an essentially new statement introduced for the first time.
Qualification of security printing features
NASA Astrophysics Data System (ADS)
Simske, Steven J.; Aronoff, Jason S.; Arnabat, Jordi
2006-02-01
This paper describes the statistical and hardware processes involved in qualifying two related printing features for their deployment in product (e.g. document and package) security. The first is a multi-colored tiling feature that can also be combined with microtext to provide additional forms of security protection. The color information is authenticated automatically with a variety of handheld, desktop and production scanners. The microtext is authenticated either following magnification or manually by a field inspector. The second security feature can also be tile-based. It involves the use of two inks that provide the same visual color, but differ in their transparency to infrared (IR) wavelengths. One of the inks is effectively transparent to IR wavelengths, allowing emitted IR light to pass through. The other ink is effectively opaque to IR wavelengths. These inks allow the printing of a seemingly uniform, or spot, color over a (truly) uniform IR emitting ink layer. The combination converts a uniform covert ink and a spot color to a variable data region capable of encoding identification sequences with high density. Also, it allows the extension of variable data printing for security to ostensibly static printed regions, affording greater security protection while meeting branding and marketing specifications.
Seasonal variation of acute toxoplasmic lymphadenopathy in the United States.
Contopoulos-Ioannidis, D; Talucod, J; Maldonado, Y; Montoya, J G
2015-07-01
We describe the seasonal variation of acute toxoplasmosis in the United States. Acute toxoplasmic lymphadenopathy (ATL) can be a surrogate of acute toxoplasmosis in patients in whom the date of onset of lymphadenopathy matches the window of acute infection predicted by serological tests performed at a reference laboratory. We used the electronic database of the Palo Alto Medical Foundation Toxoplasma Serology Laboratory (PAMF-TSL) (1997-2011) to identify cases of ATL. We tested the uniformity of distribution of ATL cases per month, across the 12 calendar months, using circular statistics uniformity tests. We identified 112 consecutive cases of ATL. The distribution of cases was not uniform across the 12 calendar months. We observed the highest peak of cases in December and a second highest peak in September. Similar months were identified in patients with acute toxoplasmosis in rural areas in France. The results were similar when we performed weighted analyses, weighting for the total number of Toxoplasma gondii IgG tests performed per month in the PAMF-TSL laboratory. This is the largest study to date of the seasonal variation of ATL in the United States. Physicians should advise high-risk individuals to avoid risk factors associated with T. gondii infections especially around those months.
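Circular uniformity tests of the kind applied above map each onset month to an angle on the circle and ask whether the angles cluster. A sketch of the Rayleigh test, one common such test (the study does not state exactly which tests were used, and the month data below are invented):

    import numpy as np

    def rayleigh_test(angles):
        # Rayleigh test of circular uniformity; angles in radians.
        n = len(angles)
        R = np.hypot(np.mean(np.cos(angles)), np.mean(np.sin(angles)))
        z = n * R ** 2
        p = np.exp(-z) * (1 + (2 * z - z ** 2) / (4 * n))   # first-order approximation
        return z, min(max(p, 0.0), 1.0)

    months = np.array([12, 9, 12, 1, 9, 10, 12, 8])         # hypothetical onset months
    z, p = rayleigh_test(2 * np.pi * (months - 0.5) / 12)
    print(f"z = {z:.2f}, p = {p:.3f}")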
Goodness of fit of probability distributions for sightings as species approach extinction.
Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael
2009-04-01
Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
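A PPCC test against the uniform model, as used above, correlates the ordered sighting dates with uniform plotting positions; values well below 1 argue against the model. A minimal sketch (the sighting years are invented; a real test compares the PPCC to critical values from tables or simulation):

    import numpy as np

    def ppcc_uniform(sightings):
        # Correlate ordered data with approximate uniform order-statistic
        # medians i/(n+1).
        x = np.sort(np.asarray(sightings, dtype=float))
        n = len(x)
        m = np.arange(1, n + 1) / (n + 1.0)
        return np.corrcoef(x, m)[0, 1]

    years = np.array([1871, 1880, 1884, 1892, 1901, 1905, 1915])  # hypothetical record
    print(f"PPCC (uniform) = {ppcc_uniform(years):.3f}")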
Rasch analysis of the patient-rated wrist evaluation questionnaire.
Esakki, Saravanan; MacDermid, Joy C; Vincent, Joshua I; Packham, Tara L; Walton, David; Grewal, Ruby
2018-01-01
The Patient-Rated Wrist Evaluation (PRWE) was developed as a wrist joint-specific measure of pain and disability, and evidence of sound validity has been accumulated through classical psychometric methods. Rasch analysis (RA) has been endorsed as a newer method for analyzing the clinical measurement properties of self-report outcome measures. The purpose of this study was to evaluate the PRWE using Rasch modeling. We employed the Rasch model to assess overall fit, response scaling, individual item fit, differential item functioning (DIF), local dependency, unidimensionality, and the person separation index (PSI). A convenience sample of 382 patients with distal radius fracture was recruited from the hand and upper limb clinic of a large academic healthcare organization in London, Ontario, Canada; 6-month post-injury PRWE scores were used. RA was conducted separately on the 3 subscales (pain, specific activities, and usual activities) of the PRWE. The pain subscale adequately fit the Rasch model once item 4 ("Pain - When it is at its worst") was deleted to eliminate non-uniform DIF by age group, and item 5 ("How often do you have pain") was rescored by collapsing into 8 intervals to eliminate disordered thresholds. Uniform DIF for "Use my affected hand to push up from the chair" (by work status) and "Use bathroom tissue with my affected hand" (by injured hand) was addressed by splitting the items for analysis. After rescoring of 2 items in the pain subscale, 2 items in specific activities, and 3 items in usual activities, all three subscales of the PRWE were well targeted and had high reliability (PSI = 0.86). These changes provided a unidimensional, interval-level scaled measure. Like a previous analysis of the Patient-Rated Wrist and Hand Evaluation, this study found the PRWE could be fit to the Rasch model with rescoring of multiple items. However, the modifications required to achieve fit were not the same across studies, and our fit statistics also suggested one of the pain items should be deleted. This study adds to the pool of evidence supporting the PRWE, but cannot confidently provide a Rasch-based scoring algorithm.
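For reference, the dichotomous Rasch model underlying this family of analyses is, in the usual notation,

    \[
      P\bigl(X_{ni} = 1 \mid \theta_n, b_i\bigr)
        \;=\; \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)} ,
    \]

where theta_n is the person measure (here, wrist pain/disability) and b_i the item difficulty; the PRWE's 0-10 rated items call for a polytomous extension such as the partial credit model.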
A Uniform Identity: Schoolgirl Snapshots and the Spoken Visual
ERIC Educational Resources Information Center
Spencer, Stephanie
2007-01-01
This article discusses the possibility for expanding our understanding of the visual to include the "spoken visual" within oral history analysis. It suggests that adding a further reading, that of the visualized body, to the voice-centred relational method we can consider the meaning of the uniformed body for the individual. It uses as a…
ERIC Educational Resources Information Center
Rowland, D. R.
2007-01-01
The physical analysis of a uniformly accelerating point charge provides a rich problem to explore in advanced courses in electrodynamics and relativity since it brings together fundamental concepts in relation to electromagnetic radiation, Einstein's equivalence principle and the inertial mass of field energy in ways that reveal subtleties in each…
Sampling and counting genome rearrangement scenarios
2015-01-01
Background: Even for moderate-size inputs, there are a tremendous number of optimal rearrangement scenarios, regardless of the model and of the specific question to be answered. Therefore, giving one optimal solution might be misleading and cannot be used for statistical inference. Statistically well-founded methods are necessary to sample uniformly from the solution space; a small number of such samples is then sufficient for statistical inference. Contribution: In this paper, we give a mini-review of the state of the art in sampling and counting rearrangement scenarios, focusing on the reversal, DCJ and SCJ models. In addition, we give a Gibbs sampler for sampling the most parsimonious labelings of evolutionary trees under the SCJ model. The method has been implemented and tested on real-life data. The software package together with example data can be downloaded from http://www.renyi.hu/~miklosi/SCJ-Gibbs/ PMID:26452124
Second Law based definition of passivity/activity of devices
NASA Astrophysics Data System (ADS)
Sundqvist, Kyle M.; Ferry, David K.; Kish, Laszlo B.
2017-10-01
Recently, our efforts to clarify the old question of whether a memristor is a passive or an active device [1] triggered debates between engineers, who have long had advanced definitions of passivity/activity of devices, and physicists with significantly different views on this seemingly simple question. This debate prompted us to test the well-known engineering concepts of passivity/activity more deeply, challenging them with statistical physics. It is shown that the advanced engineering definition of passivity/activity of devices is self-contradictory when a thermodynamical system exhibiting Johnson-Nyquist noise is present. A new, statistical-physical, self-consistent definition based on the Second Law of Thermodynamics is introduced. It is also shown that, in a system with uniform temperature distribution, any rectifier circuitry that can rectify thermal noise must contain an active circuit element, according to both the engineering and statistical-physical definitions.
Design Techniques for Uniform-DFT, Linear Phase Filter Banks
NASA Technical Reports Server (NTRS)
Sun, Honglin; DeLeon, Phillip
1999-01-01
Uniform-DFT filter banks are an important class of filter banks and their theory is well known. One notable characteristic is their very efficient implementation using polyphase filters and the FFT. Separately, linear-phase filter banks, i.e. filter banks in which the analysis filters have linear phase, are also an important class and are desired in many applications. Unfortunately, it has been proved that one cannot design critically-sampled, uniform-DFT, linear-phase filter banks that achieve perfect reconstruction. In this paper, we present a least-squares solution to this problem and, in addition, prove that oversampled, uniform-DFT, linear-phase filter banks (which are also useful in many applications) can be constructed for perfect reconstruction. Design examples are included to illustrate the methods.
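As a concrete illustration (not the authors' least-squares design), a uniform-DFT analysis bank is a bank of complex-modulated copies of one prototype lowpass filter. A direct, unoptimised sketch with hypothetical parameters:

```python
# Direct M-channel uniform-DFT analysis bank: each subband is the input
# modulated down by k/M, lowpass-filtered by the prototype h, and
# decimated by M. The efficient implementation replaces the loop over k
# with a polyphase network and a single FFT.
import numpy as np
from scipy.signal import firwin, lfilter

def uniform_dft_analysis(x, h, M):
    n = np.arange(len(x))
    bands = []
    for k in range(M):
        xk = x * np.exp(-2j * np.pi * k * n / M)   # shift band k to baseband
        bands.append(lfilter(h, [1.0], xk)[::M])   # filter, decimate by M
    return np.array(bands)

M = 8
h = firwin(8 * M, 1.0 / M)     # linear-phase FIR prototype, cutoff pi/M
x = np.random.default_rng(0).standard_normal(1024)
subbands = uniform_dft_analysis(x, h, M)           # shape (8, 128)
```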
Impact of Variable-Resolution Meshes on Regional Climate Simulations
NASA Astrophysics Data System (ADS)
Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.
2014-12-01
The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitation, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using ERA-Interim re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable-resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.
Impact of Variable-Resolution Meshes on Regional Climate Simulations
NASA Astrophysics Data System (ADS)
Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.
2013-12-01
The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitation, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using NCEP/NCAR re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable-resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.
Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary
2018-04-29
Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how interaction of subject distribution, sample size, and levels of rater disagreement affects ICC and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, ICC from the convex distribution is smaller than ICC for the uniform distribution, which in turn is smaller than ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (i.e., distribution of subjects) component of subject variability and not the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
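A sketch of the quantity being simulated, assuming the common two-way random-effects ICC(2,1) of Shrout and Fleiss (the abstract does not specify which ICC form); the data below are hypothetical and illustrate why ICC rises with between-subject spread relative to rater error:

```python
# ICC(2,1): two-way random effects, absolute agreement, single rater
# (Shrout & Fleiss), computed from ANOVA mean squares.
import numpy as np

def icc_2_1(Y):
    n, k = Y.shape
    grand = Y.mean()
    SSR = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    SSC = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between raters
    SSE = ((Y - grand) ** 2).sum() - SSR - SSC        # residual
    MSR, MSC, MSE = SSR / (n - 1), SSC / (k - 1), SSE / ((n - 1) * (k - 1))
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

rng = np.random.default_rng(1)
true = rng.uniform(0, 10, (80, 1))           # uniform severity, n = 80
Y = true + rng.normal(0, 1.0, (80, 3))       # 3 raters, independent error
print(round(icc_2_1(Y), 2))                  # high ICC for wide spread
```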
NASA Astrophysics Data System (ADS)
Koteswararao, B.; Hazra, Binoy K.; Rout, Dibyata; Srinivasarao, P. V.; Srinath, S.; Panda, S. K.
2017-07-01
We have studied the structural and magnetic properties and electronic structure of the compound InCuPO5 synthesized by a solid-state reaction method. The structure of InCuPO5 comprises S = ½ uniform spin chains formed by corner-shared CuO4 units. Magnetic susceptibility (χ(T)) data show a broad maximum at about 65 K, a characteristic feature of one-dimensional (1D) magnetism. The χ(T) data are fitted to the coupled S = ½ Heisenberg antiferromagnetic (HAFM) uniform chain model, which gives the intra-chain coupling (J/k_B) between nearest-neighbor Cu2+ ions as -100 K and the ratio of inter-chain to intra-chain coupling (J′/J) as about 0.07. The exchange couplings estimated from the magnetic data analysis are in good agreement with the values computed from electronic structure calculations based on the density functional theory + Hubbard U (DFT + U) approach. The combination of theoretical and experimental analysis confirms that InCuPO5 is a candidate material for weakly coupled S = ½ uniform chains. A detailed theoretical analysis of the electronic structure further reveals that the system is insulating with a gap of 2.4 eV and a local moment of 0.70 µ_B/Cu.
Spatial variability of turbulent fluxes in the roughness sublayer of an even-aged pine forest
Katul, G.; Hsieh, C.-I.; Bowling, D.; Clark, K.; Shurpali, N.; Turnipseed, A.; Albertson, J.; Tu, K.; Hollinger, D.; Evans, B. M.; Offerle, B.; Anderson, D.; Ellsworth, D.; Vogel, C.; Oren, R.
1999-01-01
The spatial variability of turbulent flow statistics in the roughness sublayer (RSL) of a uniform even-aged 14 m (= h) tall loblolly pine forest was investigated experimentally. Using seven existing walkup towers at this stand, high-frequency velocity, temperature, water vapour and carbon dioxide concentrations were measured at 15.5 m above the ground surface from October 6 to 10 in 1997. The seven towers were separated by at least 100 m from each other. The objective of this study was to examine whether single-tower turbulence statistics represent the flow properties of RSL turbulence above a uniform even-aged managed loblolly pine forest as a best-case scenario for natural forested ecosystems. From the intensive space-time series measurements, it was demonstrated that the standard deviations of the longitudinal and vertical velocities (σ_u, σ_w) and temperature (σ_T) are more planar-homogeneous than their vertical flux of momentum (u_*²) and sensible heat (H) counterparts. Also, the measured H is more horizontally homogeneous than the fluxes of other scalar entities such as CO2 and water vapour. While the spatial variability in fluxes was significant (> 15%), this unique data set confirmed that single-tower measurements represent the 'canonical' structure of single-point RSL turbulence statistics, especially flux-variance relationships. Implications for extending the 'moving-equilibrium' hypothesis to RSL flows are discussed. The spatial variability in all RSL flow variables was not constant in time and varied strongly with the spatially averaged friction velocity u_*, especially when u_* was small. It is shown that flow properties derived from two-point temporal statistics such as correlation functions are more sensitive to local variability in leaf area density than single-point flow statistics. Specifically, the local relationship between the reciprocal of the vertical velocity integral time scale (I_w) and the arrival frequency of organized structures (ū/h) predicted from mixing-layer theory exhibited a dependence on the local leaf area index. The broader implications of these findings for the measurement and modelling of RSL flows are also discussed.
NASA Astrophysics Data System (ADS)
Ferchichi, Mohsen
This study is an experimental investigation consisting of two parts. In the first part, the fine structure of uniformly sheared turbulence was investigated within the framework of Kolmogorov's (1941) similarity hypotheses. The second part consisted of a study of scalar mixing in uniformly sheared turbulence with an imposed mean scalar gradient, with emphasis on measurements relevant to the probability density function formulation and on scalar derivative statistics. The velocity fine structure was inferred from statistics of the streamwise and transverse derivatives of the streamwise velocity, as well as velocity differences and structure functions, measured with hot-wire anemometry for turbulence Reynolds numbers, Re_λ, in the range between 140 and 660. The streamwise derivative skewness and flatness agreed with previously reported results in that they increased with increasing Re_λ, with the flatness increasing at a higher rate. The skewness of the transverse derivative decreased with increasing Re_λ, and the flatness of this derivative increased with Re_λ but at a lower rate than the streamwise derivative flatness. The high-order (up to sixth) transverse structure functions of the streamwise velocity showed the same trends as the corresponding streamwise structure functions. In the second part of this experimental study, an array of heated ribbons was introduced into the flow to produce a constant mean temperature gradient, such that the temperature acted as a passive scalar. Re_λ in this study varied from 184 to 253. Cold-wire thermometry and hot-wire anemometry were used for simultaneous measurements of temperature and velocity. The scalar pdf was found to be nearly Gaussian. Various tests of the joint statistics of the scalar and its rate of destruction revealed that the scalar dissipation rate was essentially independent of the scalar value. The measured joint statistics of the scalar and the velocity suggested that they were nearly jointly normal and that the normalized conditional expectations varied linearly with the scalar, with slopes corresponding to the scalar-velocity correlation coefficients. Finally, the measured streamwise and transverse scalar derivatives and differences revealed that the scalar fine structure was intermittent not only in the dissipative range but in the inertial range as well.
Uniformity of LED light illumination in application to direct imaging lithography
NASA Astrophysics Data System (ADS)
Huang, Ting-Ming; Chang, Shenq-Tsong; Tsay, Ho-Lin; Hsu, Ming-Ying; Chen, Fong-Zhi
2016-09-01
Direct imaging has been widely applied in lithography for a long time because of its simplicity and easy maintenance. Although this method limits the achievable lithography resolution, it is still adopted in industry. Uniformity of UV irradiance over a designed area is an important requirement. While mercury lamps were used as the light source in the early stage, LEDs have since drawn a lot of attention for several reasons. Although LED performance keeps improving, arrays of LEDs are required to obtain the desired irradiance because of the limited brightness of a single LED. Several effects that influence the uniformity of UV irradiance are considered, such as alignment of the optics, temperature of each LED, performance variation of each LED due to production tolerances, and pointing of each LED module. The effects of these factors are considered to study the uniformity of LED light illumination. Numerical analysis is performed by assuming a series of control factors to gain a better understanding of each factor.
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Taylor, Deborah B.
1987-01-01
An explicit solution of the spectral radiance leaving an arbitrary point on the wall of a spherical cavity with diffuse reflectivity is obtained. The solution is applicable to spheres with an arbitrary number of openings of any size and shape, an arbitrary number of light sources with possible non-diffuse characteristics, a non-uniform sphere wall temperature distribution, non-uniform and non-diffuse sphere wall emissivity and non-uniform but diffuse sphere wall spectral reflectivity. A general measurement equation describing the output of a sensor with a given field of view, angular and spectral response measuring the sphere output is obtained. The results are applied to the Earth Radiation Budget Experiment (ERBE) integrating sphere. The sphere wall radiance uniformity, loading effects and non-uniform wall temperature effects are investigated. It is shown that using appropriate interpretation and processing, a high-accuracy short-wave calibration of the ERBE sensors can be achieved.
Effect of aerated concrete blockwork joints on the heat transfer performance uniformity
NASA Astrophysics Data System (ADS)
Pukhkal, Viktor; Murgul, Vera
2018-03-01
Analysis of data on the effect of joints between aerated concrete blocks on the heat transfer uniformity of exterior walls was carried out. It was concluded that the values of the heat transfer performance uniformity factor in the literature were obtained for a regular fragment of a wall construction by approximate addition of thermal conductivities. Heat flow patterns for aerated concrete exterior walls, for different values of the thermal conductivity factor and a design ambient air temperature of -26 °C, were calculated with the "ELCUT" software for modelling thermal patterns by the finite element method. Values were determined for the heat transfer performance uniformity factor, the reduced total thermal resistance and the heat-flux density of the exterior walls. The calculated values of the heat transfer performance uniformity factor, as a function of the thermal conductivity of the aerated concrete blocks, differ from the known data, having a more rigorous thermal and physical substantiation.
The uniformity study of non-oxide thin film at device level using electron energy loss spectroscopy
NASA Astrophysics Data System (ADS)
Li, Zhi-Peng; Zheng, Yuankai; Li, Shaoping; Wang, Haifeng
2018-05-01
Electron energy loss spectroscopy (EELS) has been widely used as a chemical analysis technique to characterize material chemistry, such as element valence states and the bonding environment of atoms/ions. This study provides a new method to characterize physical properties (i.e., film uniformity, grain orientations) of non-oxide thin films in magnetic devices by using EELS microanalysis on a scanning transmission electron microscope. The method is based on analyzing the white-line ratio of the spectra and the related extended energy-loss fine structure so as to correlate them with thin-film uniformity. This approach provides an effective and sensitive way to monitor and characterize thin-film quality (i.e., uniformity) at the atomic level during thin-film development, which is especially useful for examining ultra-thin films (i.e., several nanometers thick) or films embedded in devices for industrial applications. More importantly, this technique enables quantitative characterization of thin-film uniformity and should prove remarkably useful for examining various types of devices for industrial applications.
Experimental Research on Optimizing Inlet Airflow of Wet Cooling Towers under Crosswind Conditions
NASA Astrophysics Data System (ADS)
Chen, You Liang; Shi, Yong Feng; Hao, Jian Gang; Chang, Hao; Sun, Feng Zhong
2018-01-01
A new approach of installing air deflectors circumferentially around the tower inlet was proposed to optimize the inlet airflow and reduce the adverse effect of crosswinds on the thermal performance of natural draft wet cooling towers (NDWCT). An inlet airflow uniformity coefficient was defined to quantify the uniformity of the circumferential inlet airflow. The effect of the air deflectors on NDWCT performance was then investigated experimentally. Comparison of the inlet air flow rate and cooling efficiency showed that crosswinds not only decrease the inlet air flow rate but also reduce the uniformity of the inlet airflow, and the two effects jointly degrade NDWCT performance. After installing the air deflectors, the inlet air flow rate and the uniformity coefficient increase, the uniformity of heat and mass transfer increases correspondingly, and the cooling performance improves. In addition, an analysis of the Lewis factor demonstrates that the inlet airflow optimization enhances heat transfer more than mass transfer, but leads to more water loss by evaporation.
Third-Party Protest Regime and GAO Protest Statistics: DoD vs. Other Federal Agencies
2009-04-01
collectively mark a critical and fundamental shift in the procurement system. This shift results in the divergence of government contract law from...private contract law.17 The discretion and deference afforded executive agencies by the judiciary in procurement decisions no longer resembled...the district courts' Scanwell jurisdiction on the basis that it would promote the development of a more uniform body of contract law while increasing
Deviations from uniform power law scaling in nonstationary time series
NASA Technical Reports Server (NTRS)
Viswanathan, G. M.; Peng, C. K.; Stanley, H. E.; Goldberger, A. L.
1997-01-01
A classic problem in physics is the analysis of highly nonstationary time series that typically exhibit long-range correlations. Here we test the hypothesis that the scaling properties of the dynamics of healthy physiological systems are more stable than those of pathological systems by studying beat-to-beat fluctuations in the human heart rate. We develop techniques based on the Fano factor and Allan factor functions, as well as on detrended fluctuation analysis, for quantifying deviations from uniform power-law scaling in nonstationary time series. By analyzing extremely long data sets of up to N = 10^5 beats for 11 healthy subjects, we find that the fluctuations in the heart rate scale approximately uniformly over several temporal orders of magnitude. By contrast, we find that in data sets of comparable length for 14 subjects with heart disease, the fluctuations grow erratically, indicating a loss of scaling stability.
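Of the three techniques, detrended fluctuation analysis (DFA) is the most widely reused; a compact first-order DFA sketch (window sizes and test data are hypothetical):

```python
# First-order DFA: integrate the series, detrend it linearly in
# non-overlapping windows of size s, and report the RMS residual F(s).
# Uniform power-law scaling means log F(s) vs log s is a straight line;
# its slope is the scaling exponent alpha.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        m = len(y) // s
        segs = y[:m * s].reshape(m, s)
        t = np.arange(s)
        resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        F.append(np.sqrt(np.mean(np.square(resid))))
    return np.array(F)

x = np.random.default_rng(0).standard_normal(10_000)   # white noise
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(round(alpha, 2))   # ~0.5 for white noise; healthy heart rate gives ~1
```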
Plane hydroelastic beam vibrations due to uniformly moving one axle vehicle
NASA Astrophysics Data System (ADS)
Fleischer, D.; Park, S.-K.
2004-06-01
The hydroelastic vibrations of a beam with rectangular cross-section are analyzed under the effect of a uniformly moving single-axle vehicle, using modal analysis and two-dimensional potential flow theory for the fluid, neglecting the effect of surface waves alongside the beam. For the special case of a homogeneous beam resting on the surface of a water-filled prismatic basin, the normal modes are determined considering surface waves in the beam direction under the condition of compensating the volume of the enclosed fluid. The method for determining the vertical acceleration of the single-axle vehicle, which governs the response of the system, is shown. As analysis results, the time course of the wheel load, the surface waves along the beam, and the flow velocity distribution of the fluid are demonstrated for a continuous floating bridge under the passage of a rolling mass moving at uniform speed.
Particle circulation and solids transport in large bubbling fluidized beds. Progress report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Homsy, G.M.
1982-04-01
We have undertaken a theoretical study of the possibility of the formation of plumes or channeling when coal particles volatilize upon introduction to a fluidized bed, Fitzgerald (1980). We have completed the analysis of the basic state of uniform flow and are currently completing a stability analysis. We have modified the continuum equations of fluidization, Homsy et al. (1980), to include the source of gas due to volatilization, which we assume to be uniformly distributed spatially. Simplifying these equations and solving leads to the prediction of a basic state analogous to the state of uniform fluidization found when no source is present within the medium. We are currently completing a stability analysis of this basic state which will give the critical volatilization rate above which the above simple basic state is unstable. Because of the experimental evidence of Jewett and Lawless (1981), who observed regularly spaced plume-like instabilities upon drying a bed of saturated silica gel, we are considering two-dimensional periodic disturbances. The analysis is similar to that given by Homsy et al. (1980) and Medlin et al. (1974). We hope to determine the stability limits for this system shortly.
An, Ran; Massa, Katherine
2014-01-01
AC Faradaic reactions have been reported as a mechanism inducing non-ideal phenomena such as flow reversal and cell deformation in electrokinetic microfluidic systems. Prior published work described experiments in parallel electrode arrays below the electrode charging frequency (fc), the frequency for electrical double layer charging at the electrode. However, 2D spatially non-uniform AC electric fields are required for applications such as in plane AC electroosmosis, AC electrothermal pumps, and dielectrophoresis. Many microscale experimental applications utilize AC frequencies around or above fc. In this work, a pH sensitive fluorescein sodium salt dye was used to detect [H+] as an indicator of Faradaic reactions in aqueous solutions within non-uniform AC electric fields. Comparison experiments with (a) parallel (2D uniform fields) electrodes and (b) organic media were employed to deduce the electrode charging mechanism at 5 kHz (1.5fc). Time dependency analysis illustrated that Faradaic reactions exist above the theoretically predicted electrode charging frequency. Spatial analysis showed [H+] varied spatially due to electric field non-uniformities and local pH changed at length scales greater than 50 μm away from the electrode surface. Thus, non-uniform AC fields yielded spatially varied pH gradients as a direct consequence of ion path length differences while uniform fields did not yield pH gradients; the latter is consistent with prior published data. Frequency dependence was examined from 5 kHz to 12 kHz at 5.5 Vpp potential, and voltage dependency was explored from 3.5 to 7.5 Vpp at 5 kHz. Results suggest that Faradaic reactions can still proceed within electrochemical systems in the absence of well-established electrical double layers. This work also illustrates that in microfluidic systems, spatial medium variations must be considered as a function of experiment time, initial medium conditions, electric signal potential, frequency, and spatial position. PMID:25553200
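For context, and as an assumption on our part rather than a statement from the paper: the electrode charging frequency f_c is commonly estimated from the RC time of double-layer charging (λ_D the Debye length, L the electrode separation, D the ion diffusivity), with the prefactor indicative only:

```latex
\tau_c \sim \frac{\lambda_D L}{D},
\qquad
f_c \sim \frac{1}{2\pi\tau_c} = \frac{D}{2\pi \lambda_D L}
```

On this estimate, driving at 5 kHz ≈ 1.5 f_c places the experiment just above the regime in which the double layer can fully charge each cycle, consistent with the abstract's framing.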
Havens, Timothy C; Roggemann, Michael C; Schulz, Timothy J; Brown, Wade W; Beyer, Jeff T; Otten, L John
2002-05-20
We discuss a method of data reduction and analysis that has been developed for a novel experiment to detect anisotropic turbulence in the tropopause and to measure the spatial statistics of these flows. The experimental concept is to make measurements of temperature at 15 points on a hexagonal grid for altitudes from 12,000 to 18,000 m while suspended from a balloon performing a controlled descent. From the temperature data, we estimate the index of refraction and study the spatial statistics of the turbulence-induced index of refraction fluctuations. We present and evaluate the performance of a processing approach to estimate the parameters of an anisotropic model for the spatial power spectrum of the turbulence-induced index of refraction fluctuations. A Gaussian correlation model and a least-squares optimization routine are used to estimate the parameters of the model from the measurements. In addition, we implemented a quick-look algorithm to have a computationally nonintensive way of viewing the autocorrelation function of the index fluctuations. The autocorrelation of the index of refraction fluctuations is binned and interpolated onto a uniform grid from the sparse points that exist in our experiment. This allows the autocorrelation to be viewed with a three-dimensional plot to determine whether anisotropy exists in a specific data slab. Simulation results presented here show that, in the presence of the anticipated levels of measurement noise, the least-squares estimation technique allows turbulence parameters to be estimated with low rms error.
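A schematic of the estimation step, assuming (as the abstract states) a Gaussian correlation model fitted by least squares. The exact parameterisation used by the authors is not given here, so the anisotropic form below, with distinct correlation lengths per axis, is an assumption:

```python
# Least-squares fit of an anisotropic Gaussian correlation model
#   B(rx, ry) = sigma2 * exp(-(rx/Lx)**2 - (ry/Ly)**2)
# to sparse sample-autocorrelation values B_meas at separations r.
# Anisotropy shows up as Lx != Ly.
import numpy as np
from scipy.optimize import least_squares

def fit_gaussian_correlation(r, B_meas):
    model = lambda p: p[0] * np.exp(-(r[:, 0] / p[1]) ** 2
                                    - (r[:, 1] / p[2]) ** 2)
    fit = least_squares(lambda p: model(p) - B_meas,
                        x0=[1.0, 1.0, 1.0], bounds=(1e-6, np.inf))
    return fit.x   # sigma2, Lx, Ly

# synthetic check: anisotropic truth recovered from noisy samples
rng = np.random.default_rng(2)
r = rng.uniform(-3, 3, (200, 2))
B = 2.0 * np.exp(-(r[:, 0] / 0.5) ** 2 - (r[:, 1] / 1.5) ** 2)
print(np.round(fit_gaussian_correlation(r, B + rng.normal(0, 0.01, 200)), 2))
```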
Histological Validity and Clinical Evidence for Use of Fractional Lasers for Acne Scars
Sardana, Kabir; Garg, Vijay K; Arora, Pooja; Khurana, Nita
2012-01-01
Though fractional lasers are widely used for acne scars, very little clinical or histological data based on objective clinical assessment or on the depth of laser penetration in in vivo facial tissue are available. Depth is probably the most important aspect predicting improvement in acne scars, but histological studies have little uniformity in terms of the substrate (tissue) used, the processing and the stains. The variability of the laser settings (dose, pulses and density) makes comparison of the studies difficult; it is easier to compare the end results, namely histological depth and clinical outcomes. We collected all published clinical and histological studies of fractional lasers in acne scars and analysed both the clinical and the histological data with statistical software to assess their significance. On statistical analysis, the depth was found to be variable, with the 1550-nm lasers achieving a depth of 679 μm versus the 10,600-nm (895 μm) and 2940-nm (837 μm) lasers. The mean depth of penetration (in μm) in relation to the energy used, in millijoules (mJ), varies depending on the laser studied; statistically this was 12.9–28.5 for Er:glass, 3–54.38 for Er:YAG and 6.28–53.66 for CO2. The subjective clinical improvement was a modest 46%. The lack of objective evaluation of clinical improvement and of scar-specific assessment, together with the lack of appropriate in vivo studies, makes a case for combining conventional modalities like subcision, punch excision and needling with fractional lasers to achieve optimal results. PMID:23060702
Apparatus and method for controlling plating uniformity
Hachman Jr., John T.; Kelly, James J.; West, Alan C.
2004-10-12
The use of an insulating shield for improving the current distribution in an electrochemical plating bath is disclosed. Numerical analysis is used to evaluate the influence of shield shape and position on plating uniformity. Simulation results are compared to experimental data for nickel deposition from a nickel sulfamate bath. The shield is shown to improve the average current density at a plating surface.
DOT National Transportation Integrated Search
2015-07-01
This report documents policy considerations for Response, Emergency Staging and Communications, Uniform Management, and Evacuation (R.E.S.C.U.M.E). R.E.S.C.U.M.E. comprises a "bundle" of mobility applications that use existing and new connected vehic...
ERIC Educational Resources Information Center
Finch, Holmes
2011-01-01
Methods of uniform differential item functioning (DIF) detection have been extensively studied in the complete data case. However, less work has been done examining the performance of these methods when missing item responses are present. Research that has been done in this regard appears to indicate that treating missing item responses as…
Automatic control system for uniformly paving iron ore pellets
NASA Astrophysics Data System (ADS)
Wang, Bowen; Qian, Xiaolong
2014-05-01
In the iron and steelmaking industry, iron ore pellet quality is crucial to end-product properties, manufacturing costs and waste emissions. Uniform pellet pavements on the grate machine are a fundamental prerequisite for even heat transfer, and pellet induration in turn influences the performance of the following metallurgical processes. This article presents an automatic control system for uniformly paving green pellets on the grate, via a mechanism consisting mainly of a mechanical linkage, a swinging belt, a conveyance belt and a grate. Mechanism analysis shows that uniform pellet pavements demand that the front end of the swinging belt oscillate at a constant angular velocity. Subsequently, kinetic models are formulated to relate the oscillatory movements of the swinging belt's front end to the rotations of a crank link driven by a motor. On the basis of the kinetic analysis of the pellet feeding mechanism, a cubic B-spline model is built for numerically computing the discrete frequencies to be modulated during a motor rotation. The pellet feeding control system is then presented in terms of its hardware and software components and their functional relationships. Finally, pellet feeding experiments are carried out to demonstrate that the control system is effective, reliable and superior to conventional methods.
Estimates Of The Orbiter RSI Thermal Protection System Thermal Reliability
NASA Technical Reports Server (NTRS)
Kolodziej, P.; Rasky, D. J.
2002-01-01
In support of the Space Shuttle Orbiter post-flight inspection, structure temperatures are recorded at selected positions on the windward, leeward, starboard and port surfaces. Statistical analysis of this flight data and a non-dimensional load interference (NDLI) method are used to estimate the thermal reliability at positions where reusable surface insulation (RSI) is installed. In this analysis, structure temperatures that exceed the design limit define the critical failure mode. At thirty-three positions the RSI thermal reliability is greater than 0.999999 for the missions studied. This is not the overall system-level reliability of the thermal protection system installed on an Orbiter. The results from two Orbiters, OV-102 and OV-105, are in good agreement. The original RSI designs on the OV-102 Orbital Maneuvering System pods, which had low reliability, were significantly improved on OV-105. The NDLI method was also used to estimate thermal reliability from an assessment of TPS uncertainties that was completed shortly before the first Orbiter flight. Results from the flight data analysis and the pre-flight assessment agree at several positions near each other. The NDLI method is also effective for optimizing RSI designs to provide uniform thermal reliability over the acreage surface of reusable launch vehicles.
Design of ceramic components with the NASA/CARES computer program
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.
1990-01-01
The ceramics analysis and reliability evaluation of structures (CARES) computer program is described. The primary function of the code is to calculate the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. CARES uses results from MSC/NASTRAN or ANSYS finite-element analysis programs to evaluate how inherent surface and/or volume type flaws affect component reliability. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effects of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using a least-squares analysis or a maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, 90 percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan 90 percent confidence band values are also provided. Examples are provided to illustrate the various features of CARES.
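A sketch of the parameter-estimation step using maximum likelihood in SciPy. CARES implements its own least-squares and MLE routines, so this is only an illustration of the underlying statistics; the strength values are invented:

```python
# Two-parameter Weibull fit to fracture-strength data by maximum likelihood.
# Fixing the location at zero (floc=0) gives the two-parameter form used
# for brittle-material strength; c is the Weibull modulus m, and scale is
# the characteristic strength.
import numpy as np
from scipy.stats import weibull_min

strength = np.array([312., 341., 355., 367., 378.,
                     390., 402., 421., 433., 460.])   # MPa, hypothetical
m, _, s0 = weibull_min.fit(strength, floc=0)
print(f"Weibull modulus m = {m:.1f}, characteristic strength = {s0:.0f} MPa")
print("P(failure) at 380 MPa:", round(weibull_min.cdf(380.0, m, scale=s0), 3))
```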
In-line monitoring of pellet coating thickness growth by means of visual imaging.
Oman Kadunc, Nika; Sibanc, Rok; Dreu, Rok; Likar, Boštjan; Tomaževič, Dejan
2014-08-15
Coating thickness is the most important attribute of coated pharmaceutical pellets as it directly affects the release profile and stability of the drug. Quality control of the coating process of pharmaceutical pellets is thus of utmost importance for assuring the desired end-product characteristics. A visual imaging technique is presented and examined as a process analytical technology (PAT) tool for noninvasive, continuous, in-line and real-time monitoring of the coating thickness of pharmaceutical pellets during the coating process. Images of pellets were acquired during the coating process through an observation window of a Wurster coating apparatus. Image analysis methods were developed for fast and accurate determination of the pellets' coating thickness during a coating process. The accuracy of the results for pellet coating thickness growth obtained in real time was evaluated through comparison with an off-line reference method, and good agreement was found. Information about the inter-pellet coating uniformity was gained from further statistical analysis of the measured pellet size distributions. Accuracy and performance analysis of the proposed method showed that visual imaging is feasible as a PAT tool for in-line, real-time monitoring of the coating process of pharmaceutical pellets. Copyright © 2014 Elsevier B.V. All rights reserved.
Reload of an industrial cylindrical cobalt source rack
NASA Astrophysics Data System (ADS)
Gharbi, F.; Kadri, O.; Trabelsi, A.
2006-10-01
This work presents a Monte Carlo study of the cylindrical cobalt source rack geometry of the Tunisian gamma irradiation facility, using the GEANT code developed at CERN. The study investigates the question of reloading the source rack. The studied configurations consist of housing four new cobalt pencils, two in the upper and two in the lower cylinder of the source rack. The global dose rate uniformity inside a "dummy" product, for routine and non-routine irradiation and as a function of the product bulk density, was calculated for eight hypothetical configurations. The same calculation was also performed for the original and the ideal (but not practical) configurations. It was shown that the hypothetical cases produced dose uniformity variations, according to product density, that were statistically no different from the original and ideal configurations, and that the reload procedure cannot improve the irradiation quality in facilities using cylindrical cobalt source racks.
Timing in a Variable Interval Procedure: Evidence for a Memory Singularity
Matell, Matthew S.; Kim, Jung S.; Hartshorne, Loryn
2013-01-01
Rats were trained in either a 30-s peak-interval procedure or a 15–45-s variable-interval peak procedure with a uniform distribution (Exp 1) or a ramping probability distribution (Exp 2). Rats in all groups showed peak-shaped response functions centered around 30 s, with the uniform group having an earlier and broader peak response function and the ramping group a later peak function, compared with the single-duration group. The changes in these mean functions, as well as the statistics from single-trial analyses, are better captured by a model of timing in which memory is represented by a single, average delay to reinforcement than by one in which all durations are stored as a distribution, such as the complete memory model of Scalar Expectancy Theory or a simple associative model. PMID:24012783
Facility optimization to improve activation rate distributions during IVNAA.
Ebrahimi Khankook, Atiyeh; Rafat Motavalli, Laleh; Miri Hakimabad, Hashem
2013-05-01
Currently, determination of body composition is the most useful method for distinguishing between certain diseases. The prompt-gamma in vivo neutron activation analysis (IVNAA) facility for non-destructive elemental analysis of the human body is the gold-standard method for this type of analysis. In order to obtain accurate measurements using the IVNAA system, the activation probability in the body must be uniform. This can be difficult to achieve, as body shape and body composition affect the rate of activation. The aim of this study was to determine the optimum pre-moderator material for attaining uniform activation probability with a CV value of about 10%, and to change the role of the collimator to increase the activation rate within the body. Such uniformity was obtained with a thick paraffin pre-moderator; however, because of the increased secondary photon flux received by the detectors, this was not an appropriate choice. Our final calculations indicated that using two paraffin slabs with a thickness of 3 cm as a pre-moderator, in the presence of 2 cm of Bi on the collimator, achieves a satisfactory distribution of the activation rate in the body.
TU-FG-201-05: Varian MPC as a Statistical Process Control Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carver, A; Rowbottom, C
Purpose: Quality assurance in radiotherapy requires the measurement of various machine parameters to ensure they remain within permitted values over time. In Truebeam release 2.0 the Machine Performance Check (MPC) was released, allowing beam output and machine axis movements to be assessed in a single test. We aim to evaluate the Varian Machine Performance Check (MPC) as a tool for Statistical Process Control (SPC). Methods: Varian's MPC tool was used on three Truebeam and one EDGE linac for a period of approximately one year. MPC was commissioned against independent systems. After this period the data were reviewed to determine whether or not the MPC was useful as a process control tool. Individual tests were analysed with Shewhart control charts, using Matlab for the analysis. Principal component analysis was used to determine whether a multivariate model was of any benefit in analysing the data. Results: Control charts were found to be useful to detect beam output changes, worn T-nuts and jaw calibration issues. Upper and lower control limits were defined at the 95% level. Multivariate SPC was performed using principal component analysis. We found little evidence of clustering beyond that which might be naively expected, such as beam uniformity and beam output. Whilst this makes multivariate analysis of little use, it suggests that each test gives independent information. Conclusion: The variety of independent parameters tested in MPC makes it a sensitive tool for routine machine QA. We have determined that using control charts in our QA programme would rapidly detect changes in machine performance. The use of control charts allows large quantities of tests to be performed on all linacs without visual inspection of all results. The use of control limits alerts users when data are inconsistent with previous measurements before they become out of specification. A. Carver has received a speaker's honorarium from Varian.
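A minimal version of the charting step described above, using the 95% limits the authors quote (a textbook Shewhart chart would use 3-sigma limits); the output values are hypothetical:

```python
# Individuals control chart: flag results outside mean +/- 1.96 sigma,
# i.e. the 95% control limits quoted in the abstract.
import numpy as np

def control_limits(x, z=1.96):
    mu, sd = np.mean(x), np.std(x, ddof=1)
    return mu - z * sd, mu + z * sd

output = np.array([99.8, 100.1, 100.0, 99.9, 100.2,
                   100.4, 99.7, 100.0, 101.9])      # daily beam output (%)
lo, hi = control_limits(output[:-1])                # limits from history
print("latest point out of control:", output[-1] < lo or output[-1] > hi)
```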
NASA Technical Reports Server (NTRS)
Djorgovski, S. G.
1994-01-01
We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complex database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects of the SKICAT system, and of some of the scientific results achieved to date. We also developed a user-friendly package for multivariate statistical analysis of small and moderate-size data sets, called STATPROG. The package was tested extensively on a number of real scientific applications and has produced real, published results.
Jerosch-Herold, Christina; Chester, Rachel; Shepstone, Lee; Vincent, Joshua I; MacDermid, Joy C
2018-02-01
The Shoulder Pain and Disability Index (SPADI) has been extensively evaluated for its psychometric properties using classical test theory (CTT). The purpose of this study was to evaluate its structural validity using Rasch model analysis. Responses to the SPADI from 1030 patients referred for physiotherapy with shoulder pain and enrolled in a prospective cohort study were available for Rasch model analysis. Overall fit, individual person and item fit, response format, dependence, unidimensionality, targeting, reliability and differential item functioning (DIF) were examined. The SPADI pain subscale initially demonstrated misfit due to DIF by age and gender. After iterative analysis it showed good fit to the Rasch model with acceptable targeting and unidimensionality (overall fit chi-square statistic 57.2, p = 0.1; mean item fit residual 0.19 (1.5); mean person fit residual 0.44 (1.1); person separation index (PSI) 0.83). The disability subscale, however, showed significant misfit due to uniform DIF even after iterative analyses were used to explore different solutions to the sources of misfit (overall fit chi-square statistic 57.2, p = 0.1; mean item fit residual 0.54 (1.26); mean person fit residual 0.38 (1.0); PSI 0.84). Rasch model analysis of the SPADI has identified some strengths and limitations not previously observed using CTT methods. The SPADI should be treated as two separate subscales. The SPADI is a widely used outcome measure in clinical practice and research; however, the scores derived from it must be interpreted with caution. The pain subscale fits the Rasch model expectations well. The disability subscale does not fit the Rasch model, and in its current format it does not meet the criteria for true interval-level measurement required for use as a primary endpoint in clinical trials. Clinicians should therefore exercise caution when interpreting score changes on the disability subscale and attempt to compare their scores to age- and sex-stratified data.
Fiteni, Frédéric; Anota, Amélie; Westeel, Virginie; Bonnetain, Franck
2016-02-18
Health-related quality of life (HRQoL) is recognized as a component endpoint for cancer therapy approvals. The aim of this review was to evaluate the methodology of HRQoL analysis and reporting in phase III clinical trials of first-line chemotherapy in advanced non-small cell lung cancer (NSCLC). A search of the MEDLINE database identified phase III clinical trials of first-line chemotherapy for advanced NSCLC published between January 2008 and December 2014. Two authors independently extracted information using predefined data abstraction forms. A total of 55 phase III advanced NSCLC trials were identified. HRQoL was declared as an endpoint in 27 studies (49%). Among these 27 studies, the EORTC Quality of Life Questionnaire C30 was used in 13 (48%) and the Functional Assessment of Cancer Therapy-General in 12 (44%). The targeted dimensions of HRQoL, the minimal clinically important difference and the statistical approaches for dealing with missing data were clearly specified in 13 (48.1%), 9 (33.3%) and 5 (18.5%) studies, respectively. The most frequent statistical methods for HRQoL analysis were the mean change from baseline (33.3%), the linear mixed model for repeated measures (22.2%) and time to HRQoL score deterioration (18.5%). For each targeted dimension, the results for each group, the estimated effect size and its precision were clearly reported in 4 studies (14.8%), not clearly reported in 11 studies (40.7%) and not reported at all in 12 studies (44.4%). This review demonstrates the weakness and heterogeneity of the measurement, analysis and reporting of HRQoL in phase III advanced NSCLC trials. Precise and uniform recommendations are needed to compare HRQoL results across publications and to provide understandable messages for patients and clinicians.
Washing and changing uniforms: is guidance being adhered to?
Potter, Yvonne Camilla; Justham, David
To allay public apprehension regarding the risk of nurses' uniforms transmitting healthcare-associated infections (HCAI), national and local guidelines have been issued to control their use, laundering and storage. This paper aims to measure, through a Trust-wide audit, the knowledge of registered nurses (RNs) and healthcare assistants (HCAs) working within a rural NHS foundation Trust and their adherence to the local infection prevention and control (IPC) standard regarding uniforms. Stratified random sampling selected 597 nursing staff, of whom 399 (67%) responded by completing a short questionnaire based on the local standard. Responses were coded and transferred to SPSS (v. 17) for analysis. The audit found that nursing staff generally adhere to the guidelines, changing their uniforms daily and immediately upon accidental soiling, and wearing plastic aprons where indicated. At home, staff normally machine-wash and then iron their uniforms at the hottest setting. Nevertheless, few observe the local direction to place their newly laundered uniforms in protective covers. This paper recommends a re-audit to compare compliance rates with baseline figures, and further research into the reasons for non-compliance to inform interventions for improvement, such as providing relevant staff education and re-introducing appropriate changing facilities.
An integral formulation for wave propagation on weakly non-uniform potential flows
NASA Astrophysics Data System (ADS)
Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel
2016-12-01
An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.
Using entropy to cut complex time series
NASA Astrophysics Data System (ADS)
Mertens, David; Poncela Casasnovas, Julia; Spring, Bonnie; Amaral, L. A. N.
2013-03-01
Using techniques from statistical physics, physicists have modeled and analyzed human phenomena ranging from academic citation rates to disease spreading to vehicular traffic jams. The last decade's explosion of digital information and the growing ubiquity of smartphones have led to a wealth of human self-reported data. This wealth of data comes at a cost, including non-uniform sampling and statistically significant but physically insignificant correlations. In this talk I present our work using entropy to identify stationary sub-sequences of self-reported human weight from a weight management web site. Our entropic approach, inspired by the infomap network community detection algorithm, is far less biased by rare fluctuations than more traditional time series segmentation techniques. Supported by the Howard Hughes Medical Institute.
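The talk abstract does not spell out the algorithm, so the following is only a toy version of entropy-based segmentation: recursively cut the series where a split most reduces the length-weighted Shannon entropy, stopping below a gain threshold. The bins, thresholds and data are hypothetical.

```python
# Toy entropy-based segmentation of a time series into stationary pieces.
# Bin edges are fixed from the full series so segment entropies are
# comparable; cut where splitting most reduces length-weighted entropy.
import numpy as np

def entropy(x, edges):
    counts, _ = np.histogram(x, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

def cut_points(x, edges, min_len=50, min_gain=0.2):
    n = len(x)
    if n < 2 * min_len:
        return []
    best_gain, best_i = 0.0, None
    h_whole = entropy(x, edges)
    for i in range(min_len, n - min_len):
        h_split = (i * entropy(x[:i], edges)
                   + (n - i) * entropy(x[i:], edges)) / n
        if h_whole - h_split > best_gain:
            best_gain, best_i = h_whole - h_split, i
    if best_i is None or best_gain < min_gain:
        return []
    return (cut_points(x[:best_i], edges) + [best_i]
            + [best_i + c for c in cut_points(x[best_i:], edges)])

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(80, 1, 500),    # e.g. self-reported weight
                    rng.normal(75, 1, 500)])   # drops to a new plateau
print(cut_points(x, np.histogram_bin_edges(x, bins=10)))  # one cut near 500
```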
Healthy Worker Effect Phenomenon: Revisited with Emphasis on Statistical Methods – A Review
Chowdhury, Ritam; Shah, Divyang; Payal, Abhishek R.
2017-01-01
Known since 1885 but studied systematically only in the past four decades, the healthy worker effect (HWE) is a special form of selection bias common to occupational cohort studies. The phenomenon has been debated for many years with respect to its impact, its conceptual status (confounding, selection bias, or both), and ways to resolve or account for its effect. The effect is not uniform across age groups, gender, race, and types of occupation, nor is it constant over time. Hence, assessing the HWE and accounting for it in statistical analyses is complicated and requires sophisticated methods. Here, we review the HWE, factors affecting it, and the methods developed so far to deal with it. PMID:29391741
In Situ Real-Time Radiographic Study of Thin Film Formation Inside Rotating Hollow Spheres
Braun, Tom; Walton, Christopher C.; Dawedeit, Christoph; ...
2016-02-03
Hollow spheres with uniform coatings on the inner surface have applications in optical devices, time- or site-controlled drug release, heat storage devices, and target fabrication for inertial confinement fusion experiments. The fabrication of uniform coatings, which is often critical for the application performance, requires precise understanding and control over the coating process and its parameters. We report on in situ real-time radiography experiments that provide critical spatiotemporal information about the distribution of fluids inside hollow spheres during uniaxial rotation. Furthermore, image analysis and computational fluid dynamics simulations were used to explore the effect of liquid viscosity and rotational velocity on the film uniformity. The data were then used to demonstrate the fabrication of uniform sol-gel chemistry derived porous polymer films inside 2 mm inner diameter diamond shells.
In Situ Real-Time Radiographic Study of Thin Film Formation Inside Rotating Hollow Spheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braun, Tom; Walton, Christopher C.; Dawedeit, Christoph
2016-02-03
Hollow spheres with uniform coatings on the inner surface have applications in optical devices, time- or site-controlled drug release, heat storage devices, and target fabrication for inertial confinement fusion experiments. The fabrication of uniform coatings, which is often critical for the application performance, requires precise understanding and control over the coating process and its parameters. Here, we report on in-situ real-time radiography experiments that provide critical spatio-temporal information about the distribution of fluids inside hollow spheres during uniaxial rotation. Image analysis and computational fluid dynamics simulations were used to explore the effect of liquid viscosity and rotational velocity on the film uniformity. The data were then used to demonstrate the fabrication of uniform sol-gel chemistry derived porous polymer films inside 2 mm inner diameter diamond shells.