Sampling large random knots in a confined space
NASA Astrophysics Data System (ADS)
Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.
2007-09-01
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is on the order of O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
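The uniform random polygon model itself takes only a few lines of code: the n vertices are independent uniform points in a convex confining region (here the unit cube), joined cyclically. Below is a minimal sketch with a naive counter for crossings in the xy-projected diagram, offered as an illustration of the model rather than the authors' implementation.

```python
import numpy as np

def uniform_random_polygon(n, rng=None):
    """Uniform random polygon model: n vertices drawn independently and
    uniformly from the unit cube, joined in order (last back to first)."""
    rng = np.random.default_rng(rng)
    return rng.uniform(0.0, 1.0, size=(n, 3))

def projection_crossings(vertices):
    """Count edge crossings in the xy-projection (naive O(n^2) scan),
    i.e. the crossing number of one random diagram of the polygon."""
    p = vertices[:, :2]
    n = len(p)

    def orient(a, b, c):
        return np.sign((b[0] - a[0]) * (c[1] - a[1])
                       - (b[1] - a[1]) * (c[0] - a[0]))

    count = 0
    for i in range(n):
        a, b = p[i], p[(i + 1) % n]
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # edges sharing a vertex cannot cross transversally
            c, d = p[j], p[(j + 1) % n]
            if orient(a, b, c) != orient(a, b, d) and orient(c, d, a) != orient(c, d, b):
                count += 1
    return count

poly = uniform_random_polygon(50, rng=1)
print(projection_crossings(poly))  # grows on the order of n^2 on average
```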
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-01-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
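A sketch of how such a localized random sampling matrix could be built follows; the Gaussian distance fall-off and the parameter names (sigma, gamma) are assumptions, since the abstract only specifies that inclusion probability decays with distance from a randomly chosen center pixel.

```python
import numpy as np

def localized_random_measurements(h, w, m, sigma=2.0, gamma=0.8, rng=None):
    """Build an m x (h*w) sampling matrix: each row picks a random center
    pixel and measures nearby pixels with probability decaying with
    distance. Gaussian fall-off is an illustrative assumption."""
    rng = np.random.default_rng(rng)
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.zeros((m, h * w))
    for i in range(m):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        d2 = (ys - cy) ** 2 + (xs - cx) ** 2
        p = gamma * np.exp(-d2 / (2.0 * sigma ** 2))
        mask = rng.random((h, w)) < p
        mask[cy, cx] = True  # the initially selected pixel is always measured
        A[i] = mask.ravel().astype(float)
    return A

A = localized_random_measurements(32, 32, 256, rng=0)
```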
NASA Astrophysics Data System (ADS)
Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi
2018-04-01
The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. Firstly, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions; the 2D non-equispaced fast Fourier transform (NFFT) is then introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated by using the inversion algorithm of the spectral projected-gradient for ℓ1-norm problems. Then local threshold factors are chosen for the uniform curvelet coefficients at each decomposition scale, and effective curvelet coefficients are obtained for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. Examples with synthetic and real data reveal the effectiveness of the proposed approach in noise attenuation for non-uniformly sampled data, compared with the conventional FDCT method and the wavelet transform.
2015-06-01
of uniform- versus nonuniform-pattern reconstruction, of transform function used, and of minimum randomly distributed measurements needed to...the radiation-frequency pattern's reconstruction using uniform and nonuniform randomly distributed samples even though the pattern error manifests...(Fig. 3: The nonuniform compressive-sensing reconstruction of the radiation
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
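One common way to realize uniform coverage of a genetic space is greedy maximin selection on marker-derived coordinates. The sketch below is a generic stand-in for that idea, not the authors' exact uniform-sampling algorithm.

```python
import numpy as np

def maximin_training_set(X, k, rng=None):
    """Greedy maximin selection: repeatedly add the candidate genotype
    farthest from the current training set, spreading the selected points
    over the genetic space. X: (n_genotypes, n_features) scores, e.g.
    marker principal components."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    chosen = [int(rng.integers(n))]          # arbitrary seed genotype
    d = np.linalg.norm(X - X[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))              # farthest from all chosen so far
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return chosen
```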
Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data
NASA Astrophysics Data System (ADS)
Mobli, Mehdi
2015-07-01
The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time-response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension the delay is incremented at twice the rate of the maximum frequency (Nyquist rate). Achieving high resolution requires acquisition of long data records sampled at the Nyquist rate. This is typically a prohibitive step due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum, compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS are heavily dependent on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility, i.e. disagreement between two experiments in which the same number of random samples is selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve random seed dependent reproducibility of multidimensional spectra generated from NUS data, compared to commonly applied NUS methods. It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
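A sketch of the contrast between plain weighted NUS and a jittered (stratified) variant for one indirect dimension; the exponential weighting and the stratification details are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def weighted_nus(n_grid, n_samples, decay=1.5, rng=None):
    """Plain NUS: draw grid points without replacement from an exponentially
    weighted PDF that mimics a decaying signal envelope."""
    rng = np.random.default_rng(rng)
    p = np.exp(-decay * np.arange(n_grid) / n_grid)
    p /= p.sum()
    return np.sort(rng.choice(n_grid, size=n_samples, replace=False, p=p))

def jittered_nus(n_grid, n_samples, decay=1.5, rng=None):
    """Jittered NUS: split the weighted CDF into equal-probability strata and
    draw one point per stratum, so every seed yields a schedule with nearly
    the same coverage of the envelope (duplicate hits can shrink the set)."""
    rng = np.random.default_rng(rng)
    p = np.exp(-decay * np.arange(n_grid) / n_grid)
    cdf = np.cumsum(p / p.sum())
    targets = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
    idx = np.minimum(np.searchsorted(cdf, targets), n_grid - 1)
    return np.unique(idx)
```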
CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines
In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
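The core walk is easy to state. Below is a minimal sketch of coordinate hit-and-run on a polytope {x : Ax <= b}, started from an interior point; the rounding preprocessing that gives CHRR its name (and its efficiency on anisotropic sets) is omitted here.

```python
import numpy as np

def coordinate_hit_and_run(A, b, x0, n_steps, rng=None):
    """Coordinate hit-and-run on the bounded polytope {x : A x <= b}, started
    at an interior point x0. Each step picks a random coordinate, computes
    the feasible segment along it, and jumps to a uniform point on that
    segment; this preserves the uniform distribution on the polytope."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float).copy()
    slack = b - A @ x                      # positive for interior points
    samples = []
    for _ in range(n_steps):
        i = rng.integers(x.size)
        a_i = A[:, i]
        # Feasible t in x + t*e_i must satisfy a_i * t <= slack, row by row.
        with np.errstate(divide="ignore"):
            ratios = slack / a_i
        t_max = ratios[a_i > 0].min()
        t_min = ratios[a_i < 0].max()
        t = rng.uniform(t_min, t_max)
        x[i] += t
        slack -= t * a_i
        samples.append(x.copy())
    return np.array(samples)

# Usage: uniform samples from the unit cube 0 <= x <= 1 in 3D.
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.concatenate([np.ones(3), np.zeros(3)])
chain = coordinate_hit_and_run(A, b, x0=np.full(3, 0.5), n_steps=10_000, rng=0)
```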
Some limit theorems for ratios of order statistics from uniform random variables.
Xu, Shou-Fang; Miao, Yu
2017-01-01
In this paper, we study the ratios of order statistics based on samples drawn from the uniform distribution and establish some limit properties, such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.
Stable and efficient retrospective 4D-MRI using non-uniformly distributed quasi-random numbers
NASA Astrophysics Data System (ADS)
Breuer, Kathrin; Meyer, Cord B.; Breuer, Felix A.; Richter, Anne; Exner, Florian; Weng, Andreas M.; Ströhle, Serge; Polat, Bülent; Jakob, Peter M.; Sauer, Otto A.; Flentje, Michael; Weick, Stefan
2018-04-01
The purpose of this work is the development of a robust and reliable three-dimensional (3D) Cartesian imaging technique for fast and flexible retrospective 4D abdominal MRI during free breathing. To this end, a non-uniform quasi-random (NU-QR) reordering of the phase-encoding (k_y–k_z) lines was incorporated into 3D Cartesian acquisition. The proposed sampling scheme allocates more phase encoding points near the k-space origin while reducing the sampling density in the outer part of the k-space. Respiratory self-gating in combination with SPIRiT-reconstruction is used for the reconstruction of abdominal data sets in different respiratory phases (4D-MRI). Six volunteers and three patients were examined at 1.5 T during free breathing. Additionally, data sets with conventional two-dimensional (2D) linear and 2D quasi-random phase encoding order were acquired for the volunteers for comparison. A quantitative evaluation of image quality versus scan times (from 70 s to 626 s) for the given sampling schemes was obtained by calculating the normalized mutual information (NMI) for all volunteers. Motion estimation was accomplished by calculating the maximum derivative of a signal intensity profile of a transition (e.g. tumor or diaphragm). The 2D non-uniform quasi-random distribution of phase encoding lines in Cartesian 3D MRI yields more efficient undersampling patterns for parallel imaging compared to conventional uniform quasi-random and linear sampling. Median NMI values of NU-QR sampling are the highest for all scan times. Therefore, within the same scan time 4D imaging could be performed with improved image quality. The proposed method allows for the reconstruction of motion-artifact-reduced 4D data sets with isotropic spatial resolution of 2.1 × 2.1 × 2.1 mm³ in a short scan time, e.g. 10 respiratory phases in only 3 min. Cranio-caudal tumor displacements between 23 and 46 mm could be observed. NU-QR sampling enables stable 4D-MRI with high temporal and spatial resolution within a short scan time for visualization of organ or tumor motion during free breathing. Further studies, e.g. the application of the method for radiotherapy planning, are needed to investigate the clinical applicability and diagnostic value of the approach.
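One way to generate such a non-uniform quasi-random phase-encoding pattern is to warp a 2D low-discrepancy sequence so the density peaks at the k-space origin; the power-law warp below is an assumption for illustration, and the paper's density function may differ.

```python
import numpy as np
from scipy.stats import qmc

def nu_qr_pattern(n_lines, ny, nz, power=2.0, seed=0):
    """Non-uniform quasi-random (ky, kz) phase-encoding pattern: a 2D Halton
    sequence is warped so sampling density is highest near the k-space
    origin and falls off outward."""
    u = 2.0 * qmc.Halton(d=2, seed=seed).random(n_lines) - 1.0  # in [-1, 1)^2
    v = np.sign(u) * np.abs(u) ** power      # pull points toward the center
    ky = np.clip(((v[:, 0] + 1) / 2 * ny).astype(int), 0, ny - 1)
    kz = np.clip(((v[:, 1] + 1) / 2 * nz).astype(int), 0, nz - 1)
    return ky, kz

ky, kz = nu_qr_pattern(4000, ny=256, nz=96)
```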
Toward a Principled Sampling Theory for Quasi-Orders
Ünlü, Ali; Schrepp, Martin
2016-01-01
Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even on item sets of up to 50 items, the new algorithms create close-to-representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
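For contrast with the paper's inductive construction, the naive unbiased baseline is rejection sampling, which is exact but becomes hopeless well before 50 items; a minimal sketch:

```python
import numpy as np

def random_quasi_order_rejection(n_items, rng=None, max_tries=10_000_000):
    """Uniform sampling of quasi-orders by rejection: draw a random reflexive
    relation and accept it only if it is transitive. Unbiased, but the
    acceptance rate collapses exponentially as n_items grows."""
    rng = np.random.default_rng(rng)
    for _ in range(max_tries):
        R = rng.random((n_items, n_items)) < 0.5
        np.fill_diagonal(R, True)                  # enforce reflexivity
        # R is transitive iff the composition R o R adds no new pairs.
        if not np.any((R.astype(int) @ R.astype(int) > 0) & ~R):
            return R
    raise RuntimeError("no transitive relation found within max_tries")

R = random_quasi_order_rejection(5, rng=0)   # feasible only for tiny item sets
```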
CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.
Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh
2017-06-01
In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. Availability and implementation: https://github.com/opencobra/cobratoolbox. Contact: ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online.
A new approach to evaluate gamma-ray measurements
NASA Technical Reports Server (NTRS)
Dejager, O. C.; Swanepoel, J. W. H.; Raubenheimer, B. C.; Vandervalt, D. J.
1985-01-01
Misunderstandings about the term random sample and its implications may easily arise. Conditions under which the phases, obtained from arrival times, do not form a random sample, and the dangers involved, are discussed. Watson's U² test for uniformity is recommended for light curves with duty cycles larger than 10%. Under certain conditions, non-parametric density estimation may be used to determine estimates of the true light curve and its parameters.
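Watson's U² statistic is straightforward to compute from the folded phases; a minimal sketch of the statistic as usually defined, with an illustrative check on uniform versus peaked phases:

```python
import numpy as np

def watson_u2(phases):
    """Watson's U^2 statistic for testing uniformity of phases in [0, 1).
    Rotation-invariant, so suitable for circular data such as pulse phases
    folded from photon arrival times."""
    u = np.sort(np.asarray(phases) % 1.0)
    n = u.size
    i = np.arange(1, n + 1)
    # Cramer-von Mises statistic, then remove the mean-shift term.
    w2 = np.sum((u - (2 * i - 1) / (2 * n)) ** 2) + 1.0 / (12 * n)
    return w2 - n * (u.mean() - 0.5) ** 2

rng = np.random.default_rng(0)
print(watson_u2(rng.random(200)))                        # small for uniform phases
print(watson_u2(0.05 * rng.standard_normal(200) + 0.5))  # large for a peaked light curve
```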
Accelerated 1 H MRSI using randomly undersampled spiral-based k-space trajectories.
Chatnuntawech, Itthi; Gagoski, Borjan; Bilgic, Berkin; Cauley, Stephen F; Setsompop, Kawin; Adalsteinsson, Elfar
2014-07-30
To develop and evaluate the performance of an acquisition and reconstruction method for accelerated MR spectroscopic imaging (MRSI) through undersampling of spiral trajectories. A randomly undersampled spiral acquisition and sensitivity encoding (SENSE) with total variation (TV) regularization, random SENSE+TV, is developed and evaluated on a single-slice numerical phantom, in vivo single-slice MRSI, and in vivo three-dimensional (3D)-MRSI at 3 Tesla. Random SENSE+TV was compared with five alternative methods for accelerated MRSI. For the in vivo single-slice MRSI, random SENSE+TV yields up to 2.7 and 2 times reduction in root-mean-square error (RMSE) of reconstructed N-acetyl aspartate (NAA), creatine, and choline maps, compared with the denoised fully sampled and uniformly undersampled SENSE+TV methods with the same acquisition time, respectively. For the in vivo 3D-MRSI, random SENSE+TV yields up to 1.6 times reduction in RMSE, compared with uniform SENSE+TV. Furthermore, by using random SENSE+TV, we have demonstrated on the in vivo single-slice and 3D-MRSI that acceleration factors of 4.5 and 4 are achievable with the same quality as the fully sampled data, as measured by RMSE of the reconstructed NAA map, respectively. With the same scan time, random SENSE+TV yields lower RMSEs of metabolite maps than the other methods evaluated. Random SENSE+TV achieves up to 4.5-fold acceleration with data quality comparable to the fully sampled acquisition.
Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.
Blutke, Andreas; Wanke, Rüdiger
2018-03-06
In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimens for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.
ERIC Educational Resources Information Center
Juhasz, Stephen; And Others
Table of contents (TOC) practices of some 120 primary journals were analyzed. The journals were randomly selected. The method of randomization is described. The samples were selected from a university library with a holding of approximately 12,000 titles published worldwide. A questionnaire was designed. Purpose was to find uniformity and…
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction to improve efficiency, as in random equivalent sampling (RES). However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of this proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.
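A sketch of the block measurement matrix the abstract describes, built from the Whittaker-Shannon interpolation formula; the rates and run layout here are illustrative assumptions, not the prototype's parameters.

```python
import numpy as np

def block_measurement_matrix(sample_times, n_grid, T):
    """Rows map the Nyquist-grid signal x[k] (equivalent-rate samples at
    t = k*T) to measurements taken at off-grid times, via the
    Whittaker-Shannon formula  s(t) = sum_k x[k] * sinc((t - k*T) / T)."""
    k = np.arange(n_grid)
    t = np.asarray(sample_times, dtype=float)
    # np.sinc is the normalized sinc sin(pi*a)/(pi*a), matching the formula.
    return np.sinc((t[:, None] - k[None, :] * T) / T)

# One acquisition run: a burst of physical-rate samples at a random offset
# relative to the equivalent-rate grid; stacking the blocks from all runs
# gives the combined measurement matrix.
rng = np.random.default_rng(0)
t_run = rng.uniform(0.0, 1.0) + np.arange(8) * 1.0        # physical period 1.0
Phi = block_measurement_matrix(t_run, n_grid=320, T=1.0 / 40)  # 40x denser grid
```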
A Bayesian Approach to the Paleomagnetic Conglomerate Test
NASA Astrophysics Data System (ADS)
Heslop, David; Roberts, Andrew P.
2018-02-01
The conglomerate test has served the paleomagnetic community for over 60 years as a means to detect remagnetizations. The test states that if a suite of clasts within a bed have uniformly random paleomagnetic directions, then the conglomerate cannot have experienced a pervasive event that remagnetized the clasts in the same direction. The current form of the conglomerate test is based on null hypothesis testing, which results in a binary "pass" (uniformly random directions) or "fail" (nonrandom directions) outcome. We have recast the conglomerate test in a Bayesian framework with the aim of providing more information concerning the level of support a given data set provides for a hypothesis of uniformly random paleomagnetic directions. Using this approach, we place the conglomerate test in a fully probabilistic framework that allows for inconclusive results when insufficient information is available to draw firm conclusions concerning the randomness or nonrandomness of directions. With our method, sample sets larger than those typically employed in paleomagnetism may be required to achieve strong support for a hypothesis of random directions. Given the potentially detrimental effect of unrecognized remagnetizations on paleomagnetic reconstructions, it is important to provide a means to draw statistically robust data-driven inferences. Our Bayesian analysis provides a means to do this for the conglomerate test.
Influence of tree spatial pattern and sample plot type and size on inventory
John-Pascall Berrill; Kevin L. O' Hara
2012-01-01
Sampling with different plot types and sizes was simulated using tree location maps and data collected in three even-aged coast redwood (Sequoia sempervirens) stands selected to represent uniform, random, and clumped spatial patterns of tree locations. Fixed-radius circular plots, belt transects, and variable-radius plots were installed by...
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of the neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and a variance of one is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of the uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator or create a buffer.
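A minimal sketch of the extraction pipeline described (spline detrending, standardization to approximate standard-normal draws, then a probability integral transform to uniforms); the smoothing factor and the synthetic count series are assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm

def variates_from_counts(times, counts):
    """Turn a neutron-monitor count series into random variates: fit a
    smoothing spline to the slow trend, subtract it, and standardize the
    residuals to zero mean and unit variance (~standard-normal draws);
    mapping those through the normal CDF then yields ~uniform draws."""
    trend = UnivariateSpline(times, counts, s=len(counts))(times)
    resid = np.asarray(counts, dtype=float) - trend
    z = (resid - resid.mean()) / resid.std()
    u = norm.cdf(z)   # probability integral transform onto [0, 1]
    return z, u

# Synthetic stand-in for one acquisition channel: slow drift plus count noise.
t = np.arange(3600.0)
c = 100 + 5 * np.sin(t / 600) + np.random.default_rng(0).poisson(10, t.size)
z, u = variates_from_counts(t, c)
```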
Wave Propagation inside Random Media
NASA Astrophysics Data System (ADS)
Cheng, Xiaojun
This thesis presents results of studies of wave scattering within and transmission through random and periodic systems. The main focus is on energy profiles inside quasi-1D and 1D random media. The connection between transport and the states of the medium is manifested in the equivalence of the dimensionless conductance, g, and the Thouless number, which is the ratio of the average linewidth and spacing of energy levels. This equivalence and theories regarding the energy profiles inside random media are based on the assumption that the LDOS is uniform throughout the samples. We have conducted microwave measurements of the longitudinal energy profiles within disordered samples contained in a copper tube supporting multiple waveguide channels, with an antenna moving along a slit on the tube. These measurements allow us to determine the local density of states (LDOS) at a location, which is the sum of energy from all incoming channels on both sides. For diffusive samples, the LDOS is uniform and the energy profile decays linearly, as expected. However, for localized samples, we find that the LDOS drops sharply towards the middle of the sample and the energy profile does not follow the result of the local diffusion theory, where the LDOS is assumed to be uniform. We analyze the field spectra into quasi-normal modes and find that the mode linewidth and the number of modes saturate as the sample length increases. Thus the Thouless number saturates while the dimensionless conductance g continues to fall with increasing length, indicating that the modes are localized near the boundaries. This is in contrast to the general belief that g and the Thouless number follow the same scaling behavior. Previous measurements show that single parameter scaling (SPS) still holds in the same sample where the LDOS is suppressed [shi2014microwave]. We explore the extension of SPS to the interior of the sample by analyzing statistics of the logarithm of the energy density ln W(x) and find that <ln W(x)> = -x/l, where l is the transport mean free path. The result does not depend on the sample length, which is counterintuitive yet remarkably simple. More surprisingly, the linear fall-off of the energy profile holds for totally disordered random 1D layered samples in simulations, where the LDOS is uniform, as well as for single-mode random waveguide experiments and 1D nearly periodic samples, where the LDOS is suppressed in the middle of the sample. The generalization of the transmission matrix to the interior of quasi-1D random samples, which is defined as the field matrix, and its eigenvalue statistics are also discussed. The maximum energy deposition at a location is not the intensity of the first transmission eigenchannel but the eigenvalue of the first energy density eigenchannel at that cross section, which can be much greater than the average value. The contrast, which is the ratio of the intensity at the focused point to the background intensity in optimal focusing, is determined by the participation number of the energy density eigenvalues, and its inverse gives the variance of the energy density at that cross section in a single configuration. We have also studied topological states in photonic structures. We have demonstrated robust propagation of electromagnetic waves along reconfigurable pathways within a topological photonic metacrystal.
Since the wave is confined within the domain wall, which is the boundary between two distinct topological insulating systems, we can freely steer the wave by reconstructing the photonic structure. Other topics, such as speckle pattern evolutions and the effects of boundary conditions on the statistics of transmission eigenvalues and energy profiles are also discussed.
Optimized Routing of Intelligent, Mobile Sensors for Dynamic, Data-Driven Sampling
2016-09-27
nonstationary random process that requires nonuniform sampling. The approach incorporates complementary representations of an unknown process: the first...lookup table as follows. A uniform grid is created in the r-domain and mapped to the R-domain, which produces a nonuniform grid of locations in the R...vehicle coverage algorithm that invokes the coordinate transformation from the previous section to generate nonuniform sampling trajectories [54]. We
Sample-based estimation of tree species richness in a wet tropical forest compartment
Steen Magnussen; Raphael Pelissier
2007-01-01
Petersen's capture-recapture ratio estimator and the well-known bootstrap estimator are compared across a range of simulated low-intensity simple random sampling with fixed-area plots of 100 m² in a rich wet tropical forest compartment with 93 tree species in the Western Ghats of India. Petersen's ratio estimator was uniformly superior to the bootstrap...
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
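A sketch of the kind of estimate such RSS samples permit; the weighted centroid below is a generic stand-in for illustration, not the paper's collaborative estimator, and the path-loss numbers are assumptions.

```python
import numpy as np

def weighted_centroid(positions, rss_dbm):
    """Estimate an access-point location from RSS samples taken at known
    robot positions: a weighted centroid with weights from linearized
    signal power, so the strongest samples dominate."""
    w = 10 ** (np.asarray(rss_dbm) / 10.0)      # dBm -> mW
    w /= w.sum()
    return w @ np.asarray(positions)

# Example: non-uniformly scattered samples around a true AP at (3, 4).
rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(40, 2))
d = np.linalg.norm(pos - np.array([3.0, 4.0]), axis=1)
rss = -40 - 20 * np.log10(np.maximum(d, 0.5)) + rng.normal(0, 2, size=40)
print(weighted_centroid(pos, rss))              # roughly recovers (3, 4)
```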
Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to estimation performance comparable to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
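The O(N)-per-sample update at the heart of such a recursive transform can be sketched directly; this shows the idea only, not the authors' exact RFT recursion (which also has to handle windowing/forgetting for quasi-stationary data).

```python
import numpy as np

class RecursiveFourier:
    """Running Fourier coefficients for nonuniformly sampled data: each new
    sample (t, x) updates all N frequency bins in O(N), with no
    interpolation to a uniform time grid."""
    def __init__(self, freqs):
        self.freqs = np.asarray(freqs, dtype=float)
        self.coeffs = np.zeros(self.freqs.size, dtype=complex)
        self.n = 0
    def update(self, t, x):
        self.coeffs += x * np.exp(-2j * np.pi * self.freqs * t)  # O(N) per sample
        self.n += 1
    def psd(self):
        return np.abs(self.coeffs / max(self.n, 1)) ** 2

# Feed irregularly spaced heart-beat samples as they arrive.
rng = np.random.default_rng(0)
beat_times = np.cumsum(rng.uniform(0.6, 1.0, size=512))   # RR intervals in s
values = np.sin(2 * np.pi * 0.1 * beat_times)             # 0.1 Hz oscillation
rft = RecursiveFourier(np.linspace(0.01, 0.5, 64))        # HRV band, in Hz
for t, x in zip(beat_times, values):
    rft.update(t, x)
print(rft.freqs[np.argmax(rft.psd())])                    # ~0.1 Hz
```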
Linking of uniform random polygons in confined spaces
NASA Astrophysics Data System (ADS)
Arsuaga, J.; Blackstone, T.; Diao, Y.; Karadayi, E.; Saito, M.
2007-03-01
In this paper, we study the topological entanglement of uniform random polygons in a confined space. We derive the formula for the mean squared linking number of such polygons. For a fixed simple closed curve in the confined space, we rigorously show that the linking probability between this curve and a uniform random polygon of n vertices is at least 1 − O(1/√n). Our numerical study also indicates that the linking probability between two uniform random polygons (in a confined space), of m and n vertices respectively, is bounded below by 1 − O(1/√(mn)). In particular, the linking probability between two uniform random polygons, both of n vertices, is bounded below by 1 − O(1/n).
Assessing Performance Tradeoffs in Undersea Distributed Sensor Networks
2006-09-01
time. We refer to this process as track-before-detect (see [5] for a description), since the final determination of a target presence is not made until...expressions for probability of successful search and probability of false search for modeling the track-before-detect process. We then describe a numerical...random manner (randomly sampled from a uniform distribution). II. SENSOR NETWORK PERFORMANCE MODELS We model the process of track-before-detect by
CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties
2017-03-01
inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], thereby it is suitable for Kernel function implementation. By varying bias...cost function/constraint variables are generated based on inverse transform on CDF. In Fig. 5, F^-1(u) for uniformly distributed random number u ∈ [0, 1...extracts random samples of x varying with CDF of F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate inverse
Computationally Efficient Resampling of Nonuniform Oversampled SAR Data
2010-05-01
noncoherently. The resampled data is calculated using both a simple average and a weighted average of the demodulated data. The average nonuniform...trials with randomly varying accelerations. The results are shown in Fig. 5 for the noncoherent power difference and Fig. 6 for the coherent power...simple average. Figure 5. Noncoherent difference between SAR imagery generated with uniform sampling and nonuniform sampling that was resampled
Analysis of Uniform Random Numbers Generated by RANDU and URN Using Ten Different Seeds.
The statistical properties of the numbers generated by two uniform random number generators, RANDU and URN, each using ten different seeds, are...The testing is performed on a sequence of 50,000 numbers generated by each uniform random number generator using each of the ten seeds.
Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing
NASA Astrophysics Data System (ADS)
Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei
2018-04-01
We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star} \in \mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star} + \mathbf{z}^{\star} + \mathbf{w}$, where $\mathbf{z}^{\star} \in \mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w} \in \mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix and a bounded columnwise orthonormal matrix (e.g., a partial random circulant matrix). When the UTF is bounded (i.e. $\mu(\mathbf{U}) \sim 1/\sqrt{m}$), we prove that with high probability, one can recover an $s$-sparse signal exactly and stably by $\ell_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers a randomly sub-sampled orthogonal matrix (e.g., a random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse on a certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.
NASA Astrophysics Data System (ADS)
Glazner, Allen F.; Sadler, Peter M.
2016-12-01
The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ˜80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is
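The effect is easy to verify by Monte Carlo under the stated assumption of dates drawn uniformly from the interval; from the order statistics of the uniform distribution, the expected captured fraction is (n − 1)/(n + 1).

```python
import numpy as np

def mean_captured_fraction(n, trials=200_000, rng=None):
    """Average fraction of a uniform interval spanned by the oldest-to-
    youngest range of n dates drawn uniformly at random from it; the exact
    expectation is (n - 1)/(n + 1)."""
    rng = np.random.default_rng(rng)
    x = rng.random((trials, n))
    return float((x.max(axis=1) - x.min(axis=1)).mean())

print(mean_captured_fraction(10))   # ~0.82, i.e. roughly 80% of the interval
```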
Frequency position modulation using multi-spectral projections
NASA Astrophysics Data System (ADS)
Goodman, Joel; Bertoncini, Crystal; Moore, Michael; Nousain, Bryan; Cowart, Gregory
2012-10-01
In this paper we present an approach to harness multi-spectral projections (MSPs) to carefully shape and locate tones in the spectrum, enabling a new and robust modulation in which a signal's discrete frequency support is used to represent symbols. This method, called Frequency Position Modulation (FPM), is an innovative extension to MT-FSK and OFDM and can be non-uniformly spread over many GHz of instantaneous bandwidth (IBW), resulting in a communications system that is difficult to intercept and jam. The FPM symbols are recovered using adaptive projections that in part employ an analog polynomial nonlinearity paired with an analog-to-digital converter (ADC) sampling at a rate that is only a fraction of the IBW of the signal. MSPs also facilitate using commercial off-the-shelf (COTS) ADCs with uniform sampling, standing in sharp contrast to random linear projections by random sampling, which require a full Nyquist-rate sample-and-hold. Our novel communication system concept provides an order of magnitude improvement in processing gain over conventional LPI/LPD communications (e.g., FH- or DS-CDMA) and facilitates the ability to operate in interference-laden environments where conventional compressed sensing receivers would fail. We quantitatively analyze the bit error rate (BER) and processing gain (PG) for a maximum-likelihood-based FPM demodulator and demonstrate its performance in interference-laden conditions.
Monte Carlo Sampling in Fractal Landscapes
NASA Astrophysics Data System (ADS)
Leitão, Jorge C.; Lopes, J. M. Viana Parente; Altmann, Eduardo G.
2013-05-01
We design a random walk to explore fractal landscapes such as those describing chaotic transients in dynamical systems. We show that the random walk moves efficiently only when its step length depends on the height of the landscape via the largest Lyapunov exponent of the chaotic system. We propose a generalization of the Wang-Landau algorithm which constructs not only the density of states (transient time distribution) but also the correct step length. As a result, we obtain a flat-histogram Monte Carlo method which samples fractal landscapes in polynomial time, a dramatic improvement over the exponential scaling of traditional uniform-sampling methods. Our results are not limited by the dimensionality of the landscape and are confirmed numerically in chaotic systems with up to 30 dimensions.
Yount, Kathryn M; VanderEnde, Kristin; Zureick-Brown, Sarah; Minh, Tran Hung; Schuler, Sidney Ruth; Anh, Hoang Tu
2014-06-01
Attitudes about intimate partner violence (IPV) against women are widely surveyed, but attitudes about women's recourse after exposure to IPV are understudied, despite their importance for intervention. Designed through qualitative research and administered in a probability sample of 1,054 married men and women aged 18 to 50 years in My Hao District, Vietnam, the ATT-RECOURSE scale measures men's and women's attitudes about a wife's recourse after exposure to physical IPV. Data were initially collected for nine items. Exploratory factor analysis (EFA) with one random split-half sample (N1 = 526) revealed a one-factor model with significant loadings (0.316-0.686) for six items capturing a wife's silence, informal recourse, and formal recourse. A confirmatory factor analysis (CFA) with the other random split-half sample (N2 = 528) showed adequate fit for the six-item model and significant factor loadings of similar magnitude to the EFA results (0.412-0.669). For the six items retained, men consistently favored recourse more often than did women (52.4%-66.0% of men vs. 41.9%-55.2% of women). Tests for uniform differential item functioning (DIF) by gender revealed one item with significant uniform DIF, and adjusting for this revealed an even larger gap in men's and women's attitudes, with men favoring recourse, on average, more than women. The six-item ATT-RECOURSE scale is reliable across independent samples and exhibits little uniform DIF by gender, supporting its use in surveys of men and women. Further methodological research is discussed. Research is needed in Vietnam about why women report less favorable attitudes than men regarding women's recourse after physical IPV.
Signs of universality in the structure of culture
NASA Astrophysics Data System (ADS)
Băbeanu, Alexandru-Ionuţ; Talman, Leandros; Garlaschelli, Diego
2017-11-01
Understanding the dynamics of opinions, preferences and of culture as a whole requires more use of empirical data than has been done so far. It is clear that an important role in driving this dynamics is played by social influence, which is the essential ingredient of many quantitative models. Such models require that all traits are fixed when specifying the "initial cultural state". Typically, this initial state is randomly generated, from a uniform distribution over the set of possible combinations of traits. However, recent work has shown that the outcome of social influence dynamics strongly depends on the nature of the initial state. If the latter is sampled from empirical data instead of being generated in a uniformly random way, a higher level of cultural diversity is found after long-term dynamics, for the same level of propensity towards collective behavior in the short term. Moreover, if the initial state is randomized by shuffling the empirical traits among people, the level of long-term cultural diversity is in between those obtained for the empirical and uniformly random counterparts. The current study repeats the analysis for multiple empirical data sets, showing that the results are remarkably similar, although the matrix of correlations between cultural variables clearly differs across data sets. This points towards robust structural properties inherent in empirical cultural states, possibly due to universal laws governing the dynamics of culture in the real world. The results also suggest that this dynamics might be characterized by criticality and involve mechanisms beyond social influence.
NASA Astrophysics Data System (ADS)
Robotham, A. S. G.; Howlett, Cullan
2018-06-01
In this short note we publish the analytic quantile function for the Navarro, Frenk & White (NFW) profile. All known published and coded methods for sampling from the 3D NFW PDF use either accept-reject sampling or numeric interpolation (sometimes via a lookup table) to project uniform random samples through the quantile distribution function to produce samples of the radius. This is a common requirement in N-body initial condition (IC), halo occupation distribution (HOD), and semi-analytic modelling (SAM) work for correctly assigning particles or galaxies to positions given an assumed concentration for the NFW profile. Using this analytic description allows for much faster and cleaner code to solve a common numeric problem in modern astronomy. We release R and Python versions of simple code that achieves this sampling, which we note is trivial to reproduce in any modern programming language.
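A sketch of the inverse-transform sampling this enables, using the Lambert W function to invert the NFW enclosed-mass profile μ(x) = ln(1 + x) − x/(1 + x); the expression below is derived independently here, so treat it as an illustration consistent with the note's result rather than the published code.

```python
import numpy as np
from scipy.special import lambertw

def nfw_radius_quantile(p, c):
    """Quantile function of the 3D NFW mass profile truncated at radius R:
    maps p in [0, 1] to r/R for concentration c. Inverting
    p*mu(c) = ln(1+x) - x/(1+x) with u = 1/(1+x) gives
    u = -W0(-exp(-p*mu(c) - 1)) on the principal branch."""
    p = np.asarray(p, dtype=float)
    mu_c = np.log1p(c) - c / (1.0 + c)
    y = p * mu_c
    u = -np.real(lambertw(-np.exp(-y - 1.0)))
    x = 1.0 / u - 1.0          # x = r / r_s = c * (r / R)
    return x / c

# Inverse-transform sampling: uniform draws through the quantile function.
rng = np.random.default_rng(0)
r_over_R = nfw_radius_quantile(rng.random(10_000), c=10.0)
```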
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
NASA Astrophysics Data System (ADS)
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wide-band sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
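A sketch of the underlying idea: with additive random pulse intervals, a tone far above the mean sampling rate can still be identified, for example with a Lomb-Scargle periodogram; all rates and frequencies below are illustrative, not the paper's parameters.

```python
import numpy as np
from scipy.signal import lombscargle

# Additive random sampling: the interval between pulses is itself random,
# so sample times follow t[n+1] = t[n] + tau[n]. A mean rate of ~1 kHz can
# still reveal a sparse 12.3 kHz vibration tone, since random spacing
# suppresses the aliases that uniform sub-Nyquist sampling would produce.
rng = np.random.default_rng(0)
tau = rng.uniform(0.5e-3, 1.5e-3, size=2000)   # random pulse intervals, mean 1 ms
t = np.cumsum(tau)
x = np.sin(2 * np.pi * 12.3e3 * t)             # sparse (single-tone) vibration

freqs = np.linspace(1e3, 20e3, 4000)           # search grid, in Hz
pgram = lombscargle(t, x, 2 * np.pi * freqs)   # lombscargle expects rad/s
print(freqs[np.argmax(pgram)])                 # ~12300 Hz despite ~1 kHz sampling
```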
True Randomness from Big Data.
Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang
2016-09-26
Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
Regularity of random attractors for fractional stochastic reaction-diffusion equations on R^n
NASA Astrophysics Data System (ADS)
Gu, Anhui; Li, Dingshi; Wang, Bixiang; Yang, Han
2018-06-01
We investigate the regularity of random attractors for the non-autonomous non-local fractional stochastic reaction-diffusion equations in H^s(R^n) with s ∈ (0, 1). We prove the existence and uniqueness of the tempered random attractor that is compact in H^s(R^n) and attracts all tempered random subsets of L^2(R^n) with respect to the norm of H^s(R^n). The main difficulty is to show the pullback asymptotic compactness of solutions in H^s(R^n) due to the noncompactness of Sobolev embeddings on unbounded domains and the almost sure nondifferentiability of the sample paths of the Wiener process. We establish such compactness by the ideas of uniform tail-estimates and the spectral decomposition of solutions in bounded domains.
Underestimating extreme events in power-law behavior due to machine-dependent cutoffs
NASA Astrophysics Data System (ADS)
Radicchi, Filippo
2014-11-01
Power-law distributions are typical macroscopic features occurring in almost all complex systems observable in nature. As a result, researchers in quantitative analyses must often generate random synthetic variates obeying power-law distributions. The task is usually performed through standard methods that map uniform random variates into the desired probability space. Whereas all these algorithms are theoretically solid, in this paper we show that they are subject to severe machine-dependent limitations. As a result, two dramatic consequences arise: (i) the sampling in the tail of the distribution is not random but deterministic; (ii) the moments of the sample distribution, which are theoretically expected to diverge as functions of the sample sizes, converge instead to finite values. We provide quantitative indications for the range of distribution parameters that can be safely handled by standard libraries used in computational analyses. Whereas our findings indicate possible reinterpretations of numerical results obtained through flawed sampling methodologies, they also pave the way for the search for a concrete solution to this central issue shared by all quantitative sciences dealing with complexity.
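A minimal sketch (in Python, with assumed names) of the standard inverse-transform sampler the abstract refers to, illustrating the machine-dependent cutoff: because the largest double-precision uniform variate below 1 is 1 − 2⁻⁵³, there is a hard maximum value beyond which the tail can never be sampled.

```python
import numpy as np

def power_law_variates(alpha, x_min, size, rng=None):
    """Inverse-transform sampling of p(x) ~ x^(-alpha) for x >= x_min."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(size)  # uniform variates in [0, 1)
    return x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

# The largest double below 1.0 is 1 - 2**-53, so no variate can ever exceed
# x_min * 2**(53/(alpha-1)) -- a deterministic, machine-dependent cutoff.
alpha, x_min = 2.5, 1.0
print(power_law_variates(alpha, x_min, 5))
print(f"hard cutoff: {x_min * 2.0 ** (53.0 / (alpha - 1.0)):.3e}")  # ~4.3e10
```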
Washing and changing uniforms: is guidance being adhered to?
Potter, Yvonne Camilla; Justham, David
To allay public apprehension regarding the risk of nurses' uniforms transmitting healthcare-associated infections (HCAI), national and local guidelines have been issued to control their use, laundry and storage. This paper aims to measure, through a Trust-wide audit, the knowledge of registered nurses (RNs) and healthcare assistants (HCAs) working within a rural NHS foundation Trust and their adherence to the local infection prevention and control (IPC) standard regarding uniforms. Stratified random sampling selected 597 nursing staff, and 399 responded (67%) by completing a short questionnaire based on the local standard. Responses were coded and transferred to SPSS (v. 17) for analysis. The audit found that nursing staff generally adhere to the guidelines, changing their uniforms daily and immediately upon accidental soiling, and wearing plastic aprons where indicated. At home, staff normally machine-wash and then iron their uniforms at the hottest setting. Nevertheless, few observe the local direction to place their newly-laundered uniforms in protective covers. This paper recommends a re-audit to compare compliance rates with baseline figures, and further research into the reasons for non-compliance in order to inform interventions for improvement, such as providing relevant staff education and re-introducing appropriate changing facilities.
Tablet splitting and weight uniformity of half-tablets of 4 medications in pharmacy practice.
Tahaineh, Linda M; Gharaibeh, Shadi F
2012-08-01
Tablet splitting is a common practice for multiple reasons including cost savings; however, it does not necessarily result in weight-uniform half-tablets. To determine weight uniformity of half-tablets resulting from splitting 4 products available in the Jordanian market and investigate the effect of tablet characteristics on weight uniformity of half-tablets. Ten random tablets each of warfarin 5 mg, digoxin 0.25 mg, phenobarbital 30 mg, and prednisolone 5 mg were weighed and split by 6 PharmD students using a knife. The resulting half-tablets were weighed and evaluated for weight uniformity. Other relevant physical characteristics of the 4 products were measured. The average tablet hardness of the sampled tablets ranged from 40.3 N to 68.9 N. Digoxin, phenobarbital, and prednisolone half-tablets failed the weight uniformity test; however, warfarin half-tablets passed. Digoxin, warfarin, and phenobarbital tablets had a score line, and warfarin tablets had the deepest score line of 0.81 mm. Splitting warfarin tablets produces weight-uniform half-tablets, which may be attributed to the hardness and the presence of a deep score line. Digoxin, phenobarbital, and prednisolone tablet splitting produces highly weight-variable half-tablets. This can be of clinical significance in the case of the narrow therapeutic index medication digoxin.
Estimation of distribution overlap of urn models.
Hampton, Jerrad; Lladser, Manuel E
2012-01-01
A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in n draws from another distribution. We show our estimator of dissimilarity to be a U-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of n. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over n, we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
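For intuition (this is an illustration of the target quantity, not the paper's U-statistic estimator): for known discrete distributions p and q, the dissimilarity probability is exactly Σ_x p_x (1 − q_x)^n, the chance that a single draw from p is missed by n i.i.d. draws from q. A sketch comparing the exact value with a naive Monte Carlo check:

```python
import numpy as np

def dissimilarity_exact(p, q, n):
    """P(one draw from p is absent from n draws from q) = sum_x p_x (1-q_x)^n."""
    return float(np.sum(p * (1.0 - q) ** n))

def dissimilarity_mc(p, q, n, reps=5000, seed=0):
    """Naive Monte Carlo estimate of the same probability."""
    rng = np.random.default_rng(seed)
    k, hits = len(p), 0
    for _ in range(reps):
        x = rng.choice(k, p=p)            # one draw from p
        ys = rng.choice(k, size=n, p=q)   # n draws from q
        hits += x not in ys
    return hits / reps

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.1, 0.1, 0.8])
print(dissimilarity_exact(p, q, 5))  # exact value
print(dissimilarity_mc(p, q, 5))     # Monte Carlo check
```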
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
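A generic sketch of the core correction idea under stated assumptions (plain inverse-probability-weighted resampling; the names `X`, `y`, `incl_prob` are assumed, and this is not the sambia API — the paper's stochastic oversampling variant additionally perturbs duplicated observations before training, e.g. a random forest, on the resampled data):

```python
import numpy as np

def ip_resample(X, y, incl_prob, size=None, seed=None):
    """Draw a bootstrap sample that mimics the nonstratified population:
    each observation is selected with probability proportional to the
    inverse of its (phase-two) inclusion probability."""
    rng = np.random.default_rng(seed)
    w = 1.0 / np.asarray(incl_prob, dtype=float)
    w /= w.sum()
    idx = rng.choice(len(y), size=size or len(y), replace=True, p=w)
    return X[idx], y[idx]

# e.g. cases enriched 5x in phase two have 5x the inclusion probability,
# so controls receive 5x the resampling weight, undoing the enrichment.
```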
Duchêne, Sebastián; Duchêne, David; Holmes, Edward C; Ho, Simon Y W
2015-07-01
Rates and timescales of viral evolution can be estimated using phylogenetic analyses of time-structured molecular sequences. This involves the use of molecular-clock methods, calibrated by the sampling times of the viral sequences. However, the spread of these sampling times is not always sufficient to allow the substitution rate to be estimated accurately. We conducted Bayesian phylogenetic analyses of simulated virus data to evaluate the performance of the date-randomization test, which is sometimes used to investigate whether time-structured data sets have temporal signal. An estimate of the substitution rate passes this test if its mean does not fall within the 95% credible intervals of rate estimates obtained using replicate data sets in which the sampling times have been randomized. We find that the test sometimes fails to detect rate estimates from data with no temporal signal. This error can be minimized by using a more conservative criterion, whereby the 95% credible interval of the estimate with correct sampling times should not overlap with those obtained with randomized sampling times. We also investigated the behavior of the test when the sampling times are not uniformly distributed throughout the tree, which sometimes occurs in empirical data sets. The test performs poorly in these circumstances, such that a modification to the randomization scheme is needed. Finally, we illustrate the behavior of the test in analyses of nucleotide sequences of cereal yellow dwarf virus. Our results validate the use of the date-randomization test and allow us to propose guidelines for interpretation of its results.
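A toy sketch of the conservative criterion described above, assuming the 95% credible intervals have already been computed elsewhere (the function and argument names are hypothetical):

```python
def passes_date_randomization(true_ci, randomized_cis):
    """Conservative criterion: the 95% credible interval from the correctly
    dated analysis must not overlap any interval obtained from replicates
    in which the sampling dates were shuffled across the sequences."""
    lo, hi = true_ci
    return all(hi < r_lo or lo > r_hi for (r_lo, r_hi) in randomized_cis)

# Example: the true-dates CI sits entirely above every randomized CI,
# so the data set shows temporal signal under the conservative test.
print(passes_date_randomization((1.2e-3, 1.8e-3),
                                [(2e-5, 6e-4), (1e-5, 4e-4)]))  # True
```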
Secure uniform random-number extraction via incoherent strategies
NASA Astrophysics Data System (ADS)
Hayashi, Masahito; Zhu, Huangjun
2018-01-01
To guarantee the security of uniform random numbers generated by a quantum random-number generator, we study secure extraction of uniform random numbers when the environment of a given quantum state is controlled by the third party, the eavesdropper. Here we restrict our operations to incoherent strategies that are composed of the measurement on the computational basis and incoherent operations (or incoherence-preserving operations). We show that the maximum secure extraction rate is equal to the relative entropy of coherence. By contrast, the coherence of formation gives the extraction rate when a certain constraint is imposed on the eavesdropper's operations. The condition under which the two extraction rates coincide is then determined. Furthermore, we find that the exponential decreasing rate of the leaked information is characterized by Rényi relative entropies of coherence. These results clarify the power of incoherent strategies in random-number generation, and can be applied to guarantee the quality of random numbers generated by a quantum random-number generator.
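For reference, the relative entropy of coherence that appears here as the maximum secure extraction rate has a standard closed form (the textbook definition, stated with the usual notation rather than anything specific to this paper):

```latex
C_{\mathrm{rel}}(\rho) \;=\; S\!\left(\Delta(\rho)\right) - S(\rho),
\qquad
\Delta(\rho) = \sum_{i} \langle i|\rho|i\rangle\, |i\rangle\langle i|,
\qquad
S(\rho) = -\operatorname{Tr}\,\rho\log\rho,
```

where Δ dephases ρ in the computational (incoherent) basis.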
The coalescent of a sample from a binary branching process.
Lambert, Amaury
2018-04-25
At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y.
The generalization ability of online SVM classification based on Markov sampling.
Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang
2015-03-01
In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of the learning ability of online SVM classification based on Markov sampling on benchmark repository datasets. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the training sample size grows.
Improved high-dimensional prediction with Random Forests by the use of co-data.
Te Beest, Dennis E; Mes, Steven W; Wilting, Saskia M; Brakenhoff, Ruud H; van de Wiel, Mark A
2017-12-28
Prediction in high dimensional settings is difficult due to the large number of variables relative to the sample size. We demonstrate how auxiliary 'co-data' can be used to improve the performance of a Random Forest in such a setting. Co-data are incorporated in the Random Forest by replacing the uniform sampling probabilities that are used to draw candidate variables by co-data moderated sampling probabilities. Co-data are defined here as any type of information that is available on the variables of the primary data, but that does not use its response labels. These moderated sampling probabilities are, inspired by empirical Bayes, learned from the data at hand. We demonstrate the co-data moderated Random Forest (CoRF) with two examples. In the first example we aim to predict the presence of a lymph node metastasis with gene expression data. We demonstrate how a set of external p-values, a gene signature, and the correlation between gene expression and DNA copy number can improve the predictive performance. In the second example we demonstrate how the prediction of cervical (pre-)cancer with methylation data can be improved by including the location of the probe relative to the known CpG islands, the number of CpG sites targeted by a probe, and a set of p-values from a related study. The proposed method is able to utilize auxiliary co-data to improve the performance of a Random Forest.
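A sketch of the key sampling step under stated assumptions (hypothetical function names, not the authors' R implementation): at each node split, the mtry candidate variables are drawn with co-data moderated probabilities instead of the Random Forest's uniform 1/p probabilities.

```python
import numpy as np

def draw_split_candidates(n_vars, mtry, codata_weight, seed=None):
    """Draw `mtry` candidate variables without replacement, with
    probability proportional to a co-data score instead of uniform."""
    rng = np.random.default_rng(seed)
    p = np.asarray(codata_weight, dtype=float)
    p /= p.sum()
    return rng.choice(n_vars, size=mtry, replace=False, p=p)

# e.g. external p-values as co-data: smaller p-value -> larger weight
pvals = np.array([0.001, 0.20, 0.64, 0.03, 0.5])
print(draw_split_candidates(5, 2, -np.log(pvals)))
```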
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.
2017-02-01
This paper proposes an uncertain modelling and computational method to analyze dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of the rigid parts is expressed as uniform random variables, while uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized to Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube Sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
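A sketch of the sampling stage only (dimensions and bounds are assumed for illustration; the multibody solver itself is not reproduced): Latin Hypercube draws are mapped to the two kinds of random variables, uniform geometry parameters and Gaussian series-expansion coefficients.

```python
import numpy as np
from scipy.stats import qmc, norm

n_samples, n_uniform, n_gauss = 200, 3, 4   # assumed dimensions
lhs = qmc.LatinHypercube(d=n_uniform + n_gauss, seed=0)
u01 = lhs.random(n_samples)                 # stratified samples in [0, 1)^d

# geometry tolerances: map to uniform intervals, e.g. +/-0.5 mm about nominal
geom = qmc.scale(u01[:, :n_uniform], [-0.5] * n_uniform, [0.5] * n_uniform)

# material field: map to standard Gaussians (series-expansion coefficients)
xi = norm.ppf(u01[:, n_uniform:])

# Each row (geom[i], xi[i]) would feed one deterministic generalized-alpha
# run; the responses are then fitted with a polynomial chaos expansion.
print(geom.shape, xi.shape)
```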
Flexible sampling large-scale social networks by self-adjustable random walk
NASA Astrophysics Data System (ADS)
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and frequent unavailability of OSN population data, so sampling is often the only feasible solution. How to draw samples that represent the underlying OSNs has remained a formidable task for a number of conceptual and methodological reasons. In particular, most empirically-driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real and complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it on the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a much-needed sampling tool, for the methodological development of large-scale network sampling through comparative evaluation of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.
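Of the four baselines, MHRW has a well-known closed form and is easy to sketch (SARW itself is the paper's contribution and is not reproduced here): a proposed neighbor w of the current node v is accepted with probability min(1, deg(v)/deg(w)), which makes the walk's stationary distribution uniform over nodes rather than biased toward hubs.

```python
import random

def mhrw_sample(neighbors, start, n_steps, seed=0):
    """Metropolis-Hastings random walk over a graph given as a dict
    node -> list of neighbors. Stationary distribution is uniform over
    nodes (a plain random walk is biased toward high-degree nodes)."""
    rng = random.Random(seed)
    v, visited = start, []
    for _ in range(n_steps):
        w = rng.choice(neighbors[v])
        # accept the move with probability min(1, deg(v)/deg(w))
        if rng.random() < min(1.0, len(neighbors[v]) / len(neighbors[w])):
            v = w
        visited.append(v)
    return visited

g = {0: [1, 2, 3], 1: [0], 2: [0, 3], 3: [0, 2]}  # tiny toy graph
print(mhrw_sample(g, start=0, n_steps=10))
```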
Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei
2015-01-01
A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in rows and columns, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes by four kinds of sampling methods. Gray correlation analysis was adopted to make the comprehensive evaluation by comparing with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in rows and columns was infinity, relative accuracy was 99.50-99.89%, and reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method was easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.
Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model
NASA Astrophysics Data System (ADS)
Margarint, Vlad
2018-06-01
We consider Hermitian random band matrices H in d ⩾ 1 dimensions. The matrix elements H_{xy}, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x−y| is less than the band width W, and zero otherwise. We strengthen previous results on the convergence of quantum diffusion in a random band matrix model, from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.
A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes
NASA Astrophysics Data System (ADS)
Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac
2012-11-01
In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. Rather than adopting the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We explain various alternatives for implementing the algorithm and benchmark the theoretical findings with concrete applications to random polytopes. The results obtained with our approach are in very good agreement with the estimates produced by the Hit-and-Run algorithm, which is known to produce uniform sampling.
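Since the abstract benchmarks against Hit-and-Run, here is a minimal sketch of that reference sampler for a polytope {x : Ax ≤ b} (assuming a feasible interior starting point and a bounded polytope; not the authors' message-passing algorithm):

```python
import numpy as np

def hit_and_run(A, b, x0, n_steps, seed=None):
    """Uniform sampling of {x : A x <= b} by Hit-and-Run: pick a random
    direction, then a uniform point on the feasible chord through x."""
    rng = np.random.default_rng(seed)
    x, samples = np.array(x0, dtype=float), []
    for _ in range(n_steps):
        d = rng.normal(size=len(x))
        d /= np.linalg.norm(d)                # uniform random direction
        ad, slack = A @ d, b - A @ x          # chord limits from A(x + t d) <= b
        t_hi = np.min(slack[ad > 0] / ad[ad > 0])
        t_lo = np.max(slack[ad < 0] / ad[ad < 0])
        x = x + rng.uniform(t_lo, t_hi) * d
        samples.append(x.copy())
    return np.array(samples)

# unit square as {x : Ax <= b}, starting at the centre
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.array([1, 0, 1, 0], dtype=float)
print(hit_and_run(A, b, [0.5, 0.5], 3, seed=0))
```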
Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store, and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
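A sketch of the iteration described (variable names assumed; the report's actual code is not available): solve (K0 + ΔK) u = f by the fixed-point iteration u ← K0⁻¹(f − ΔK u), reusing one factorization of the unperturbed stiffness K0 across all perturbed samples.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def perturbed_solve(K0_lu, dK, f, tol=1e-10, max_iter=200):
    """Solve (K0 + dK) u = f using the factorized unperturbed stiffness as
    preconditioner. Converges when the spectral radius of K0^{-1} dK is
    below one, i.e. for small perturbations."""
    u = lu_solve(K0_lu, f)                    # unperturbed solution as start
    for _ in range(max_iter):
        u_new = lu_solve(K0_lu, f - dK @ u)
        if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):
            return u_new
        u = u_new
    return u

K0 = np.array([[4.0, 1.0], [1.0, 3.0]])
K0_lu = lu_factor(K0)                         # factor once, reuse per sample
dK = 0.05 * np.array([[1.0, 0.0], [0.0, -1.0]])  # one sampled perturbation
f = np.array([1.0, 2.0])
print(perturbed_solve(K0_lu, dK, f))          # matches solve(K0 + dK, f)
```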
A fast ergodic algorithm for generating ensembles of equilateral random polygons
NASA Astrophysics Data System (ADS)
Varela, R.; Hinson, K.; Arsuaga, J.; Diao, Y.
2009-03-01
Knotted structures are commonly found in circular DNA and along the backbone of certain proteins. In order to properly estimate properties of these three-dimensional structures it is often necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths (such polygons are called equilateral random polygons). However, finding efficient algorithms that properly sample the space of equilateral random polygons is a difficult problem. Currently there are no proven algorithms that generate equilateral random polygons according to their theoretical distribution. In this paper we propose a method that generates equilateral random polygons in a 'step-wise uniform' way. We prove that this method is ergodic in the sense that any given equilateral random polygon can be generated by this method, and we show that the time needed to generate an equilateral random polygon of length n is linear in n. These two properties make this algorithm a big improvement over the existing generating methods. Detailed numerical comparisons of our algorithm with other widely used algorithms are provided.
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical properties in distribution and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
Spectral and correlation analysis with applications to middle-atmosphere radars
NASA Technical Reports Server (NTRS)
Rastogi, Prabhat K.
1989-01-01
The correlation and spectral analysis methods for uniformly sampled stationary random signals, estimation of their spectral moments, and problems arising due to nonstationarity are reviewed. Some of these methods are already in routine use in atmospheric radar experiments. Other methods, based on the maximum entropy principle and time series models, have been used in analyzing data but are just beginning to receive attention in the analysis of radar signals. These methods are also briefly discussed.
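A sketch of the standard periodogram-based spectral-moment estimates for a uniformly sampled complex signal (a common textbook approach, with assumed names; not specific to this report): total power (0th moment), mean Doppler frequency (1st), and spectral width (2nd central moment).

```python
import numpy as np

def spectral_moments(x, dt):
    """Estimate power, mean frequency, and spectral width of a uniformly
    sampled complex signal from its periodogram."""
    f = np.fft.fftshift(np.fft.fftfreq(len(x), d=dt))
    S = np.fft.fftshift(np.abs(np.fft.fft(x)) ** 2)  # periodogram
    p = S / S.sum()                                  # normalized spectrum
    power = S.sum() * (f[1] - f[0])    # 0th moment (up to FFT normalization)
    f_mean = np.sum(f * p)             # 1st moment: mean Doppler shift
    width = np.sqrt(np.sum((f - f_mean) ** 2 * p))   # 2nd central moment
    return power, f_mean, width

rng = np.random.default_rng(0)
n, dt, f0 = 512, 1e-3, 120.0           # a 120 Hz Doppler line plus noise
t = np.arange(n) * dt
x = np.exp(2j * np.pi * f0 * t) + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))
print(spectral_moments(x, dt))         # mean frequency near 120 Hz
```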
Semantic Importance Sampling for Statistical Model Checking
2015-01-16
SMT calls while maintaining correctness. Finally, we implement SIS in a tool called osmosis and use it to verify a number of stochastic systems. The report surveys related work, presents background definitions and concepts, then SIS and the osmosis tool; in the cube-selection step, a cube c is drawn from C* with uniform probability, since each cube has equal probability.
Scalable randomized benchmarking of non-Clifford gates
NASA Astrophysics Data System (ADS)
Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay
Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly(n)-sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.
Hazard Function Estimation with Cause-of-Death Data Missing at Random.
Wang, Qihua; Dinse, Gregg E; Liu, Chunling
2012-04-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
Bürger, Kai; Krüger, Jens; Westermann, Rüdiger
2011-01-01
In this paper, we present a sample-based approach for surface coloring, which is independent of the original surface resolution and representation. To achieve this, we introduce the Orthogonal Fragment Buffer (OFB)—an extension of the Layered Depth Cube—as a high-resolution view-independent surface representation. The OFB is a data structure that stores surface samples at a nearly uniform distribution over the surface, and it is specifically designed to support efficient random read/write access to these samples. The data access operations have a complexity that is logarithmic in the depth complexity of the surface. Thus, compared to data access operations in tree data structures like octrees, data-dependent memory access patterns are greatly reduced. Due to the particular sampling strategy that is employed to generate an OFB, it also maintains sample coherence, and thus, exhibits very good spatial access locality. Therefore, OFB-based surface coloring performs significantly faster than sample-based approaches using tree structures. In addition, since in an OFB, the surface samples are internally stored in uniform 2D grids, OFB-based surface coloring can efficiently be realized on the GPU to enable interactive coloring of high-resolution surfaces. On the OFB, we introduce novel algorithms for color painting using volumetric and surface-aligned brushes, and we present new approaches for particle-based color advection along surfaces in real time. Due to the intermediate surface representation we choose, our method can be used to color polygonal surfaces as well as any other type of surface that can be sampled. PMID:20616392
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
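A serial, 2D Euclidean illustration of the priority trick described above (the paper's method is parallel and intrinsic to surfaces; this sketch only shows why position-independent random priorities leave the distribution unbiased): candidates are accepted in priority order, and a candidate is kept only if it conflicts with no already-kept sample.

```python
import numpy as np

def poisson_disk_by_priority(candidates, r, seed=None):
    """Accept candidates in order of a random, position-independent
    priority; keep a candidate if no kept sample lies within distance r.
    Serially this equals dart throwing; the priorities are what let a
    parallel version resolve conflicts locally and deterministically."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(candidates))   # random unique priorities
    kept = []
    for i in order:
        p = candidates[i]
        if all(np.linalg.norm(p - q) >= r for q in kept):
            kept.append(p)
    return np.array(kept)

pts = np.random.default_rng(1).random((2000, 2))   # dense candidate pool
print(len(poisson_disk_by_priority(pts, r=0.05, seed=2)))
```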
Yang, Yan; Wu, Xi; Li, Qiang; Jiao, Shu-fang; Li, Xun; Li, Xin-jian; Zhu, Guo-ping; Du, Lin; Zhao, Jian-hua; Jiang, Yuan; Feng, Guo-ze
2009-04-01
To assess the prevalence of tobacco advertising and promotion and related factors in six cities in China, 4815 adults (aged 18 years and above), selected from Beijing, Shanghai, Shenyang, Changsha, Guangzhou and Yinchuan through probability proportionate sampling and simple random sampling, were investigated through questionnaires. The most commonly reported channels through which smokers noticed tobacco advertisements were billboards (35.6%) and television (34.4%). The most commonly reported tobacco promotional activities noticed by smokers were free gifts when buying cigarettes (23.1%) and free samples of cigarettes (13.9%). Smokers in Changsha were more likely to report noticing tobacco advertisements on billboards (chi2 = 562.474, P < 0.001) and on television (chi2 = 265.570, P < 0.001), and were also more likely to notice tobacco-related news and games (chi2 = 58.314, P < 0.001). A logistic regression analysis showed that living standard and education level were related to awareness of tobacco advertising and promotion. Tobacco advertising and promotion were ubiquitous in these Chinese cities, but tobacco-control laws and regulations were not uniformly enforced across cities. It is necessary to improve and unify the relevant laws and regulations.
ERIC Educational Resources Information Center
Bhattacharyya, Pratip; Chakrabarti, Bikas K.
2008-01-01
We study different ways of determining the mean distance r_n between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
NASA Technical Reports Server (NTRS)
Kaljurand, M.; Valentin, J. R.; Shao, M.
1996-01-01
Two alternative input sequences are commonly employed in correlation chromatography (CC): sequences derived according to the feedback shift register algorithm (i.e., pseudo-random binary sequences (PRBS)) and sequences derived using uniform random binary sequences (URBS). These two sequences are compared. By applying the "cleaning" data processing technique to the correlograms that result from these sequences, we show that when the PRBS is used, the S/N of the correlogram is much higher than that obtained using URBS.
A cross-sectional investigation of the quality of selected medicines in Cambodia in 2010
2014-01-01
Background Access to good-quality medicines in many countries is largely hindered by the rampant circulation of spurious/falsely labeled/falsified/counterfeit (SFFC) and substandard medicines. In 2006, the Ministry of Health of Cambodia, in collaboration with Kanazawa University, Japan, initiated a project to combat SFFC medicines. Methods To assess the quality of medicines and prevalence of SFFC medicines among selected products, a cross-sectional survey was carried out in Cambodia. Cefixime, omeprazole, co-trimoxazole, clarithromycin, and sildenafil were selected as candidate medicines. These medicines were purchased from private community drug outlets in the capital, Phnom Penh, and Svay Rieng and Kandal provinces through a stratified random sampling scheme in July 2010. Results In total, 325 medicine samples were collected from 111 drug outlets. Non-licensed outlets were more commonly encountered in rural than in urban areas (p < 0.01). Of all the samples, 93.5% were registered and 80% were foreign products. Samples without registration numbers were found more frequently among foreign-manufactured products than in domestic ones (p < 0.01). According to pharmacopeial analytical results, 14.5%, 4.6%, and 24.6% of the samples were unacceptable in quantity, content uniformity, and dissolution test, respectively. All the ultimately unacceptable samples in the content uniformity tests were of foreign origin. Following authenticity investigations conducted with the respective manufacturers and medicine regulatory authorities, an unregistered product of cefixime collected from a pharmacy was confirmed as an SFFC medicine. However, the sample was acceptable in quantity, content uniformity, and dissolution test. Conclusions The results of this survey indicate that medicine counterfeiting is not limited to essential medicines in Cambodia: newer-generation medicines are also targeted. Concerted efforts by both domestic and foreign manufacturers, wholesalers, retailers, and regulatory authorities should help improve the quality of medicines. PMID:24593851
Modeling of chromosome intermingling by partially overlapping uniform random polygons.
Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J
2011-03-01
During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. In order to partially quantify this concept we here investigate the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and we model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the probability that a uniform random polygon of length n partially overlapping a fixed polygon forms a link with it is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter ε ∈ [0, 1], where ε = 0 indicates no overlapping and ε = 1 indicates total overlapping. We propose that this dependence relation may be modeled as f(ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when ε is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that for the amount of overlapping observed experimentally the URPs have a non-zero probability of forming an unsplittable link.
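The uniform random polygon model itself is simple to simulate (a minimal sketch; the paper's overlap parametrization and link detection are not reproduced): n vertices drawn i.i.d. uniformly in the unit cube, joined in the order generated and closed back to the start.

```python
import numpy as np

def uniform_random_polygon(n, dim=3, seed=None):
    """A uniform random polygon: n vertices i.i.d. uniform in the unit
    cube, joined in the order generated, with a closing edge at the end."""
    rng = np.random.default_rng(seed)
    verts = rng.random((n, dim))
    edges = list(zip(verts, np.roll(verts, -1, axis=0)))  # includes closure
    return verts, edges

verts, edges = uniform_random_polygon(50, seed=0)
print(len(edges))   # 50 edges; the last one closes the polygon
```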
Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun
2015-01-01
High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damages to the blade. Blade tip-timing methods (BTT) have become a promising way to monitor blade vibrations. However, synchronous vibrations are unsuitably monitored by uniform BTT sampling. Therefore, non-equally mounted probes have been used, which will result in the non-uniformity of the sampling signal. Since under-sampling is an intrinsic drawback of BTT methods, how to analyze non-uniformly under-sampled BTT signals is a big challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. Firstly, a mathematical model of a non-uniform BTT sampling process is built. It can be treated as the sum of certain uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Secondly, simultaneous equations of all interpolating functions in each sub-band are built and corresponding solutions are ultimately derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. In the end, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be particularly reconstructed by non-uniform BTT data acquired from only two probes. PMID:25621612
Local self-uniformity in photonic networks.
Sellers, Steven R; Man, Weining; Sahba, Shervin; Florescu, Marian
2017-02-17
The interaction of a material with light is intimately related to its wavelength-scale structure. Simple connections between structure and optical response empower us with essential intuition to engineer complex optical functionalities. Here we develop local self-uniformity (LSU) as a measure of a random network's internal structural similarity, ranking networks on a continuous scale from crystalline, through glassy intermediate states, to chaotic configurations. We demonstrate that complete photonic bandgap structures possess substantial LSU and validate LSU's importance in gap formation through design of amorphous gyroid structures. Amorphous gyroid samples are fabricated via three-dimensional ceramic printing and the bandgaps experimentally verified. We explore also the wing-scale structuring in the butterfly Pseudolycaena marsyas and show that it possesses substantial amorphous gyroid character, demonstrating the subtle order achieved by evolutionary optimization and the possibility of an amorphous gyroid's self-assembly.
Random isotropic one-dimensional XY-model
NASA Astrophysics Data System (ADS)
Gonçalves, L. L.; Vieira, A. P.
1998-01-01
The 1D isotropic s = ½ XY-model (N sites), with random exchange interaction in a transverse random field, is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field is seen to vary from Γ_A to Γ_B, where Γ_A (Γ_B) is the value of the saturation field for the pure case with exchange constant equal to J_A (J_B).
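A sketch of the free-fermion reduction under stated assumptions (open chain, simplified sign conventions; since the isotropic XY model is particle-conserving after Jordan-Wigner, the single-particle matrix has the random fields on the diagonal and the random couplings on the off-diagonals, and filling all negative-energy modes gives the T = 0 transverse magnetization):

```python
import numpy as np

def xy_chain_magnetization(J, Gamma):
    """Isotropic XY chain via Jordan-Wigner: diagonalize the N x N
    single-particle matrix (-Gamma_i on the diagonal, -J_i/2 on the
    off-diagonals); at T = 0 all negative-energy modes are filled and
    m^z = filled/N - 1/2."""
    N = len(Gamma)
    M = np.diag(-np.asarray(Gamma, dtype=float))
    off = -0.5 * np.asarray(J, dtype=float)   # J[i] couples sites i, i+1
    M += np.diag(off, 1) + np.diag(off, -1)
    energies = np.linalg.eigvalsh(M)
    return np.sum(energies < 0) / N - 0.5

rng = np.random.default_rng(0)
N = 1000
J = rng.choice([0.5, 1.5], size=N - 1)        # bimodal random exchange
Gamma = rng.choice([0.2, 1.0], size=N)        # bimodal random field
print(xy_chain_magnetization(J, Gamma))       # one disorder sample
```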
Asymptotic laws for random knot diagrams
NASA Astrophysics Data System (ADS)
Chapman, Harrison
2017-06-01
We study random knotting by considering knot and link diagrams as decorated, (rooted) topological maps on spheres and pulling them uniformly from among sets of a given number of vertices n, as first established in recent work with Cantarella and Mastin. The knot diagram model is an exciting new model which captures both the random geometry of space curve models of knotting as well as the ease of computing invariants from diagrams. We prove that unknot diagrams are asymptotically exponentially rare, an analogue of Sumners and Whittington’s landmark result for self-avoiding polygons. Our proof uses the same key idea: we first show that knot diagrams obey a pattern theorem, which describes their fractal structure. We examine how quickly this behavior occurs in practice. As a consequence, almost all diagrams are asymmetric, simplifying sampling from this model. We conclude with experimental data on knotting in this model. This model of random knotting is similar to those studied by Diao et al, and Dunfield et al.
Emergence of an optimal search strategy from a simple random walk
Sakiyama, Tomoko; Gunji, Yukio-Pegio
2013-01-01
In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths. PMID:23804445
Spatial pattern of Baccharis platypoda shrub as determined by sex and life stages
NASA Astrophysics Data System (ADS)
Fonseca, Darliana da Costa; de Oliveira, Marcio Leles Romarco; Pereira, Israel Marinho; Gonzaga, Anne Priscila Dias; de Moura, Cristiane Coelho; Machado, Evandro Luiz Mendonça
2017-11-01
Spatial patterns of dioecious species can be determined by their nutritional requirements and intraspecific competition, apart from being a response to environmental heterogeneity. The aim of the study was to evaluate the spatial pattern of populations of a dioecious shrub with respect to the sex and reproductive stage of individuals. Sampling was carried out in three areas located in the meridional portion of Serra do Espinhaço, wherein individuals of the studied species were mapped. The spatial pattern was determined through O-ring analysis and Ripley's K-function, and the distribution of individuals' frequencies was verified through the χ² test. Populations in two areas showed an aggregate spatial pattern tending towards random or uniform depending on the observed scale. Male and female adults presented an aggregate pattern at smaller scales, while random and uniform patterns were verified above 20 m for individuals of both sexes in areas A2 and A3. Young individuals presented an aggregate pattern in all areas and spatial independence in relation to adult individuals, especially female plants. The interactions between individuals of both genders presented spatial independence with respect to spatial distribution. Baccharis platypoda showed characteristics in accordance with the spatial distribution of savannic and dioecious species, whereas the population was aggregated tending towards random at greater spatial scales. Young individuals showed an aggregated pattern at different scales compared to adults, without positive association between them. Female and male adult individuals presented similar characteristics, confirming that adult individuals at greater scales are randomly distributed despite their distinct preferences for environments with moisture variation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Lin, E-mail: godyalin@163.com; Singh, Uttam, E-mail: uttamsingh@hri.res.in; Pati, Arun K., E-mail: akpati@hri.res.in
Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
NASA Astrophysics Data System (ADS)
Amirnasr, Elham
It is widely recognized that nonwoven basis weight non-uniformity affects various properties of nonwovens, yet few studies can be found on this topic. The development of uniformity definitions and measurement methods, and the study of their impact on web properties such as filtration behavior and air permeability, would be beneficial both in industrial applications and in academia: they could serve as quality control tools and provide insights about nonwoven behaviors that cannot be explained by average values alone. We therefore pursue the development of an optical analytical tool for quantifying nonwoven web basis weight uniformity. The quadrant method and clustering analysis were utilized in an image analysis scheme to help define "uniformity" and its spatial variation. Implementing the quadrant method in an image analysis system allows the establishment of a uniformity index that quantifies the degree of uniformity. Clustering analysis was also modified and verified using uniform and random simulated images with known parameters; the number of clusters and cluster properties such as size, membership, and density were determined. We then used this new measurement method to evaluate the uniformity of nonwovens produced by different processes and investigated the impact of uniformity on filtration and permeability. The results show that the uniformity index computed from the quadrant method spans a useful range of nonwoven web non-uniformity. Clustering analysis was also applied to reference nonwovens of known visual uniformity; of the cluster properties, cluster size is the most promising uniformity parameter, as non-uniform nonwovens yielded larger cluster sizes than uniform ones. We attempted to relate web properties to the uniformity index as a web characteristic by measuring the filtration properties, air permeability, solidity, and uniformity index of meltblown and spunbond samples. The filtration tests show some deviation between theoretical and experimental filtration efficiency when different types of fiber diameter are considered; this deviation can arise from variation in basis weight non-uniformity, so an appropriate theory is required to predict the variation of filtration efficiency with respect to the non-uniformity of nonwoven filter media. The air permeability tests showed a relationship between the uniformity index determined by the quadrant method and the measured properties: air permeability decreases as the uniformity index of the nonwoven web increases.
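A sketch of one common quadrat-count statistic in the spirit of the quadrant method (the thesis's exact uniformity index may be defined differently): partition a basis-weight image into q × q quadrats and compare the dispersion of quadrat totals with the Poisson expectation, so that var/mean ≈ 1 indicates random, < 1 more uniform, and > 1 clustered basis weight.

```python
import numpy as np

def quadrat_dispersion_index(image, q=8):
    """Partition an intensity image into q x q quadrats and return the
    index of dispersion (variance/mean) of the quadrat totals:
    ~1 random, <1 more uniform, >1 clustered."""
    h, w = image.shape
    img = image[: h - h % q, : w - w % q]          # trim to a multiple of q
    blocks = img.reshape(q, img.shape[0] // q, q, img.shape[1] // q)
    totals = blocks.sum(axis=(1, 3))               # one total per quadrat
    return totals.var() / totals.mean()

rng = np.random.default_rng(0)
uniform_web = np.full((256, 256), 1.0)
random_web = rng.poisson(1.0, (256, 256)).astype(float)
print(quadrat_dispersion_index(uniform_web))   # 0.0 -> perfectly uniform
print(quadrat_dispersion_index(random_web))    # ~1  -> Poisson-random
```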
NASA Astrophysics Data System (ADS)
Ordóñez Cabrera, Manuel; Volodin, Andrei I.
2005-05-01
From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
Expert Assessment of Stigmergy: A Report for the Department of National Defence
2005-10-01
pheromone table may be reduced by implementing a clustering scheme. Termite can take advantage of the wireless broadcast medium, since it is possible for...comparing it with any other routing scheme. The Termite scheme [RW] differs from the source routing [ITT] by applying pheromone trails or random walks...rather than uniform or probabilistic ones. Random walk ants differ from uniform ants since they follow pheromone trails, if any. Termite [RW] also
Li, Jianghong; Valente, Thomas W; Shin, Hee-Sung; Weeks, Margaret; Zelenev, Alexei; Moothi, Gayatri; Mosher, Heather; Heimer, Robert; Robles, Eduardo; Palmer, Greg; Obidoa, Chinekwu
2017-06-28
Intensive sociometric network data were collected from a typical respondent-driven sample (RDS) of 528 people who inject drugs residing in Hartford, Connecticut in 2012-2013. This rich dataset enabled us to analyze a large number of unobserved network nodes and ties for the purpose of assessing common assumptions underlying RDS estimators. Results show that several assumptions central to RDS estimators, such as random selection, enrollment probability proportional to degree, and recruitment occurring over recruiters' network ties, were violated. These problems stem from an overly simplistic conceptualization of peer recruitment processes and dynamics. We found nearly half of participants were recruited via coupon redistribution on the street. Non-uniform patterns occurred in multiple recruitment stages related to both recruiter behavior (choosing and reaching alters, passing coupons, etc.) and recruit behavior (accepting/rejecting coupons, failing to enter the study, passing coupons to others). Some factors associated with these patterns were also associated with HIV risk.
Uniform field loop-gap resonator and rectangular TEU02 for aqueous sample EPR at 94 GHz
NASA Astrophysics Data System (ADS)
Sidabras, Jason W.; Sarna, Tadeusz; Mett, Richard R.; Hyde, James S.
2017-09-01
In this work we present the design and implementation of two uniform-field resonators: a seven-loop-six-gap loop-gap resonator (LGR) and a rectangular TEU02 cavity resonator. Each resonator has uniform-field-producing end-sections. These resonators have been designed for electron paramagnetic resonance (EPR) of aqueous samples at 94 GHz. The LGR geometry employs low-loss Rexolite end-sections to improve the field homogeneity over a 3 mm sample region-of-interest from a near-cosine distribution to 90% uniform. The LGR was designed to accommodate large degassable polytetrafluoroethylene (PTFE) tubes (0.81 mm O.D.; 0.25 mm I.D.) for aqueous samples. Additionally, field modulation slots are designed for uniform 100 kHz field modulation incident at the sample. Experiments using a point sample of lithium phthalocyanine (LiPC) were performed to measure the uniformity of both the microwave magnetic field and the 100 kHz field modulation, and to confirm simulations. The rectangular TEU02 cavity resonator employs over-sized end-sections with sample shielding to provide an 87% uniform field for a 0.1 × 2 × 6 mm3 sample geometry. An evanescent slotted window was designed for light access to irradiate 90% of the sample volume. A novel dual-slot iris was used to minimize microwave magnetic field perturbations and maintain cross-sectional uniformity. Practical EPR experiments applying light-irradiated rose bengal (4,5,6,7-tetrachloro-2′,4′,5′,7′-tetraiodofluorescein) were performed in the TEU02 cavity. The implementation of these geometries provides practical designs for uniform-field resonators and continues resonator advancements toward quantitative EPR spectroscopy.
Carrell, Douglas T; Cartmill, Deborah; Jones, Kirtly P; Hatasaka, Harry H; Peterson, C Matthew
2002-07-01
To evaluate variability in donor semen quality between seven commercial donor sperm banks, within sperm banks, and between intracervical insemination and intrauterine insemination. Prospective, randomized, blind evaluation of commercially available donor semen samples. An academic andrology laboratory. Seventy-five cryopreserved donor semen samples were evaluated. Samples were coded, then blindly evaluated for semen quality. Standard semen quality parameters, including concentration, motility parameters, World Health Organization criteria morphology, and strict criteria morphology. Significant differences were observed between donor semen banks for most semen quality parameters analyzed in intracervical insemination samples. In general, the greatest variability observed between banks was in percentage progressive sperm motility (range, 8.8 +/- 5.8 to 42.4 +/- 5.5) and normal sperm morphology (strict criteria; range, 10.1 +/- 3.3 to 26.6 +/- 4.7). Coefficients of variation within sperm banks were generally high. These data demonstrate the variability of donor semen quality provided by commercial sperm banks, both between banks and within a given bank. No relationship was observed between the size or type of sperm bank and the degree of variability. The data demonstrate the lack of uniformity in the criteria used to screen potential semen donors and emphasize the need for more stringent screening criteria and strict quality control in processing samples.
Computer simulation of random variables and vectors with arbitrary probability distribution laws
NASA Technical Reports Server (NTRS)
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
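A minimal Python sketch of this kind of construction for an assumed toy joint law: the first coordinate is drawn by inverting its marginal CDF and the second by inverting the conditional CDF given the first, each from an independent uniform. The chosen distributions are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_joint(n):
    """Sample (X, Y) with X ~ Exp(1) and Y | X=x ~ Exp(rate=x),
    using only uniform variates via chained inverse CDFs."""
    u1 = rng.uniform(size=n)
    u2 = rng.uniform(size=n)
    x = -np.log1p(-u1)        # inverse CDF of Exp(1):  F(x) = 1 - e^{-x}
    y = -np.log1p(-u2) / x    # inverse CDF of Exp(x):  F(y) = 1 - e^{-x y}
    return x, y

x, y = sample_joint(100_000)
print(x.mean())   # close to 1.0, the mean of Exp(1)
```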
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
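For flavor, a hedged Python sketch of the same uniform-to-variate transformations for three of the seven distributions RANVAR covers (exponential, normal via Box-Muller, and triangular); the parameter values are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = rng.uniform(size=(4, n))          # raw uniforms on (0, 1)

exponential = -np.log(u[0]) / 2.0     # rate-2 exponential via inverse CDF
normal = np.sqrt(-2 * np.log(u[1])) * np.cos(2 * np.pi * u[2])  # Box-Muller

a, c, b = 0.0, 0.3, 1.0               # triangular(min, mode, max)
t = u[3]
fc = (c - a) / (b - a)                # CDF value at the mode
triangular = np.where(t < fc,
                      a + np.sqrt(t * (b - a) * (c - a)),
                      b - np.sqrt((1 - t) * (b - a) * (b - c)))

print(exponential.mean(), normal.std(), triangular.mean())  # ~0.5, ~1.0, ~0.433
```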
NASA Astrophysics Data System (ADS)
Chuang, Kai-Chi; Chung, Hao-Tung; Chu, Chi-Yan; Luo, Jun-Dao; Li, Wei-Shuo; Li, Yi-Shao; Cheng, Huang-Chung
2018-06-01
An AlOx layer was deposited on HfOx, and the bilayered dielectric films were found to confine the formation locations of conductive filaments (CFs) during the forming process, thereby improving device-to-device uniformity. In addition, a Ti interposing layer was adopted to facilitate the formation of oxygen vacancies. As a result, the resistive random access memory (RRAM) device with TiN/Ti/AlOx (1 nm)/HfOx (6 nm)/TiN stack layers demonstrated excellent device-to-device uniformity, albeit with slightly larger resistive switching voltages than the device with TiN/Ti/HfOx (6 nm)/TiN stack layers: a forming voltage (V_Forming) of 2.08 V, a set voltage (V_Set) of 1.96 V, and a reset voltage (V_Reset) of -1.02 V. However, the device with a thicker, 2-nm AlOx layer showed worse uniformity than the 1-nm one. This was attributed to the increased oxygen atomic percentage in the bilayered dielectric films of the 2-nm device: with a higher oxygen content there are fewer oxygen vacancies available to form CFs, so the random growth of CFs becomes severe and the device-to-device uniformity degrades.
The effect of precursor types on the magnetic properties of Y-type hexa-ferrite composite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Chin Mo; Na, Eunhye; Kim, Ingyu
2015-05-07
With a magnetic composite including uniform magnetic particles, we expect to realize good high-frequency soft magnetic properties. We produced needle-like goethite (α-FeOOH) nanoparticles with nearly uniform diameter and length of 20 and 500 nm, respectively. Zn-doped Y-type hexa-ferrite samples were prepared by the solid state reaction method using the uniform goethite and the non-uniform hematite (Fe2O3) with size of <1 μm, respectively. The micrographs observed by scanning electron microscopy show that more uniform hexagonal plates are observed in the ZYG-sample (Zn-doped Y-type hexa-ferrite prepared with uniform goethite) than in the ZYH-sample (Zn-doped Y-type hexa-ferrite prepared with non-uniform hematite). The permeability (μ′) and loss tangent (δ) at 2 GHz are 2.31 and 0.07 in the ZYG-sample and 2.0 and 0.07 in the ZYH-sample, respectively. We observe that permeability and loss tangent are strongly related to particle size and uniformity, based on nucleation, growth, and two magnetizing mechanisms: spin rotation and domain wall motion. The complex permeability spectra can also be numerically separated into spin rotational and domain wall resonance components.
Response of moderately thick laminated cross-ply composite shells subjected to random excitation
NASA Technical Reports Server (NTRS)
Elishakoff, Isaak; Cederbaum, Gabriel; Librescu, Liviu
1989-01-01
This study deals with the dynamic response of transverse shear deformable laminated shells subjected to random excitation. The analysis encompasses the following problems: (1) the dynamic response of circular cylindrical shells of finite length excited by an axisymmetric uniform ring loading, stationary in time, and (2) the response of spherical and cylindrical panels subjected to stationary random loadings with uniform spatial distribution. The associated equations governing the structural theory of shells are derived upon discarding the classical Love-Kirchhoff (L-K) assumptions. In this sense, the theory is formulated in the framework of the first-order transverse shear deformation theory (FSDT).
Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod
Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.
2008-01-01
To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether the chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining the precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as the sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. © 2008 The Society of Population Ecology and Springer.
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803
Villegas, Fernanda; Tilly, Nina; Bäckström, Gloria; Ahnesjö, Anders
2014-09-21
Analysing the pattern of energy depositions may help elucidate differences in the severity of radiation-induced DNA strand breakage for different radiation qualities. It is often claimed that energy deposition (ED) sites from photon radiation form a uniform random pattern, but there is indication of differences in RBE values among different photon sources used in brachytherapy. The aim of this work is to analyse the spatial patterns of EDs from 103Pd, 125I, 192Ir, 137Cs sources commonly used in brachytherapy and a 60Co source as a reference radiation. The results suggest that there is both a non-uniform and a uniform random component to the frequency distribution of distances to the nearest neighbour ED. The closest neighbouring EDs show high spatial correlation for all investigated radiation qualities, whilst the uniform random component dominates for neighbours with longer distances for the three higher mean photon energy sources (192Ir, 137Cs, and 60Co). The two lower energy photon emitters (103Pd and 125I) present a very small uniform random component. The ratio of frequencies of clusters with respect to 60Co differs up to 15% for the lower energy sources and less than 2% for the higher energy sources when the maximum distance between each pair of EDs is 2 nm. At distances relevant to DNA damage, cluster patterns can be differentiated between the lower and higher energy sources. This may be part of the explanation to the reported difference in RBE values with initial DSB yields as an endpoint for these brachytherapy sources.
Comparing the Performance of Japan's Earthquake Hazard Maps to Uniform and Randomized Maps
NASA Astrophysics Data System (ADS)
Brooks, E. M.; Stein, S. A.; Spencer, B. D.
2015-12-01
The devastating 2011 magnitude 9.1 Tohoku earthquake and the resulting shaking and tsunami were much larger than anticipated in earthquake hazard maps. Because this and all other earthquakes that caused ten or more fatalities in Japan since 1979 occurred in places assigned a relatively low hazard, Geller (2011) argued that "all of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas," so a map showing uniform hazard would be preferable to the existing map. Defenders of the maps countered by arguing that these earthquakes are low-probability events allowed by the maps, which predict the levels of shaking that should be expected with a certain probability over a given time. Although such maps are used worldwide in making costly policy decisions for earthquake-resistant construction, how well these maps actually perform is unknown. We explore this hotly contested issue by comparing how well a 510-year-long record of earthquake shaking in Japan is described by the Japanese national hazard (JNH) maps, uniform maps, and randomized maps. Surprisingly, as measured by the metric implicit in the JNH maps, i.e. that during the chosen time interval the predicted ground motion should be exceeded only at a specific fraction of the sites, both uniform and randomized maps do better than the actual maps. However, using as a metric the squared misfit between maximum observed shaking and that predicted, the JNH maps do better than uniform or randomized maps. These results indicate that the JNH maps are not performing as well as expected, that the factors controlling map performance are complicated, and that learning more about how maps perform and why would be valuable in making more effective policy.
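A small Python sketch of the two performance metrics described above (the fractional-exceedance criterion implicit in the maps, and the squared misfit); the arrays and the 10% target fraction are assumptions for illustration only:

```python
import numpy as np

def exceedance_metric(observed_max, predicted, target_fraction):
    """|f - f0|: distance between the achieved and intended fractions
    of sites whose observed shaking exceeded the mapped value."""
    f = np.mean(observed_max > predicted)
    return abs(f - target_fraction)

def squared_misfit(observed_max, predicted):
    """Mean squared difference between observed maxima and map values."""
    return np.mean((observed_max - predicted) ** 2)

rng = np.random.default_rng(2)
predicted = rng.uniform(0.2, 0.8, size=100)          # hypothetical map values
observed = predicted + rng.normal(0, 0.2, size=100)  # hypothetical observed maxima
print(exceedance_metric(observed, predicted, target_fraction=0.1))
print(squared_misfit(observed, predicted))
```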
McGarvey, Richard; Burch, Paul; Matthews, Janet M
2016-01-01
Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying that transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by the standard variance estimators used to compute confidence intervals. The true variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with a second, differently generated set of spatial point populations, ν₈ and ν(W) again being the best performers in the longer-range autocorrelated populations. However, no systematic variance estimator tested was free from bias. On balance, systematic designs bring narrower confidence intervals in clustered populations, while random designs permit unbiased estimates of (often wider) confidence intervals. The search continues for better estimators of sampling variance for the systematic survey mean.
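A compact Python simulation in the spirit of the comparison above: a patchy point population is surveyed with 100 transects, either at random or on a one-start aligned 10 × 10 systematic grid, and the replicate variances of the two survey means are compared. Patch sizes, counts, and replicate numbers are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Clustered population on a 100 x 100 grid of candidate transects:
# organisms restricted to a few habitat patches (a toy stand-in for
# the Matern-cluster populations used in the paper).
density = np.zeros((100, 100))
for _ in range(12):                           # 12 habitat patches
    i, j = rng.integers(0, 90, size=2)
    density[i:i + 10, j:j + 10] = rng.poisson(20, (10, 10))

def random_survey():
    idx = rng.choice(100 * 100, size=100, replace=False)
    return density.ravel()[idx].mean()

def systematic_survey():
    oi, oj = rng.integers(0, 10, size=2)      # random start, aligned grid
    return density[oi::10, oj::10].mean()     # 10 x 10 = 100 transects

rand = [random_survey() for _ in range(10_000)]
syst = [systematic_survey() for _ in range(10_000)]
print(np.var(rand) / np.var(syst))  # > 1 when the systematic design is more precise
```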
Ellenberger, David; Friede, Tim
2016-08-05
Methods for change point (also sometimes referred to as threshold or breakpoint) detection in binary sequences are not new and were introduced as early as 1955. Much of the research in this area has focussed on asymptotic and exact conditional methods. Here we develop an exact unconditional test, which treats the total number of events as random instead of conditioning on the number of observed events. The new test is shown to be uniformly more powerful than Worsley's exact conditional test, and means for its efficient numerical calculation are given. Adaptations of methods by Berger and Boos are made to deal with the nuisance parameter imposed by the unknown event probability. The methods are compared in a Monte Carlo simulation study and applied to a cohort of patients undergoing traumatic orthopaedic surgery involving external fixators, where a change in pin site infections is investigated. The unconditional test controls the type I error rate at the nominal level and is uniformly more powerful than (or, to be more precise, uniformly at least as powerful as) Worsley's exact conditional test, which is very conservative for small sample sizes. In the application, a beneficial effect associated with the introduction of a new treatment procedure for pin site care could be revealed. We consider the new test an effective and easy to use exact test, which is recommended in small sample size change point problems in binary sequences.
The contribution of simple random sampling to observed variations in faecal egg counts.
Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I
2012-09-10
It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conform to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown, from a theoretical perspective, to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, together with illustrative examples of the potentially large confidence intervals that can arise from observed faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
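To make the Poisson point concrete, a short Python sketch (assuming scipy) of an exact Poisson confidence interval for a raw McMaster count; the multiplication factor of 50 eggs per gram per egg counted is a common but assumed value, not one taken from the paper:

```python
from scipy.stats import chi2

def mcmaster_ci(count, factor=50, alpha=0.05):
    """Exact Poisson CI (in eggs per gram) for a raw McMaster count,
    via the chi-square link to the Poisson. 'factor' is the slide
    multiplication factor and depends on the dilution actually used."""
    lo = 0.5 * chi2.ppf(alpha / 2, 2 * count) if count > 0 else 0.0
    hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (count + 1))
    return count * factor, lo * factor, hi * factor

# 4 eggs counted reports 200 epg, but the 95% CI is roughly 55-512 epg:
print(mcmaster_ci(4))
```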
Visibility of wavelet quantization noise
NASA Technical Reports Server (NTRS)
Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.
1997-01-01
The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−λ), where r is the display visual resolution in pixels/degree and λ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
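A hedged Python sketch of a threshold model with the general shape the abstract describes (thresholds rising rapidly with the wavelet frequency r·2^(−λ)); the log-parabola form and every parameter value below are illustrative placeholders, not the fitted model reported in the paper:

```python
import numpy as np

def dwt_noise_threshold(level, r=32.0, log_a=-0.3, k=0.47, f0=0.4):
    """Illustrative threshold model:
        log10 T = log_a + k * (log10 f - log10 f0)**2,
    with f = r * 2**(-level) the wavelet spatial frequency in
    cycles/degree. All parameters here are made-up placeholders;
    the published model also includes orientation factors."""
    f = r * 2.0 ** (-level)
    return 10 ** (log_a + k * (np.log10(f) - np.log10(f0)) ** 2)

for lam in range(1, 6):
    print(lam, round(dwt_noise_threshold(lam), 3))  # thresholds fall with level
```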
Enhanced hyperuniformity from random reorganization.
Hexner, Daniel; Chaikin, Paul M; Levine, Dov
2017-04-25
Diffusion relaxes density fluctuations toward a uniform random state whose variance in regions of volume ℓ^d scales as σ² ~ ℓ^(−d). Systems whose fluctuations decay faster, σ² ~ ℓ^(−λ) with λ > d, are called hyperuniform. The larger λ, the more uniform, with systems like crystals achieving the maximum value: λ = d + 1. Although finite temperature equilibrium dynamics will not yield hyperuniform states, driven, nonequilibrium dynamics may. Such is the case, for example, in a simple model where overlapping particles are each given a small random displacement. Above a critical particle density ρ_c, the system evolves forever, never finding a configuration where no particles overlap. Below ρ_c, however, it eventually finds such a state, and stops evolving. This "absorbing state" is hyperuniform up to a length scale ξ, which diverges at ρ_c. An important question is whether hyperuniformity survives noise and thermal fluctuations. We find that hyperuniformity of the absorbing state is not only robust against noise, diffusion, or activity, but that such perturbations reduce fluctuations toward their limiting behavior, λ → d + 1, a uniformity similar to random close packing and early universe fluctuations, but with arbitrary controllable density.
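A minimal one-dimensional Python sketch of the random-organization model described above: overlapping particles receive small random kicks each sweep until, below the critical density, an absorbing overlap-free state is found. System size, kick size, and density are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)

def random_organization(n=500, density=0.4, eps=1.0, steps=50_000):
    """Particles on a unit ring with diameter density/n; any particle
    whose gap to a neighbour is below the diameter is 'active' and is
    kicked by a uniform displacement of up to eps diameters. Returns
    the sweep at which the absorbing state is reached (or the cutoff;
    near the critical density many more sweeps may be needed)."""
    diam = density / n
    x = rng.uniform(size=n)
    for t in range(steps):
        x.sort()
        gaps = np.diff(np.r_[x, x[0] + 1.0])   # n gaps, with wraparound
        active = gaps < diam                   # overlap with right neighbour
        hit = active | np.roll(active, 1)      # overlap on either side
        if not hit.any():
            return t, x                        # absorbing state reached
        x[hit] = (x[hit] + rng.uniform(-eps, eps, hit.sum()) * diam) % 1.0
    return steps, x                            # still active at cutoff

t, x = random_organization()
print("absorbed after", t, "sweeps")
```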
32 CFR Appendix B to Part 104 - Sample Employer Notification of Uniformed Service
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 1 2013-07-01 2013-07-01 false Sample Employer Notification of Uniformed Service B Appendix B to Part 104 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE... MEMBERS AND FORMER SERVICE MEMBERS OF THE UNIFORMED SERVICES Pt. 104, App. B Appendix B to Part 104—Sample...
32 CFR Appendix B to Part 104 - Sample Employer Notification of Uniformed Service
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 1 2011-07-01 2011-07-01 false Sample Employer Notification of Uniformed Service B Appendix B to Part 104 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE... MEMBERS AND FORMER SERVICE MEMBERS OF THE UNIFORMED SERVICES Pt. 104, App. B Appendix B to Part 104—Sample...
Asteroid orbital inversion using uniform phase-space sampling
NASA Astrophysics Data System (ADS)
Muinonen, K.; Pentikäinen, H.; Granvik, M.; Oszkiewicz, D.; Virtanen, J.
2014-07-01
We review statistical inverse methods for asteroid orbit computation from a small number of astrometric observations and short time intervals of observations. With the help of Markov-chain Monte Carlo methods (MCMC), we present a novel inverse method that utilizes uniform sampling of the phase space for the orbital elements. The statistical orbital ranging method (Virtanen et al. 2001, Muinonen et al. 2001) was developed to resolve the long-lasting challenges in the initial computation of orbits for asteroids. The ranging method starts from the selection of a pair of astrometric observations. Thereafter, the topocentric ranges and angular deviations in R.A. and Decl. are randomly sampled. The two Cartesian positions allow for the computation of orbital elements and, subsequently, the computation of ephemerides for the observation dates. Candidate orbital elements are included in the sample of accepted elements if the χ²-value between the observed and computed observations is within a pre-defined threshold. The sample orbital elements obtain weights based on a certain debiasing procedure. When the weights are available, the full sample of orbital elements allows probabilistic assessments of, e.g., object classification and ephemeris computation, as well as the computation of collision probabilities. The MCMC ranging method (Oszkiewicz et al. 2009; see also Granvik et al. 2009) replaces the original sampling algorithm described above with a proposal probability density function (p.d.f.), resulting in a chain of sample orbital elements in the phase space. MCMC ranging is based on a bivariate Gaussian p.d.f. for the topocentric ranges, and allows the sampling to focus on the phase-space domain with most of the probability mass. In the virtual-observation MCMC method (Muinonen et al. 2012), the proposal p.d.f. for the orbital elements is chosen to mimic the a posteriori p.d.f. for the elements: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, corresponding virtual least-squares orbital elements are derived using the Nelder-Mead downhill simplex method; third, repeating the procedure twice allows the computation of a difference between two sets of virtual orbital elements; and, fourth, this orbital-element difference constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal p.d.f. In a discrete approximation, the allowed proposals coincide with the differences that are based on a large number of pre-computed sets of virtual least-squares orbital elements. The virtual-observation MCMC method is thus based on the characterization of the relevant volume in the orbital-element phase space. Here we utilize MCMC to map the phase-space domain of acceptable solutions. We can make use of the proposal p.d.f.s from the MCMC ranging and virtual-observation methods. The present phase-space mapping produces, upon convergence, a uniform sampling of the solution space within a pre-defined χ²-value. The weights of the sampled orbital elements are then computed on the basis of the corresponding χ²-values. The present method resembles the original ranging method. On one hand, MCMC mapping is insensitive to local extrema in the phase space and efficiently maps the solution space, somewhat contrary to the MCMC methods described above.
On the other hand, MCMC mapping can suffer from producing a small number of sample elements with small χ²-values, in resemblance to the original ranging method. We apply the methods to example near-Earth, main-belt, and transneptunian objects, and highlight the utilization of the methods in the data processing and analysis pipeline of the ESA Gaia space mission.
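A toy Python sketch of the mapping idea in the last paragraph: a random-walk Metropolis chain whose target is the indicator of the region χ²(x) ≤ threshold, so that upon convergence the chain samples that acceptable-solution region uniformly. The two-parameter χ² surface here is a made-up stand-in for a real orbital-element misfit:

```python
import numpy as np

rng = np.random.default_rng(5)

def chi2(x):
    """Toy stand-in for the observation misfit of an orbit solution;
    its sublevel sets are curved, banana-shaped regions."""
    return x[0] ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2

def mcmc_map(threshold=2.0, n=50_000, step=0.25):
    """Random-walk Metropolis with the *indicator* of chi2 <= threshold
    as target: proposals are accepted iff they stay acceptable, which
    (after burn-in) yields uniform samples over the region."""
    x = np.zeros(2)                  # starting point must lie inside
    out = []
    for _ in range(n):
        prop = x + step * rng.normal(size=2)
        if chi2(prop) <= threshold:  # accept iff still acceptable
            x = prop
        out.append(x.copy())
    return np.array(out)

samples = mcmc_map()
print(samples.mean(axis=0), samples.shape)
```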
Gao, Shuang; Liu, Gang; Chen, Qilai; Xue, Wuhong; Yang, Huali; Shang, Jie; Chen, Bin; Zeng, Fei; Song, Cheng; Pan, Feng; Li, Run-Wei
2018-02-21
Resistive random access memory (RRAM) with inherent logic-in-memory capability exhibits great potential for constructing beyond-von-Neumann computers. Unipolar RRAM is particularly promising because its single-polarity operation enables large-scale crossbar logic-in-memory circuits with the highest integration density and simpler peripheral control circuits. However, unipolar RRAM usually exhibits poor switching uniformity because of the random activation of conducting filaments, and consequently cannot meet the strict uniformity requirement for logic-in-memory applications. In this contribution, a new methodology that constructs cone-shaped conducting filaments by using a chemically active metal cathode is proposed to improve unipolar switching uniformity. Such a metal cathode reacts spontaneously with the oxide switching layer to form an interfacial layer, which together with the metal cathode itself can act as a load resistor to prevent the overgrowth of conducting filaments and thus make them more cone-like. In this way, the rupture of conducting filaments can be strictly limited to the tip region, making their residual parts favorable locations for subsequent filament growth and thus suppressing their random regeneration. As such, a novel "one switch + one unipolar RRAM cell" hybrid structure is capable of realizing all 16 Boolean logic functions for large-scale logic-in-memory circuits.
Application of Raman spectroscopy for on-line monitoring of low dose blend uniformity.
Hausman, Debra S; Cambron, R Thomas; Sakr, Adel
2005-07-14
On-line Raman spectroscopy was used to evaluate the effect of blending time on low dose, 1%, blend uniformity of azimilide dihydrochloride. An 8 qt blender was used for the experiments and instrumented with a Raman probe through the I-bar port. The blender was slowed to 6.75 rpm to better illustrate the blending process (normal speed is 25 rpm). Uniformity was reached after 20 min of blending at 6.75 rpm (135 revolutions, or 5.4 min at 25 rpm). On-line Raman analysis of blend uniformity provided more benefits than traditional thief sampling and off-line analysis. On-line Raman spectroscopy enabled the generation of data-rich blend profiles, owing to the ability to collect a large number of samples during the blending process (sampling every 20 s). In addition, the Raman blend profile was generated rapidly, compared to the lengthy time needed to complete a blend profile with thief sampling and off-line analysis. The on-line Raman blend uniformity results were also significantly correlated (p-value < 0.05) with the HPLC uniformity results of thief samples.
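One way such an on-line profile can be turned into an endpoint decision is a moving relative-standard-deviation criterion on the API-specific signal. The Python sketch below uses a window of 10 scans and a 5% RSD limit as purely illustrative choices, not values from the study:

```python
import numpy as np

def blend_endpoint(intensities, window=10, rsd_limit=0.05):
    """Return the index of the first scan at which the moving relative
    standard deviation (RSD) of the API-specific intensity over the
    last 'window' scans falls below 'rsd_limit'; None if never."""
    x = np.asarray(intensities, dtype=float)
    for i in range(window, len(x) + 1):
        seg = x[i - window:i]
        if seg.std(ddof=1) / seg.mean() < rsd_limit:
            return i - 1
    return None

# toy profile: intensity variability decays as the blend homogenizes
rng = np.random.default_rng(6)
scans = 1.0 + rng.normal(0, 0.3, 120) * np.exp(-np.arange(120) / 25)
print(blend_endpoint(scans))
```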
Acquisition of STEM Images by Adaptive Compressive Sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Weiyi; Feng, Qianli; Srinivasan, Ramprakash
Compressive Sensing (CS) allows a signal to be sparsely measured first and accurately recovered later in software [1]. In scanning transmission electron microscopy (STEM), it is possible to compress an image spatially by reducing the number of measured pixels, which decreases electron dose and increases sensing speed [2,3,4]. The two requirements for CS to work are: (1) sparsity of basis coefficients and (2) incoherence of the sensing system and the representation system. However, when pixels are missing from the image, it is difficult to have an incoherent sensing matrix. Nevertheless, dictionary learning techniques such as Beta-Process Factor Analysis (BPFA) [5] are able to simultaneously discover a basis and the sparse coefficients in the case of missing pixels. On top of CS, we would like to apply active learning [6,7] to further reduce the proportion of pixels being measured, while maintaining image reconstruction quality. Suppose we initially sample 10% of random pixels. We wish to select the next 1% of pixels that are most useful in recovering the image. Now, we have 11% of pixels, and we want to decide the next 1% of "most informative" pixels. Active learning methods are online and sequential in nature. Our goal is to adaptively discover the best sensing mask during acquisition using feedback about the structures in the image. In the end, we hope to recover a high quality reconstruction with a dose reduction relative to the non-adaptive (random) sensing scheme. In doing this, we try three metrics applied to the partial reconstructions for selecting the new set of pixels: (1) variance, (2) Kullback-Leibler (KL) divergence using a Radial Basis Function (RBF) kernel, and (3) entropy. Figs. 1 and 2 display the comparison of Peak Signal-to-Noise Ratio (PSNR) using these three different active learning methods at different percentages of sampled pixels. At the 20% level, all three active learning methods underperform the original CS without active learning. However, they all beat the original CS as more of the "most informative" pixels are sampled. One can also argue that CS equipped with active learning requires fewer sampled pixels to achieve the same value of PSNR than CS with pixels randomly sampled, since all three PSNR curves with active learning grow at a faster pace than that without active learning. For this particular STEM image, by observing the reconstructed images and the sensing masks, we find that while the method based on the RBF kernel acquires samples more uniformly, the one based on entropy samples more areas of significant change, thus less uniformly. The KL-divergence method performs the best in terms of reconstruction error (PSNR) for this example [8].
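A simplified Python sketch of the variance criterion in this adaptive loop: the measured pixels are interpolated into a rough reconstruction, each pixel is scored by the local variance of that reconstruction, and the highest-scoring unmeasured pixels are returned as the next batch. The nearest-neighbour interpolation stands in for the BPFA reconstruction used in the actual work, and the window and batch sizes are assumptions:

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import uniform_filter

def next_pixels(image_shape, measured, values, batch_frac=0.01):
    """Return the unmeasured (row, col) pixels with the highest local
    variance in a rough reconstruction of the measured data."""
    h, w = image_shape
    yy, xx = np.mgrid[0:h, 0:w]
    recon = griddata(measured, values, (yy, xx), method="nearest")
    local_mean = uniform_filter(recon, size=5)
    local_var = uniform_filter(recon ** 2, size=5) - local_mean ** 2
    score = local_var.ravel()
    score[measured[:, 0] * w + measured[:, 1]] = -np.inf  # skip measured pixels
    k = int(batch_frac * h * w)
    return np.column_stack(np.unravel_index(np.argsort(score)[-k:], (h, w)))

rng = np.random.default_rng(7)
img = rng.random((64, 64))                       # stand-in STEM image
pts = np.column_stack([rng.integers(0, 64, 400), rng.integers(0, 64, 400)])
print(next_pixels((64, 64), pts, img[pts[:, 0], pts[:, 1]])[:3])
```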
Tanner, Bertrand C.W.; McNabb, Mark; Palmer, Bradley M.; Toth, Michael J.; Miller, Mark S.
2014-01-01
Diminished skeletal muscle performance with aging, disuse, and disease may be partially attributed to the loss of myofilament proteins. Several laboratories have found a disproportionate loss of myosin protein content relative to other myofilament proteins, but due to methodological limitations, the structural manifestation of this protein loss is unknown. To investigate how variations in myosin content affect ensemble cross-bridge behavior and force production, we simulated muscle contraction in the half-sarcomere as myosin was removed either i) uniformly, from the Z-line end of thick-filaments, or ii) randomly, along the length of thick-filaments. Uniform myosin removal decreased force production, showing a slightly steeper force-to-myosin content relationship than the 1:1 relationship that would be expected from the loss of cross-bridges. Random myosin removal also decreased force production, but this decrease was less than observed with uniform myosin loss, largely due to increased myosin attachment time (ton) and fractional cross-bridge binding with random myosin loss. These findings support our prior observations that prolonged ton may augment force production in single fibers with randomly reduced myosin content from chronic heart failure patients. These simulations also illustrate that the pattern of myosin loss along thick-filaments influences ensemble cross-bridge behavior and the maintenance of force throughout the sarcomere. PMID:24486373
IndeCut evaluates performance of network motif discovery algorithms.
Ansariola, Mitra; Megraw, Molly; Koslicki, David
2018-05-01
Genomic networks represent a complex map of molecular interactions which are descriptive of the biological processes occurring in living cells. Identifying the small over-represented circuitry patterns in these networks helps generate hypotheses about the functional basis of such complex processes. Network motif discovery is a systematic way of achieving this goal. However, a reliable network motif discovery outcome requires generating random background networks which are the result of a uniform and independent graph sampling method. To date, there has been no method to numerically evaluate whether any network motif discovery algorithm performs as intended on realistically sized datasets; thus it was not possible to assess the validity of resulting network motifs. In this work, we present IndeCut, the first method to date that characterizes network motif finding algorithm performance in terms of uniform sampling on realistically sized networks. We demonstrate that it is critical to use IndeCut prior to running any network motif finder for two reasons. First, IndeCut indicates the number of samples needed for a tool to produce an outcome that is both reproducible and accurate. Second, IndeCut allows users to choose the tool that generates samples in the most independent fashion for their network of interest among many available options. The open source software package is available at https://github.com/megrawlab/IndeCut. megrawm@science.oregonstate.edu or david.koslicki@math.oregonstate.edu. Supplementary data are available at Bioinformatics online.
Development of a methodology to evaluate material accountability in pyroprocess
NASA Astrophysics Data System (ADS)
Woo, Seungmin
This study investigates the effect of the non-uniform nuclide composition in spent fuel on material accountancy in the pyroprocess. High-fidelity depletion simulations are performed using the Monte Carlo code SERPENT in order to determine nuclide composition as a function of axial and radial position within fuel rods and assemblies, and burnup. For improved accuracy, the simulations use short burnup steps (25 days or less), Xe-equilibrium treatment (to avoid oscillations over burnup steps), an axial moderator temperature distribution, and 30 axial meshes. Analytical solutions of the simplified depletion equations are derived to understand the axial non-uniformity of nuclide composition in spent fuel. The cosine shape of the axial neutron flux distribution dominates the axial non-uniformity of the nuclide composition. Combined cross sections and time also generate axial non-uniformity, as the exponential term in the analytical solution consists of the neutron flux, the cross section, and time. The axial concentration distribution for a nuclide having a small cross section becomes steeper than that for a nuclide having a large cross section, because the axial flux is weighted by the cross section in the exponential term of the analytical solution. Similarly, the non-uniformity becomes flatter with increasing burnup, because the time term in the exponential increases. Based on the developed numerical recipes, and by decoupling the axial distributions from predetermined representative radial distributions matched by axial height, the axial and radial composition distributions for representative spent nuclear fuel assemblies (the Type-0, -1, and -2 assemblies after 1, 2, and 3 depletion cycles) are obtained. These data are appropriately modified to depict the processing of materials in the head-end steps of the pyroprocess, namely chopping, voloxidation, and granulation. The expectation and standard deviation of the Pu-to-244Cm ratio under single-granule sampling are calculated by the central limit theorem and the Geary-Hinkley transformation. Then, uncertainty propagation through the key pyroprocess is conducted to analyze the Material Unaccounted For (MUF), a random variable defined as the receipts minus the shipments of a process, in the system. The random variable LOPu, defined as the original Pu mass minus the Pu mass after a missing scenario, is used to evaluate the non-detection probability at each Key Measurement Point (KMP). The number of assemblies for which LOPu reaches 8 kg is considered in this calculation. The probability of detecting an 8 kg LOPu is evaluated with respect to the granule and powder sizes using event tree analysis and hypothesis testing. There are cases in which the probability of detection for the 8 kg LOPu is less than 95%. In order to enhance the detection rate, a new Material Balance Area (MBA) model is defined for the key pyroprocess. The probabilities of detection for all spent fuel types based on the new MBA model are greater than 99%. Furthermore, the probability of detection increases significantly as the granule sample size used to evaluate the Pu-to-244Cm ratio before the key pyroprocess is increased.
Based on these observations, although Pu material accountability in the pyroprocess is affected by the non-uniformity of nuclide composition when the Pu-to-244Cm-ratio method is applied, this can be surmounted by decreasing the uncertainty of the measured ratio, i.e., by increasing sample sizes and by modifying the MBAs and KMPs. (Abstract shortened by ProQuest.)
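A small Python sketch of the Geary-Hinkley step mentioned above, which maps a candidate value of a ratio of (approximately) jointly normal quantities to a standard-normal deviate. The per-granule means, the 10% coefficient of variation, and the 5% tolerance below are made-up numbers used only to show how detection sharpens with granule sample size:

```python
import numpy as np
from scipy.stats import norm

def gh_deviate(r, mu_x, mu_y, sd_x, sd_y, rho=0.0):
    """Geary-Hinkley transform for R = X/Y with (X, Y) jointly normal:
    returns an approximately standard-normal deviate for candidate
    ratio r. Valid when P(Y <= 0) is negligible."""
    num = mu_y * r - mu_x
    den = np.sqrt(sd_y**2 * r**2 - 2 * rho * sd_x * sd_y * r + sd_x**2)
    return num / den

mu_pu, mu_cm, cv = 1.0, 4.0e-3, 0.10   # hypothetical per-granule figures
r_true = mu_pu / mu_cm
for n in (1, 10, 100):                  # granule sample size
    sd_pu = cv * mu_pu / np.sqrt(n)     # CLT: sd shrinks as 1/sqrt(n)
    sd_cm = cv * mu_cm / np.sqrt(n)
    # two-sided tail probability of the deviate for a 5%-high ratio
    z = gh_deviate(1.05 * r_true, mu_pu, mu_cm, sd_pu, sd_cm)
    print(n, round(2 * norm.sf(abs(z)), 4))  # shrinks rapidly with n
```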
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias
A Random Geometric Graph (RGG) is constructed by distributing n nodes uniformly at random in the unit square and connecting two nodes if their Euclidean distance is at most r, for some prescribed r. They analyze the following randomized broadcast algorithm on RGGs: at the beginning, there is only one informed node; then, in each round, each informed node chooses a neighbor uniformly at random and informs it. They prove that this algorithm informs every node in the largest component of an RGG in O(√n/r) rounds with high probability. This holds for any value of r larger than the critical value for the emergence of a giant component. In particular, the result implies that the diameter of the giant component is Θ(√n/r).
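A small Python experiment matching the setup above: build an RGG on the unit square and run the push broadcast, counting rounds until node 0's whole component is informed. Here n = 2000 and r = 0.05 are arbitrary choices above the giant-component regime:

```python
import numpy as np

rng = np.random.default_rng(8)

def rgg_push_rounds(n=2000, r=0.05):
    """Rounds for push broadcasting to inform node 0's component."""
    pts = rng.uniform(size=(n, 2))
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    adj = [np.flatnonzero(row <= r * r) for row in d2]
    adj = [nb[nb != i] for i, nb in enumerate(adj)]   # drop self-loops
    informed = np.zeros(n, bool)
    informed[0] = True
    comp, stack = {0}, [0]        # BFS: nodes reachable from node 0
    while stack:
        for j in adj[stack.pop()]:
            if j not in comp:
                comp.add(j); stack.append(j)
    rounds = 0
    while informed.sum() < len(comp):
        for i in np.flatnonzero(informed):            # synchronous round
            if len(adj[i]):
                informed[rng.choice(adj[i])] = True
        rounds += 1
    return rounds

print(rgg_push_rounds())
```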
A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2010-08-01
A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007, and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained altogether for any n for d = 1, 2, 4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q > 0. Given the total walk length being equal to 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q = 1. Simple analytical expressions are obtained for any d ≥ 2 and n ≥ 2 for the endpoint distributions of two families of walks whose q are integers or half-integers which depend solely on d. These endpoint distributions have a simple geometrical interpretation. Expressed for a two-step planar walk whose q = 1, it means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection on the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint in the inside of a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4. Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law, are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are more particularly considered.
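A quick Monte Carlo check, in Python, of the uniform-disc case cited above (three steps in 2D with q = 1): step lengths are Dirichlet(1,1,1), so the total length is 1, directions are uniform, and for a uniform law on the unit disc the radial CDF should be ρ²:

```python
import numpy as np

rng = np.random.default_rng(9)

def pearson_dirichlet_endpoint(n_steps=3, q=1.0, size=200_000):
    """Endpoints of planar walks with Dirichlet(q, ..., q) step lengths
    (total length 1) and directions uniform on the circle."""
    lengths = rng.dirichlet([q] * n_steps, size=size)   # rows sum to 1
    theta = rng.uniform(0, 2 * np.pi, size=(size, n_steps))
    x = (lengths * np.cos(theta)).sum(axis=1)
    y = (lengths * np.sin(theta)).sum(axis=1)
    return x, y

x, y = pearson_dirichlet_endpoint()
r = np.hypot(x, y)
for rho in (0.25, 0.5, 0.75):
    print(rho, (r <= rho).mean(), rho ** 2)   # empirical CDF vs rho**2
```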
Systematic and random variations in digital Thematic Mapper data
NASA Technical Reports Server (NTRS)
Duggin, M. J. (Principal Investigator); Sakhavat, H.
1985-01-01
Radiance recorded by any remote sensing instrument will contain noise consisting of both systematic and random variations. Systematic variations may be due to sun-target-sensor geometry, atmospheric conditions, and the interaction of the spectral characteristics of the sensor with those of upwelling radiance. Random variations in the data may be caused by variations in the nature and in the heterogeneity of the ground cover, by variations in atmospheric transmission, and by the interaction of these variations with the sensing device. It is important to be aware of the extent of random and systematic errors in recorded radiance data across ostensibly uniform ground areas in order to assess the impact on quantitative image analysis procedures for both the single-date and the multidate cases. It is the intention here to examine the systematic and the random variations in digital radiance data recorded in each band by the Thematic Mapper over crop areas which are ostensibly uniform and which are free from visible cloud.
Pathwise upper semi-continuity of random pullback attractors along the time axis
NASA Astrophysics Data System (ADS)
Cui, Hongyong; Kloeden, Peter E.; Wu, Fuke
2018-07-01
The pullback attractor of a non-autonomous random dynamical system is a time-indexed family of random sets, typically having the form {A_t(·)}_{t∈ℝ} with each A_t(·) a random set. This paper is concerned with the nature of such time-dependence. It is shown that the upper semi-continuity of the mapping t ↦ A_t(ω) for each fixed ω has an equivalence relationship with the uniform compactness of the local union ∪_{s∈I} A_s(ω), where I ⊂ ℝ is compact. Applied to a semi-linear degenerate parabolic equation with additive noise and a wave equation with multiplicative noise, we show that, in order to prove the above locally uniform compactness and upper semi-continuity, no additional conditions are required, in which sense the two properties appear to be general properties satisfied by a large number of real models.
Characterization of total ionizing dose damage in COTS pinned photodiode CMOS image sensors
NASA Astrophysics Data System (ADS)
Wang, Zujun; Ma, Wuying; Huang, Shaoyan; Yao, Zhibin; Liu, Minbo; He, Baoping; Liu, Jing; Sheng, Jiangkun; Xue, Yuan
2016-03-01
The characterization of total ionizing dose (TID) damage in COTS pinned photodiode (PPD) CMOS image sensors (CISs) is investigated. The radiation experiments are carried out at a 60Co γ-ray source. The CISs are produced by 0.18-μm CMOS technology and the pixel architecture is 8T global shutter pixel with correlated double sampling (CDS) based on a 4T PPD front end. The parameters of CISs such as temporal domain, spatial domain, and spectral domain are measured at the CIS test system as the EMVA 1288 standard before and after irradiation. The dark current, random noise, dark signal non-uniformity (DSNU), photo response non-uniformity (PRNU), overall system gain, saturation output, dynamic range (DR), signal to noise ratio (SNR), quantum efficiency (QE), and responsivity versus the TID are reported. The behaviors of the tested CISs show remarkable degradations after radiation. The degradation mechanisms of CISs induced by TID damage are also analyzed.
Corn rootworms (Coleoptera: Chrysomelidae) in space and time
NASA Astrophysics Data System (ADS)
Park, Yong-Lak
Spatial dispersion is a main characteristic of insect populations. Dispersion pattern provides useful information for developing effective sampling and scouting programs because it affects sampling accuracy, efficiency, and precision. Insect dispersion, however, is dynamic in space and time and largely dependent upon interactions among insect, plant, and environmental factors. This study investigated the spatial and temporal dynamics of corn rootworm dispersion at different spatial scales by using the global positioning system, the geographic information system, and geostatistics. Egg dispersion pattern was random or uniform in 8-ha cornfields, but could be aggregated at a smaller scale. Larval dispersion pattern was aggregated regardless of the spatial scales used in this study. Soil moisture positively affected corn rootworm egg and larval dispersions. Adult dispersion tended to be aggregated during the peak population period and random or uniform early and late in the season, and corn plant phenology was a major factor determining dispersion patterns. The dispersion pattern of root injury by corn rootworm larval feeding was aggregated, and the degree of aggregation increased as root injury increased within the range of root injury observed in the microscale study. Between-year relationships in dispersion among eggs, larvae, adults, and environment provided a strategy for predicting potential root damage in the subsequent year. The best prediction map for the subsequent year's potential root damage was the dispersion map of adults at the population peak in the cornfield. The prediction map was used to develop site-specific pest management that can reduce chemical input and increase control efficiency by controlling pests only where management is needed. This study demonstrated the spatio-temporal dynamics of insect populations and the spatial interactions among insects, plants, and environment.
Processing of laser formed SiC powder
NASA Technical Reports Server (NTRS)
Haggerty, J. S.; Bowen, H. K.
1985-01-01
Superior SiC characteristics can be achieved through the use of ideal constituent powders and careful post-synthesis processing steps. High-purity, nonagglomerated, spherical SiC powders of approx. 1000 A uniform diameter were produced. This required a major revision of the particle formation and growth model, from one based on classical nucleation and growth to one based on collision and coalescence of Si particles followed by their carburization. Dispersions based on pure organic solvents as well as steric stabilization were investigated. Although stable dispersions were formed by both, subsequent part fabrication emphasized the pure solvents, since fewer problems with drying and residuals of the high-purity particles were anticipated. Test parts were made by the colloidal pressing technique; both the liquid filtration and consolidation (rearrangement) stages were modeled. Green densities corresponding to a random close-packed structure (approx. 63%) were achieved; this highly perfect structure has a high, uniform coordination number (greater than 11), approaching the quality of an ordered structure without introducing domain boundary effects. After drying, parts were densified at temperatures ranging from 1800 to 2100 C. Optimum densification temperatures will probably be in the 1900 to 2000 C range, based on these preliminary results, which showed that the 2050 C samples had experienced substantial grain growth. Although overfired, the 2050 C samples exhibited excellent mechanical properties. Biaxial tensile strengths up to 714 MPa and Vickers hardness values of 2430 kg/sq mm were both more typical of hot-pressed than sintered SiC. Both result from the absence of large defects and the confinement of residual porosity (less than 2.5%) to small-diameter, uniformly distributed pores.
Non-Uniformly Sampled MR Correlated Spectroscopic Imaging in Breast Cancer and Nonlinear Reconstruction
2017-10-01
Annual report for Award W81XWH-16-1-0524, covering the period 30 Sep 2016 - 29 Sep 2017; only the report documentation page is available for this entry.
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies that emphasize 3D non-uniform inspection capability: (a) adaptive sampling, (b) local adjustment sampling, and (c) finite element centroid sampling. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this strategy, one using triangle patches and the other using rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform sampling and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that initial point sets preprocessed by adaptive sampling using triangle patches are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of that method. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches; adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
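As a rough illustration of the recursive-subdivision idea behind the adaptive strategy, the Python sketch below subdivides a triangle wherever a crude flatness test fails, so samples pile up near sharp features. The test surface, the flatness criterion, and the thresholds are all assumptions; the paper's own criteria may differ.

    import numpy as np

    def surface(x, y):
        # Hypothetical test surface with a sharp ridge near x = 0.5.
        return np.exp(-50.0 * (x - 0.5) ** 2)

    def adaptive_sample(tri, depth, out, tol=0.01, max_depth=6):
        """Recursively subdivide a triangle in the (x, y) domain. A triangle
        is split when the surface height at its centroid deviates from the
        average of its corner heights (a crude curvature test), so samples
        concentrate near edges and ridges."""
        xs, ys = zip(*tri)
        z_centroid = surface(np.mean(xs), np.mean(ys))
        z_corners = np.mean([surface(x, y) for x, y in tri])
        if depth >= max_depth or abs(z_centroid - z_corners) < tol:
            out.append((np.mean(xs), np.mean(ys)))   # accept centroid sample
            return
        mids = [((tri[i][0] + tri[(i + 1) % 3][0]) / 2,
                 (tri[i][1] + tri[(i + 1) % 3][1]) / 2) for i in range(3)]
        # Four-way split: three corner triangles plus the middle triangle.
        for sub in ([tri[0], mids[0], mids[2]], [tri[1], mids[1], mids[0]],
                    [tri[2], mids[2], mids[1]], mids):
            adaptive_sample(sub, depth + 1, out, tol, max_depth)

    pts = []
    adaptive_sample([(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)], 0, pts)
    print(len(pts), "sample points, densest near the ridge at x = 0.5")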
Critical currents of Nb3Sn wires for the US-DPC coil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takayasu, M.; Gung, C.Y.; Steeves, M.M.
1991-03-01
This paper evaluates the critical current of titanium-alloyed internal-tin, jelly-roll Nb3Sn wire for use in the US-DPC coil. It was confirmed from 14 randomly selected samples that the critical-current values were uniform and consistent: the non-copper critical-current density was approximately 700 A/mm² at 10 T and 4.2 K, in agreement with expectations. A 27-strand cable-in-conduit conductor (CICC) using the low-thermal-coefficient-of-expansion superalloy Incoloy 905 yielded a critical current 5-7% below the average value of the single-strand data.
Optimizing the LSST Dither Pattern for Survey Uniformity
NASA Astrophysics Data System (ADS)
Awan, Humna; Gawiser, Eric J.; Kurczynski, Peter; Carroll, Christopher M.; LSST Dark Energy Science Collaboration
2015-01-01
The Large Synoptic Survey Telescope (LSST) will gather detailed data of the southern sky, enabling unprecedented study of Baryonic Acoustic Oscillations, which are an important probe of dark energy. These studies require a survey with highly uniform depth, and we aim to find an observation strategy that optimizes this uniformity. We have shown that in the absence of dithering (large telescope-pointing offsets), the LSST survey will vary significantly in depth. Hence, we implemented various dithering strategies, including random and repulsive random pointing offsets and spiral patterns with the spiral reaching completion in either a few months or the entire ten-year run. We employed three different implementations of dithering strategies: a single offset assigned to all fields observed on each night, offsets assigned to each field independently whenever the field is observed, and offsets assigned to each field only when the field is observed on a new night. Our analysis reveals that large dithers are crucial to guarantee survey uniformity and that assigning dithers to each field independently whenever the field is observed significantly increases this uniformity. These results suggest paths towards an optimal observation strategy that will enable LSST to achieve its science goals. We gratefully acknowledge support from the National Science Foundation REU program at Rutgers, PHY-1263280, and the Department of Energy, DE-SC0011636.
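To make the two offset families concrete, here is a toy Python sketch of purely random dithers versus a "repulsive" variant implemented as a jittered grid (one offset per cell, so offsets cannot clump). The radius value and the jittered-grid construction are illustrative assumptions, not the LSST implementation.

    import numpy as np

    rng = np.random.default_rng(1)
    R = 1.75  # maximum dither amplitude in degrees (illustrative value)

    def random_dithers(n):
        """Uniform random offsets inside a disk of radius R."""
        r = R * np.sqrt(rng.random(n))        # sqrt gives uniform areal density
        t = 2.0 * np.pi * rng.random(n)
        return np.c_[r * np.cos(t), r * np.sin(t)]

    def repulsive_random_dithers(n):
        """Jittered-grid offsets over a 2R x 2R square: one random point per
        grid cell, so offsets are spread out rather than clumped."""
        m = int(np.ceil(np.sqrt(n)))
        cells = [(i, j) for i in range(m) for j in range(m)]
        rng.shuffle(cells)
        pts = [((i + rng.random()) / m * 2 * R - R,
                (j + rng.random()) / m * 2 * R - R) for i, j in cells[:n]]
        return np.asarray(pts)

    print(random_dithers(5))
    print(repulsive_random_dithers(5))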
Percolation Thresholds in Angular Grain media: Drude Directed Infiltration
NASA Astrophysics Data System (ADS)
Priour, Donald
Pores in many realistic systems are not well-delineated channels, but the void spaces among impermeable grains that make up the medium and block charge or fluid flow. Sparse grain concentrations leave the system permeable, while concentrations in excess of a critical density block bulk fluid flow. We calculate percolation thresholds in porous materials made up of randomly placed (and oriented) disks, tetrahedrons, and cubes. To determine whether randomly generated finite samples are permeable, we deploy virtual tracer particles that are scattered (e.g., specularly) by collisions with the impenetrable angular grains. We hasten the rate of exploration (which would otherwise scale as ncoll^(1/2), where ncoll is the number of collisions with grains, if the tracers followed linear trajectories) by treating the tracer particles as charged in conjunction with a randomly directed uniform electric field. As in the Drude treatment, where a succession of many scattering events leads to a constant drift velocity, tracer displacements on average grow linearly in ncoll. By averaging over many disorder realizations for a variety of system sizes, we calculate the percolation threshold and the critical exponent that characterize the phase transition.
Appropriate time scales for nonlinear analyses of deterministic jump systems
NASA Astrophysics Data System (ADS)
Suzuki, Tomoya
2011-06-01
In the real world, there are many phenomena that are derived from deterministic systems but which fluctuate with nonuniform time intervals. This paper discusses the appropriate time scales that can be applied to such systems to analyze their properties. The financial markets are an example of such systems, wherein price movements fluctuate with nonuniform time intervals. However, it is common to apply uniform time scales such as 1-min data and 1-h data to study price movements. This paper examines the validity of such time scales by using surrogate data tests to ascertain whether the deterministic properties of the original system can be identified from uniformly sampled data. The results show that uniform time samplings are often inappropriate for nonlinear analyses. However, for other systems such as neural spikes and Internet traffic packets, which produce similar outputs, uniform time samplings are quite effective in extracting the system properties. Nevertheless, uniform samplings often generate overlapping data, which can cause false rejections of surrogate data tests.
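Surrogate data tests of the kind referred to above typically compare a nonlinear statistic on the original series against an ensemble of phase-randomized surrogates that share its linear spectrum. A minimal Python sketch, assuming a logistic-map test series and a simple time-asymmetry statistic (both illustrative choices):

    import numpy as np

    def phase_randomized_surrogate(x, rng):
        """FFT-based surrogate: keep the amplitude spectrum, randomize the
        phases. This preserves linear correlations but destroys nonlinear
        structure, implementing the linear-stochastic null hypothesis."""
        n = len(x)
        spec = np.fft.rfft(x)
        phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
        phases[0] = 0.0                       # keep the mean component real
        return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

    def time_asymmetry(x):
        """Simple nonlinear statistic: third moment of the differences.
        Linear Gaussian processes give values near zero."""
        d = np.diff(x)
        return np.mean(d ** 3)

    rng = np.random.default_rng(2)
    # Deterministic test series: uniformly sampled logistic map.
    x = np.empty(2048)
    x[0] = 0.3
    for i in range(1, len(x)):
        x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])

    stat = time_asymmetry(x)
    surr = [time_asymmetry(phase_randomized_surrogate(x, rng)) for _ in range(99)]
    rank = sum(s < stat for s in surr)        # extreme rank -> reject the null
    print(f"original statistic {stat:.4g}, rank {rank}/99 among surrogates")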
Electrophoretic sample insertion. [device for uniformly distributing samples in flow path]
NASA Technical Reports Server (NTRS)
Mccreight, L. R. (Inventor)
1974-01-01
Two conductive screens located in the flow path of an electrophoresis sample separation apparatus are charged electrically. The sample is introduced between the screens, and the charge is sufficient to disperse and hold the samples across the screens. When the charge is terminated, the samples are uniformly distributed in the flow path. Additionally, a first separation by charged properties has been accomplished.
Sapsirisavat, Vorapot; Vongsutilers, Vorasit; Thammajaruk, Narukjaporn; Pussadee, Kanitta; Riyaten, Prakit; Kerr, Stephen; Avihingsanon, Anchalee; Phanuphak, Praphan; Ruxrungtham, Kiat
2016-01-01
Ensuring that medicines meet quality standards is mandatory for ensuring safety and efficacy. There have been occasional reports of substandard generic medicines, especially in resource-limited settings where policies to control quality may be less rigorous. As HIV treatment in Thailand depends mostly on affordable generic antiretrovirals (ARVs), we performed quality assurance testing of several generic ARVs available from different sources in Thailand and one source from Vietnam. We sampled Tenofovir 300 mg, Efavirenz 600 mg, and Lopinavir/ritonavir 200/50 mg from 10 primary hospitals randomly selected from those participating in the National AIDS Program, 2 non-government organization ARV clinics, and 3 private drug stores. Quality of the ARVs was analyzed by blinded investigators at the Faculty of Pharmaceutical Science, Chulalongkorn University. The analysis included an identification test for drug molecules, a chemical composition assay to quantify the active ingredients, a uniformity of mass test, and a dissolution test to assess in-vitro drug release. Comparisons were made against the standards described in the WHO International Pharmacopeia. A total of 42 batches of ARVs from 15 sources were sampled from January-March 2015. Among those generics, 23, 17, 1, and 1 were Thai-made, Indian-made, Vietnamese-made, and Chinese-made, respectively. All sampled products, regardless of manufacturer or source, met the International Pharmacopeia standards for the composition assay, mass uniformity, and dissolution. Although local regulations restrict ARV supply to hospitals and clinics, samples of ARVs could be bought from private drug stores even without a formal prescription. Sampled generic ARVs distributed within Thailand and one Vietnamese pharmacy showed consistent quality. However, some products were illegally supplied without prescription, highlighting the importance of dispensing ARVs for treatment or prevention in facilities where continuity along the HIV treatment and care cascade is available.
Helmy, Sally A
2015-01-01
Tablet splitting is a well-established medical practice in clinical settings for multiple reasons, including cost savings and ease of swallowing. However, it does not necessarily result in weight-uniform half tablets. The objectives were to (a) investigate the effect of tablet characteristics on the weight and content uniformity of half tablets resulting from splitting 16 medications commonly used in the outpatient setting, and (b) provide recommendations for safe tablet-splitting prescribing practices. Ten random tablets from each of the selected medications were weighed and split by 5 volunteers (2 men and 3 women aged 25-44 years) using a knife. The selected medications were mirtazapine 30 mg, bromazepam 3 mg, oxcarbazepine 150 mg, sertraline 50 mg, carvedilol 25 mg, bisoprolol fumarate 10 mg, losartan 50 mg, digoxin 0.25 mg, amiodarone HCl 200 mg, metformin HCl 1,000 mg, glimepiride 4 mg, montelukast 10 mg, ibuprofen 600 mg, celecoxib 200 mg, meloxicam 15 mg, and sildenafil citrate 50 mg. The resulting half tablets were evaluated for weight and drug content uniformity in accordance with a proxy United States Pharmacopeia (USP) specification (95%-105% for digoxin and 90%-110% for the other 15 drugs). Weight and drug content uniformity were assessed by comparing the weight or drug content of the half tablets with one-half of the mean weight or drug content for all whole tablets in the sample. The percentages by which the weight and drug content of each whole tablet or half tablet differed from the sample mean values were calculated. Other relevant physical characteristics of the 16 products were measured. A total of 52 of 320 half tablets (16.2%) and 48 of 320 half tablets (15.0%) fell outside of the proxy USP specification for weight and drug content, respectively. Bromazepam, carvedilol, bisoprolol, losartan, digoxin, and meloxicam half tablets failed the weight and content uniformity test; the half tablets for the rest of the medications passed the test. Mean percent weight loss after splitting was less than 1.5% for all drugs. Bromazepam, carvedilol, and digoxin showed the highest powdering loss during the tablet-splitting process. Tablet splitting could be safer and easier when drug- and patient-specific criteria have been met. Tablet size, shape, and hardness may also play a role in the decision to split a tablet or not. Tablets containing drugs with a wide therapeutic index and long half-life might be more suitable candidates for division. Dose variation exceeded the proxy USP specification for more than one-third of sampled half tablets of bromazepam, carvedilol, bisoprolol, and digoxin. Drug content variation in half tablets appeared to be attributable to weight variation due to fragment or powder loss during the splitting process.
Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging
Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin
2018-01-01
Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling scheme may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated: the sampling starting point, sampling sparsity, and sampling uniformity. In investigating the influence of the sampling starting point, we further distinguish two cases according to whether the timing sequence missing between probe injection and the sampling starting time is taken into account. Results show that the mean value of BP exhibits an obvious growth trend with increasing delay of the sampling starting point and has a strong correlation with the sampling sparsity. The growth trend is much more pronounced if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity and independent of the sampling uniformity and the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results with simpler operation. PMID:29675325
Total-Evidence Dating under the Fossilized Birth–Death Process
Zhang, Chi; Stadler, Tanja; Klopfstein, Seraina; Heath, Tracy A.; Ronquist, Fredrik
2016-01-01
Bayesian total-evidence dating involves the simultaneous analysis of morphological data from the fossil record and morphological and sequence data from recent organisms, and it accommodates the uncertainty in the placement of fossils while dating the phylogenetic tree. Due to the flexibility of the Bayesian approach, total-evidence dating can also incorporate additional sources of information. Here, we take advantage of this and expand the analysis to include information about fossilization and sampling processes. Our work is based on the recently described fossilized birth–death (FBD) process, which has been used to model speciation, extinction, and fossilization rates that can vary over time in a piecewise manner. So far, sampling of extant and fossil taxa has been assumed to be either complete or uniformly at random, an assumption which is only valid for a minority of data sets. We therefore extend the FBD process to accommodate diversified sampling of extant taxa, which is standard practice in studies of higher-level taxa. We verify the implementation using simulations and apply it to the early radiation of Hymenoptera (wasps, ants, and bees). Previous total-evidence dating analyses of this data set were based on a simple uniform tree prior and dated the initial radiation of extant Hymenoptera to the late Carboniferous (309 Ma). The analyses using the FBD prior under diversified sampling, however, date the radiation to the Triassic and Permian (252 Ma), slightly older than the age of the oldest hymenopteran fossils. By exploring a variety of FBD model assumptions, we show that it is mainly the accommodation of diversified sampling that causes the push toward more recent divergence times. Accounting for diversified sampling thus has the potential to close the long-discussed gap between rocks and clocks. We conclude that the explicit modeling of fossilization and sampling processes can improve divergence time estimates, but only if all important model aspects, including sampling biases, are adequately addressed. PMID:26493827
Radar Doppler Processing with Nonuniform Sampling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin W.
2017-07-01
Conventional signal processing to estimate radar Doppler frequency often assumes uniform pulse/sample spacing, for the convenience of the processing. More recent enhancements in processor capability allow optimal processing of nonuniform pulse/sample spacing, thereby overcoming some of the baggage that attends uniform sampling, such as Doppler ambiguity and SNR losses due to sidelobe control measures.
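One straightforward way to exploit nonuniform pulse spacing is to evaluate a Doppler filter bank directly on the actual pulse times (a nonuniform DFT) rather than assuming a fixed PRI; with jittered times, the usual ±PRF/2 ambiguity washes out. A hedged Python sketch with made-up parameters:

    import numpy as np

    rng = np.random.default_rng(3)

    # Nonuniformly spaced pulse times: jittered around a 1 ms nominal PRI.
    n, pri = 128, 1e-3
    t = np.arange(n) * pri + rng.uniform(-0.3 * pri, 0.3 * pri, n)

    f_true = 180.0                             # Doppler frequency, Hz
    x = np.exp(2j * np.pi * f_true * t)        # slow-time target returns
    x += 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))  # noise

    # Filter bank evaluated on the true (nonuniform) sample times. Note the
    # search grid extends well beyond the nominal +/- 500 Hz unambiguous
    # band of a uniform 1 kHz PRF.
    f_grid = np.linspace(-2000.0, 2000.0, 4001)
    spectrum = np.abs(np.exp(-2j * np.pi * np.outer(f_grid, t)) @ x)
    print("estimated Doppler:", f_grid[np.argmax(spectrum)], "Hz")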
An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals
ERIC Educational Resources Information Center
Verhelst, Norman D.
2008-01-01
Uniform sampling of binary matrices with fixed margins is known to be a difficult problem. Two classes of algorithms to sample from a distribution not too different from the uniform are studied in the literature: importance sampling and Markov chain Monte Carlo (MCMC). Existing MCMC algorithms converge slowly, require a long burn-in period and yield…
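The classic MCMC move for this problem swaps 2x2 "checkerboard" submatrices, which preserves every row and column sum. The Python sketch below shows this naive chain (not the specific algorithm of the cited paper):

    import numpy as np

    def checkerboard_mcmc(mat, steps, rng):
        """Random walk on binary matrices with fixed margins. Pick two rows
        and two columns; if the 2x2 submatrix is [[1,0],[0,1]] or
        [[0,1],[1,0]], flip it to the other pattern. Both patterns have the
        same row and column sums, so all margins are preserved."""
        m = mat.copy()
        n_rows, n_cols = m.shape
        for _ in range(steps):
            r = rng.choice(n_rows, size=2, replace=False)
            c = rng.choice(n_cols, size=2, replace=False)
            sub = m[np.ix_(r, c)]
            if (sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0]
                    and sub[0, 0] != sub[0, 1]):
                m[np.ix_(r, c)] = 1 - sub     # swap the checkerboard
        return m

    rng = np.random.default_rng(4)
    start = np.array([[1, 1, 0],
                      [1, 0, 1],
                      [0, 1, 1]])
    sample = checkerboard_mcmc(start, steps=1000, rng=rng)
    print(sample)
    print("row sums:", sample.sum(axis=1), "col sums:", sample.sum(axis=0))

This naive chain mixes slowly and needs a long burn-in, which is precisely the weakness the abstract points to.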
Clasen, Thomas; Garcia Parra, Gloria; Boisson, Sophie; Collin, Simon
2005-10-01
Household water treatment is increasingly recognized as an effective means of reducing the burden of diarrheal disease among low-income populations without access to safe water. Oxfam GB undertook a pilot project to explore the use of household-based ceramic water filters in three remote communities in Colombia. In a randomized, controlled trial over a period of six months, the filters were associated with a 75.3% reduction in arithmetic mean thermotolerant coliforms (TTCs) (P < 0.0001). A total of 47.7% and 24.2% of the samples from the intervention group had no detectible TTCs/100 mL or conformed to World Health Organization limits for low risk (1-10 TTCs/100 mL), respectively, compared with 0.9% and 7.3% for control group samples. Overall, prevalence of diarrhea was 60% less among households using filters than among control households (odds ratio = 0.40, 95% confidence interval = 0.25, 0.63, P < 0.0001). However, the microbiologic performance and protective effect of the filters was not uniform throughout the study communities, suggesting the need to consider the circumstances of the particular setting before implementing this intervention.
Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon
2016-05-01
The purpose of this work was to develop a 3D radial-sampling strategy that maintains uniform k-space sample density after retrospective respiratory gating, and to demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across k-space. Copyright © 2016 John Wiley & Sons, Ltd.
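The interleaving idea above can be sketched directly: take a near-uniform single-shot ordering of projection views and assign view i to interleaf i mod S for segmentation factor S, so that any subset of whole interleaves still covers k-space nearly uniformly. The view ordering below (a golden-angle generalized spiral) and all parameter values are assumptions, not the paper's exact design.

    import numpy as np

    def radial_views(n):
        """Single-shot 3D radial directions via a generalized spiral:
        golden-angle azimuth increments, uniform steps in cos(theta)."""
        i = np.arange(n) + 0.5
        phi = np.pi * (1.0 + 5.0 ** 0.5) * i   # golden-angle increment
        z = 1.0 - 2.0 * i / n                  # uniform in cos(theta)
        r = np.sqrt(1.0 - z ** 2)
        return np.c_[r * np.cos(phi), r * np.sin(phi), z]

    def interleave(n_views, seg_factor):
        """Split the single-shot ordering into seg_factor interleaves;
        view i goes to interleaf i mod seg_factor."""
        order = np.arange(n_views)
        return [order[k::seg_factor] for k in range(seg_factor)]

    views = radial_views(12000)
    leaves = interleave(12000, seg_factor=8)
    # Retrospective gating then keeps some subset of the data; because each
    # interleaf is itself near-uniform, the retained density stays uniform.
    accepted = np.concatenate([leaves[k] for k in (0, 2, 3, 5, 7)])
    print(len(accepted), "views retained out of", len(views))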
The effect of uniform color on judging athletes' aggressiveness, fairness, and chance of winning.
Krenn, Bjoern
2015-04-01
In the current study we questioned the impact of uniform color in boxing, taekwondo and wrestling. On 18 photos showing two athletes competing, the hue of each uniform was modified to blue, green or red. For each photo, six color conditions were generated (blue-red, blue-green, green-red and vice versa). In three experiments these 108 photos were randomly presented. Participants (N = 210) had to select the athlete that seemed to be more aggressive, fairer or more likely to win the fight. Results revealed that athletes wearing red in boxing and wrestling were judged more aggressive and more likely to win than athletes wearing blue or green uniforms. In addition, athletes wearing green were judged fairer in boxing and wrestling than athletes wearing red. In taekwondo we did not find any significant impact of uniform color. Results suggest that uniform color in combat sports carries specific meanings that affect others' judgments.
NASA Astrophysics Data System (ADS)
Sun, Jian F.; Liu, Xuan; Guo, Zhi R.; Dong, Jian; Huang, Yawen; Zhang, Jie; Jin, Hui; Gu, Ning
2017-02-01
Due to the intrinsic lack of specific biomarkers, there is an increasing demand in degenerative diseases for testing methods that do not depend on target biomolecules. In this paper, we propose a novel approach to this issue: analyzing the characteristic information of metabolites with Raman spectra. First, we achieved the fabrication of a stable, uniform and reproducible substrate to enhance the Raman signals, which is crucial to the subsequent analysis. The idea was confirmed in osteoporosis-model mice, and testing results with clinical samples also preliminarily demonstrated the feasibility of the strategy. The substrate to enhance the Raman signal was fabricated by layer-by-layer assembly of Au nanoparticles. Osteoporosis was modeled by bilateral ovariectomy. Ten female mice were randomly divided into two groups, and urine and dejecta samples were collected every week. Clinical urine samples were collected from patients with osteoporosis, while control samples were from young students at our university. The LBL-assembled substrate of Au nanoparticles was uniform, stable and reproducible in significantly enhancing the Raman signals from tiny amounts of sample. With a simple data processing technique, the Raman signal-based method effectively reflected the development of osteoporosis, as verified by comparison with micro-CT characterization. Moreover, the Raman signal from samples of clinical patients also showed an obvious difference from that of the controls. Raman spectra may thus be a good tool to convey the pathological information of metabolites at the molecular level. Our results indicate that such information-based testing is feasible and promising. The strategy utilizes characteristic information rather than biological recognition to test diseases for which specific biomarkers are difficult to find, which should benefit the prevention and diagnosis of degenerative diseases. We also believe the combination of big bio-data and characteristic recognition will fundamentally change the current paradigm of medical diagnosis.
Hancock, Bruno C; Ketterhagen, William R
2011-10-14
Discrete element model (DEM) simulations of the discharge of powders from hoppers under gravity were analyzed to provide estimates of dosage form content uniformity during the manufacture of solid dosage forms (tablets and capsules). For a system that exhibits moderate segregation, the effects of sample size, number, and location within the batch were determined. The various sampling approaches were compared to current best practices for sampling described in the Product Quality Research Institute (PQRI) Blend Uniformity Working Group (BUWG) guidelines. Sampling uniformly across the discharge process gave the most accurate results with respect to identifying segregation trends. Sigmoidal sampling (as recommended in the PQRI BUWG guidelines) tended to overestimate potential segregation issues, whereas truncated sampling (common in industrial practice) tended to underestimate them. The size of the sample had a major effect on the absolute potency RSD. The number of sampling locations (10 vs. 20) had very little effect on the trends in the data, and the number of samples analyzed at each location (1 vs. 3 vs. 7) had only a small effect for the sampling conditions examined. The results of this work provide greater understanding of the effect of different sampling approaches on the measured content uniformity of real dosage forms, and can help guide the choice of appropriate sampling protocols. Copyright © 2011 Elsevier B.V. All rights reserved.
Marien, Koen M.; Andries, Luc; De Schepper, Stefanie; Kockx, Mark M.; De Meyer, Guido R.Y.
2015-01-01
Tumor angiogenesis is measured by counting microvessels in tissue sections at high power magnification as a potential prognostic or predictive biomarker. Until now, regions of interest (ROIs) were selected by manual operations within a tumor using a systematic uniform random sampling (SURS) approach. Although SURS is the most reliable sampling method, it implies a high workload. However, SURS can be semi-automated and in this way contribute to the development of a validated quantification method for microvessel counting in the clinical setting. Here, we report a method to use semi-automated SURS for microvessel counting: • Whole slide imaging with Pannoramic SCAN (3DHISTECH) • Computer-assisted sampling in Pannoramic Viewer (3DHISTECH) extended by two self-written AutoHotkey applications (AutoTag and AutoSnap) • The use of digital grids in Photoshop® and Bridge® (Adobe Systems) This rapid procedure allows the traceability essential for high-throughput protein analysis of immunohistochemically stained tissue. PMID:26150998
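SURS itself is simple to state: draw one uniformly random offset, then step across the region at a fixed interval, so every location has equal inclusion probability while coverage stays even. A minimal Python sketch of ROI placement; the region size and grid spacing are arbitrary illustrative values.

    import numpy as np

    def surs_positions(width, height, step, rng):
        """Systematic uniform random sampling over a width x height region:
        one uniformly random offset in [0, step) per axis, then a regular
        grid at that offset."""
        x0 = rng.uniform(0.0, step)
        y0 = rng.uniform(0.0, step)
        xs = np.arange(x0, width, step)
        ys = np.arange(y0, height, step)
        return [(x, y) for x in xs for y in ys]

    rng = np.random.default_rng(5)
    # Hypothetical 20 mm x 15 mm section sampled every 2.5 mm (in microns).
    rois = surs_positions(width=20000, height=15000, step=2500, rng=rng)
    print(len(rois), "ROIs; first three:", rois[:3])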
Effect of particle size distribution on permeability in the randomly packed porous media
NASA Astrophysics Data System (ADS)
Markicevic, Bojan
2017-11-01
The question of how porous-medium heterogeneity influences permeability is still unsettled, with both increases and decreases in the permeability value reported. A numerical procedure is used to generate a randomly packed porous material consisting of spherical particles. Six different particle size distributions are used, including mono-, bi- and three-disperse particles, as well as uniform, normal and log-normal particle size distributions, with the maximum-to-minimum particle size ratio ranging from three to eight across distributions. In all six cases, the average particle size is kept the same. For all media generated, stochastic homogeneity is checked from the distribution of the three coordinates of the particle centers, where uniform distributions of the x-, y- and z-positions are found. The medium surface area remains essentially constant except for the bi-modal distribution, for which the medium area decreases, while no changes in the porosity are observed (around 0.36). The fluid flow is solved in this domain, and after checking for axial linearity of the pressure, the permeability is calculated from the Darcy law. The permeability comparison reveals that the permeability of the mono-disperse medium is smallest, and the permeability of all poly-disperse samples is less than ten percent higher. For bi-modal particles, the permeability is about a quarter higher than for the other media, which can be explained by the volumetric contribution of larger particles and the larger passages available for fluid flow.
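Holding the average particle size fixed while varying the distribution shape, as the study does, can be arranged by rescaling each set of drawn diameters to the target mean. A small Python sketch with assumed distribution parameters:

    import numpy as np

    rng = np.random.default_rng(6)
    target_mean = 1.0   # common average diameter for all distributions

    def rescaled(draws):
        """Rescale draws so every distribution has exactly the target mean."""
        return draws * target_mean / draws.mean()

    n = 10000
    mono      = np.full(n, target_mean)
    uniform   = rescaled(rng.uniform(1.0, 3.0, n))     # size ratio 3
    lognormal = rescaled(rng.lognormal(0.0, 0.4, n))
    bimodal   = rescaled(rng.choice([1.0, 3.0], n))    # bi-disperse

    for name, d in [("mono", mono), ("uniform", uniform),
                    ("lognormal", lognormal), ("bimodal", bimodal)]:
        print(f"{name:9s} mean = {d.mean():.3f}  spread = {d.std():.3f}")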
The Impact of Hospital Size on CMS Hospital Profiling.
Sosunov, Eugene A; Egorova, Natalia N; Lin, Hung-Mo; McCardle, Ken; Sharma, Vansh; Gelijns, Annetine C; Moskowitz, Alan J
2016-04-01
The Centers for Medicare & Medicaid Services (CMS) profile hospitals using a set of 30-day risk-standardized mortality and readmission rates as a basis for public reporting. These measures are affected by hospital patient volume, raising concerns about uniformity of standards applied to providers with different volumes. To quantitatively determine whether CMS uniformly profile hospitals that have equal performance levels but different volumes. Retrospective analysis of patient-level and hospital-level data using hierarchical logistic regression models with hospital random effects. Simulation of samples including a subset of hospitals with different volumes but equal poor performance (hospital effects=+3 SD in random-effect logistic model). A total of 1,085,568 Medicare fee-for-service patients undergoing 1,494,993 heart failure admissions in 4930 hospitals between July 1, 2005 and June 30, 2008. CMS methodology was used to determine the rank and proportion (by volume) of hospitals reported to perform "Worse than US National Rate." Percent of hospitals performing "Worse than US National Rate" was ∼40 times higher in the largest (fifth quintile by volume) compared with the smallest hospitals (first quintile). A similar gradient was seen in a cohort of 100 hospitals with simulated equal poor performance (0%, 0%, 5%, 20%, and 85% in quintiles 1 to 5) effectively leaving 78% of poor performers undetected. Our results illustrate the disparity of impact that the current CMS method of hospital profiling has on hospitals with higher volumes, translating into lower thresholds for detection and reporting of poor performance.
The uniformity and imaging properties of some new ceramic scintillators
NASA Astrophysics Data System (ADS)
Chac, George T. L.; Miller, Brian W.; Shah, Kanai; Baldoni, Gary; Domanik, Kenneth J.; Bora, Vaibhav; Cherepy, Nerine J.; Seeley, Zachary; Barber, H. Bradford
2012-10-01
Results are presented of investigations into the composition, uniformity and gamma-ray imaging performance of new ceramic scintillators with synthetic garnet structure. The ceramic scintillators were produced by a process that uses flame pyrolysis to make nanoparticles, which are sintered into a ceramic and then compacted by hot isostatic compression into a transparent material. There is concern that the resulting ceramic scintillator might not have the uniformity of composition necessary for use in gamma-ray spectroscopy and gamma-ray imaging. The compositional uniformity of four samples of three ceramic scintillator types (GYGAG:Ce, GLuGAG:Ce and LuAG:Pr) was tested using an electron microprobe. All samples were found to be uniform in elemental composition to the limit of sensitivity of the microprobe (a few tenths of a percent atomic) over distance scales from ~1 cm to ~1 μm. The light yield and energy resolution of all ceramic scintillator samples were mapped with a highly collimated 57Co source (122 keV), and performance was uniform at a mapping scale of 0.25 mm. Good imaging performance with single gamma-ray photon detection was demonstrated for all samples using a BazookaSPECT system, and the imaging spatial resolution, measured as the FWHM of the line spread function (LSF), was 150 μm.
Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures
NASA Astrophysics Data System (ADS)
Dettmann, Carl P.
2018-05-01
Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as smooth, with some technical differences in evaluation of the integrals and analytical arguments.
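The isolation-connectivity link above can be probed numerically. A small Monte Carlo sketch in Python for the standard uniform-on-a-square case (the paper's self-similar intensity measures would replace the point generator; the link-range formula is the familiar asymptotic, used here only to pick interesting values):

    import numpy as np

    def prob_no_isolated(n, r, trials, rng):
        """Monte Carlo estimate of P(no isolated node) for a random
        geometric graph: n uniform points on the unit square, link range r."""
        hits = 0
        for _ in range(trials):
            pts = rng.random((n, 2))
            d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
            np.fill_diagonal(d2, np.inf)          # ignore self-distances
            degrees = (d2 <= r * r).sum(axis=1)
            hits += np.all(degrees > 0)
        return hits / trials

    rng = np.random.default_rng(7)
    n = 200
    # The connectivity transition sits near r* = sqrt(log(n) / (pi * n)).
    r_star = np.sqrt(np.log(n) / (np.pi * n))
    for r in (0.8 * r_star, r_star, 1.2 * r_star):
        p = prob_no_isolated(n, r, trials=200, rng=rng)
        print(f"r = {r:.3f}  P(no isolated node) ~ {p:.2f}")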
Optimization of sampling pattern and the design of Fourier ptychographic illuminator.
Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan
2015-03-09
Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to three major problems for FP: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shorten acquisition time and improve the quality of FP reconstructions. They may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
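A sketch of the kind of non-uniform Fourier-space sampling described, with sample density decaying away from the DC region where most biological signal energy sits. The radial density law and parameter values are assumptions for illustration, not the published illuminator design.

    import numpy as np

    def nonuniform_led_pattern(n_leds, k_max, rng, power=2.0):
        """Place n_leds sample points in Fourier space with density that
        decays with radius: r = k_max * U**power. power = 0.5 would give
        uniform areal density; power > 0.5 concentrates samples at low
        spatial frequencies."""
        u = rng.random(n_leds)
        r = k_max * u ** power
        theta = 2.0 * np.pi * rng.random(n_leds)
        return np.c_[r * np.cos(theta), r * np.sin(theta)]

    rng = np.random.default_rng(8)
    kxy = nonuniform_led_pattern(68, k_max=1.0, rng=rng)  # 68 as in the paper
    inner = (np.hypot(kxy[:, 0], kxy[:, 1]) < 0.5).mean()
    print(f"{inner:.0%} of samples fall inside half the cutoff radius")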
Uniform batch processing using microwaves
NASA Technical Reports Server (NTRS)
Barmatz, Martin B. (Inventor); Jackson, Henry W. (Inventor)
2000-01-01
A microwave oven and microwave heating method generates microwaves within a cavity in a predetermined mode such that there is a known region of uniform microwave field. Samples placed in the region will then be heated in a relatively identical manner. Where perturbations induced by the samples are significant, samples are arranged in a symmetrical distribution so that the cumulative perturbation at each sample location is the same.
A k-Vector Approach to Sampling, Interpolation, and Approximation
NASA Astrophysics Data System (ADS)
Mortari, Daniele; Rogers, Jonathan
2013-12-01
The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
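A compact Python sketch of the k-vector idea for the range-searching case: build the index once against a line from the minimum to the maximum value, then answer range queries with arithmetic and table lookups rather than per-query binary searches. This is a simplified rendition under stated assumptions, not the published algorithm verbatim.

    import numpy as np

    class KVector:
        """Precomputed range-search index: sort the data once, then store,
        for n+1 evenly spaced levels between min and max, how many sorted
        elements lie at or below each level."""

        def __init__(self, data):
            self.s = np.sort(np.asarray(data, dtype=float))
            self.n = len(self.s)
            self.lo, self.hi = self.s[0], self.s[-1]
            levels = np.linspace(self.lo, self.hi, self.n + 1)
            self.k = np.searchsorted(self.s, levels, side="right")

        def range_search(self, a, b):
            """Return the sorted elements lying in [a, b]."""
            scale = self.n / (self.hi - self.lo)
            ia = max(int(np.floor((a - self.lo) * scale)) - 1, 0)   # pad down
            ib = min(int(np.ceil((b - self.lo) * scale)) + 1, self.n)  # pad up
            start = 0 if ia == 0 else self.k[ia]
            cand = self.s[start:self.k[ib]]           # slightly padded slice
            return cand[(cand >= a) & (cand <= b)]    # exact trim at the edges

    rng = np.random.default_rng(9)
    kv = KVector(rng.random(100000))
    print(len(kv.range_search(0.25, 0.26)), "elements found in [0.25, 0.26]")

The precomputation costs one O(n log n) sort; each query then costs O(1) index arithmetic plus the size of the retrieved slice, which matches the claim of search cost independent of database size.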
Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T^2 statistic. Using this statistic, one can always find a minimum sample size that achieves a high probability of reducing the standard error and confidence-interval width for the adjusted mean difference. ©2010 The British Psychological Society.
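The inflation described above is visible in the standard two-group ANCOVA expression for the variance of the adjusted mean difference; the Python sketch below evaluates it with made-up numbers (the formula is textbook ANCOVA algebra, not the paper's notation).

    import numpy as np

    def adjusted_se(sigma2, n1, n2, xbar1, xbar2, sxx):
        """SE of the covariate-adjusted mean difference in two-group ANCOVA.

        sigma2: residual (error) variance after adjustment
        sxx:    pooled within-group sum of squares of the covariate
        The (xbar1 - xbar2)**2 / sxx term is the inflation caused by the
        covariate mean difference between the conditions."""
        return np.sqrt(sigma2 * (1 / n1 + 1 / n2 + (xbar1 - xbar2) ** 2 / sxx))

    # Illustrative numbers: adjustment halves the error variance (100 -> 50),
    # yet a covariate imbalance can still inflate the adjusted SE beyond
    # the unadjusted one.
    unadjusted = np.sqrt(100 * (1 / 20 + 1 / 20))
    balanced   = adjusted_se(50, 20, 20, 0.0, 0.0, sxx=400)
    imbalanced = adjusted_se(50, 20, 20, 5.0, -5.0, sxx=400)
    print(f"unadjusted {unadjusted:.2f}, balanced {balanced:.2f}, "
          f"imbalanced {imbalanced:.2f}")   # 3.16, 2.24, 4.18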
Security of practical private randomness generation
NASA Astrophysics Data System (ADS)
Pironio, Stefano; Massar, Serge
2013-01-01
Measurements on entangled quantum systems necessarily yield outcomes that are intrinsically unpredictable if they violate a Bell inequality. This property can be used to generate certified randomness in a device-independent way, i.e., without making detailed assumptions about the internal working of the quantum devices used to generate the random numbers. Furthermore these numbers are also private; i.e., they appear random not only to the user but also to any adversary that might possess a perfect description of the devices. Since this process requires a small initial random seed to sample the behavior of the quantum devices and to extract uniform randomness from the raw outputs of the devices, one usually speaks of device-independent randomness expansion. The purpose of this paper is twofold. First, we point out that in most real, practical situations, where the concept of device independence is used as a protection against unintentional flaws or failures of the quantum apparatuses, it is sufficient to show that the generated string is random with respect to an adversary that holds only classical side information; i.e., proving randomness against quantum side information is not necessary. Furthermore, the initial random seed does not need to be private with respect to the adversary, provided that it is generated in a way that is independent from the measured systems. The devices, however, will generate cryptographically secure randomness that cannot be predicted by the adversary, and thus one can, given access to free public randomness, talk about private randomness generation. The theoretical tools to quantify the generated randomness according to these criteria were already introduced in S. Pironio et al. [Nature (London) 464, 1021 (2010); doi:10.1038/nature09008], but the final results were improperly formulated. The second aim of this paper is to correct this inaccurate formulation and therefore lay out a precise theoretical framework for practical device-independent randomness generation.
A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data
Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming
2018-01-01
The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size but without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks. PMID:29706880
Crack surface roughness in three-dimensional random fuse networks
NASA Astrophysics Data System (ADS)
Nukala, Phani Kumar V. V.; Zapperi, Stefano; Šimunović, Srđan
2006-08-01
Using large system sizes with extensive statistical sampling, we analyze the scaling properties of crack roughness and damage profiles in the three-dimensional random fuse model. The analysis of damage profiles indicates that damage accumulates in a diffusive manner up to the peak load, and localization sets in abruptly at the peak load, starting from a uniform damage landscape. The global crack width scales as W ~ L^0.5 and is consistent with the scaling of the localization length ξ ~ L^0.5 used in the data collapse of damage profiles in the postpeak regime. This consistency between the global crack roughness exponent and the postpeak damage profile localization length supports the idea that the postpeak damage profile is predominantly due to the localization produced by the catastrophic failure, which at the same time results in the formation of the final crack. Finally, the crack width distributions can be collapsed for different system sizes and follow a log-normal distribution.
Antipersistent dynamics in kinetic models of wealth exchange
NASA Astrophysics Data System (ADS)
Goswami, Sanchari; Chatterjee, Arnab; Sen, Parongama
2011-11-01
We investigate the detailed dynamics of gains and losses made by agents in some kinetic models of wealth exchange. An earlier work suggested that a walk in an abstract gain-loss space can be conceived for the agents. For models in which agents do not save, or save with uniform saving propensity, the walk has diffusive behavior. For the case in which the saving propensity λ is distributed randomly (0≤λ<1), the resultant walk showed a ballistic nature (except at a particular value of λ*≈0.47). Here we consider several other features of the walk with random λ. While some macroscopic properties of this walk are comparable to a biased random walk, at microscopic level, there are gross differences. The difference turns out to be due to an antipersistent tendency toward making a gain (loss) immediately after making a loss (gain). This correlation is in fact present in kinetic models without saving or with uniform saving as well, such that the corresponding walks are not identical to ordinary random walks. In the distributed saving case, antipersistence occurs with a simultaneous overall bias.
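For concreteness, here is a minimal Python simulation of a distributed-saving kinetic exchange model of the kind discussed, tracking the sign of one agent's successive wealth changes; a reversal probability above 1/2 indicates the antipersistence described. The exchange rule is the standard CC-type form and the statistic is an illustrative simplification of the paper's analysis.

    import numpy as np

    rng = np.random.default_rng(10)
    N, steps = 200, 100000
    w = np.ones(N)                       # initial wealth, one unit each
    lam = rng.random(N)                  # quenched saving propensities in [0, 1)

    tracked, signs = 0, []
    for _ in range(steps):
        i, j = rng.choice(N, size=2, replace=False)
        eps = rng.random()
        pool = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]
        old = w[tracked]
        # Wealth-conserving exchange: each agent saves a fraction lam of its
        # own wealth and the rest is randomly redistributed between the two.
        w[i] = lam[i] * w[i] + eps * pool
        w[j] = lam[j] * w[j] + (1 - eps) * pool
        if tracked in (i, j):
            signs.append(np.sign(w[tracked] - old))

    s = np.array([x for x in signs if x != 0])
    flips = np.mean(s[1:] != s[:-1])     # gain followed by loss, or vice versa
    print(f"P(direction reversal) = {flips:.3f}  (> 0.5 means antipersistent)")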
NASA Astrophysics Data System (ADS)
Gatto, Riccardo
2017-12-01
This article considers the random walk over R^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous-time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived for dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed numbers of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
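A quick Monte Carlo companion in Python to the explicit p = 3 case mentioned, with isotropic directions, exponentially distributed step lengths, and a Poisson-distributed number of steps (the saddlepoint approximation itself is not reproduced here; all parameter values are illustrative).

    import numpy as np

    def walk_distances(n_walks, mean_steps, rng):
        """Distance from the origin after a Poisson(mean_steps) number of
        isotropic 3D steps with exponentially distributed lengths (rate 1)."""
        dist = np.empty(n_walks)
        for k in range(n_walks):
            n = rng.poisson(mean_steps)
            if n == 0:
                dist[k] = 0.0
                continue
            # Isotropic directions: uniform in cos(theta) and in azimuth.
            z = rng.uniform(-1.0, 1.0, n)
            phi = rng.uniform(0.0, 2.0 * np.pi, n)
            r = rng.exponential(1.0, n)
            s = np.sqrt(1.0 - z ** 2)
            steps = r[:, None] * np.c_[s * np.cos(phi), s * np.sin(phi), z]
            dist[k] = np.linalg.norm(steps.sum(axis=0))
        return dist

    rng = np.random.default_rng(11)
    d = walk_distances(20000, mean_steps=10, rng=rng)
    print(f"P(distance > 5) ~ {np.mean(d > 5):.3f}")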
Theory of Dielectric Breakdown in Randomly Inhomogeneous Materials
NASA Astrophysics Data System (ADS)
Gyure, Mark Franklin
1990-01-01
Two models of dielectric breakdown in disordered metal-insulator composites have been developed in an attempt to explain in detail the greatly reduced breakdown electric field observed in these materials. The first is a two-dimensional model in which the composite is treated as a random array of conducting cylinders embedded in an otherwise uniform dielectric background. The two-dimensional samples are generated by the Monte Carlo method, and a discretized version of the integral form of Laplace's equation is solved to determine the electric field in each sample. Breakdown is modeled as a quasi-static process in which one breakdown at a time occurs at the point of maximum electric field in the system. A cascade of these local breakdowns leads to complete dielectric failure of the system, after which the breakdown field can be determined. A second model is developed that is similar to the first in terms of breakdown dynamics, but uses coupled multipole expansions of the electrostatic potential centered at each particle to obtain a more computationally accurate and faster solution to the problem of determining the electric field at an arbitrary point in a random medium. This new algorithm allows extension of the model to three dimensions and treats conducting spherical inclusions as well as cylinders. Successful implementation of the algorithm relies on the use of analytical forms for off-centered expansions of cylindrical and spherical harmonics. Scaling arguments similar to those used in theories of phase transitions are developed for the breakdown field, and these arguments are discussed in the context of other theories developed to explain the breakdown behavior of random resistor and fuse networks. Finally, one of the scaling arguments is used to predict the breakdown field for samples of solid-fuel rocket propellant tested at the China Lake Naval Weapons Center and is found to compare quite well with the experimentally measured breakdown fields.
Chen, Ching-Hwa; Tsaia, Perng-Jy; Lai, Chane-Yu; Peng, Ya-Lian; Soo, Jhy-Charm; Chen, Cheng-Yao; Shih, Tung-Sheng
2010-04-15
In this study, field samplings were conducted in three workplaces of a foundry plant: molding, demolding, and bead blasting. Three respirable aerosol samplers (a 25-mm aluminum cyclone, a nylon cyclone, and an IOSH cyclone) were used side-by-side to collect samples from each selected workplace. For each collected sample, the uniformity of the deposition of respirable dust on the filter was measured, and its free silica content was determined by both the DOF XRD method and the NIOSH 7500 XRD method (i.e., the reference method). The same trend in measured uniformities was found in all selected workplaces: 25-mm aluminum cyclone > nylon cyclone > IOSH cyclone. Even for samples collected by the sampler with the highest uniformity (i.e., the 25-mm aluminum cyclone), the use of the DOF XRD method led to measured free silica concentrations 1.15-2.89 times higher than those of the reference method. A new filter holder should be developed in the future with a minimum uniformity comparable to that of the NIOSH 7500 XRD method (=0.78). The use of conversion factors, based on the measured uniformities, for correcting quartz concentrations obtained from the DOF XRD method could be suitable for the foundry industry at this stage. Copyright © 2009 Elsevier B.V. All rights reserved.
Methods of obtaining a uniform volume concentration of implanted ions
NASA Astrophysics Data System (ADS)
Reutov, V. F.
1998-05-01
Three simple, practical methods of irradiation with high-energy particles (>5 MeV/n) that provide a uniform volume concentration of implanted ions in bulk samples are described in the present paper. The first method is based on two-sided irradiation of a flat sample as it rotates in the projectile flux. The second method of uniform ion alloying uses open air as a filter of varying absorbing power, obtained by moving the irradiated sample along an ion beam extracted into the atmosphere. The third method of obtaining a uniform volume concentration of implanted ions in a bulk sample consists of irradiating the sample through an absorbing filter shaped as a parabolically curved foil that moves along the sample surface. The first method is most effective for producing large numbers of samples (e.g., for mechanical tests); the second, for irradiation in different gaseous media; and the third, for obtaining high concentrations of implanted ions under controlled (regulated) thermal and deformation conditions.
NASA Astrophysics Data System (ADS)
Mett, Richard R.; Froncisz, Wojciech; Hyde, James S.
2001-11-01
This article is concerned with cylindrical transverse electric TE011 and rectangular TE102 microwave cavity resonators commonly used in electron paramagnetic resonance (EPR) spectroscopy. In the cylindrical mode geometry considered here, the sample is along the z axis of the cylinder, dielectric disks of 1/4 wavelength thickness are placed at each end wall, and the diameter of the cylinder is set at the cutoff condition for propagation of microwave energy in a cylindrical waveguide at the desired microwave frequency. The microwave magnetic field is exactly uniform along the sample in the region between the dielectric disks and the resonant frequency is independent of the length of the cylinder without limit. The rectangular TE102 geometry is analogous, but here the microwave magnetic field is exactly uniform in a plane. A uniform microwave field along a line sample is highly advantageous in EPR spectroscopy compared with the usual sinusoidal variation, and these geometries are called "uniform field" modes. Extensive theoretical analysis as well as finite element calculation of field patterns are presented. The perturbation of field patterns caused by sample insertion as functions of the overall length of the resonator and diameter of the sample is analyzed. The article is intended to provide a basis for design of practical structures in the range of 10 to 100 GHz.
Yashchuk, V. V.; Fischer, P. J.; Chan, E. R.; ...
2015-12-09
We present a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) one-dimensional sequences and two-dimensional arrays as an effective method for spectral characterization in the spatial frequency domain of a broad variety of metrology instrumentation, including interferometric microscopes, scatterometers, phase shifting Fizeau interferometers, scanning and transmission electron microscopes, and, at this time, x-ray microscopes. The inherent power spectral density of BPR gratings and arrays, which has a deterministic white-noise-like character, allows a direct determination of the MTF with a uniform sensitivity over the entire spatial frequency range and field of view of an instrument. We demonstrate the MTF calibration and resolution characterization over the full field of a transmission soft x-ray microscope using a BPR multilayer (ML) test sample with 2.8 nm fundamental layer thickness. We show that beyond providing a direct measurement of the microscope's MTF, tests with the BPRML sample can be used to fine tune the instrument's focal distance. Finally, our results confirm the universality of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.
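The white-noise-like spectrum that BPR sequences rely on can be illustrated with a maximal-length LFSR sequence. This is a minimal sketch, not the authors' test-sample design; the tap pair below corresponds to the primitive polynomial x^7 + x^6 + 1.

```python
# Sketch: generate a binary pseudo-random (maximal-length) sequence with a
# 7-bit LFSR and verify its nearly flat power spectral density.
import numpy as np

def lfsr_msequence(nbits=7, taps=(7, 6), seed=1):
    """Length 2**nbits - 1 m-sequence from a Fibonacci LFSR (taps are 1-based)."""
    state = [(seed >> i) & 1 for i in range(nbits)]
    out = []
    for _ in range(2**nbits - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out)

seq = 2.0 * lfsr_msequence() - 1.0        # map {0,1} -> {-1,+1}
psd = np.abs(np.fft.rfft(seq))**2 / len(seq)
print("PSD spread (excluding DC):", psd[1:].min(), psd[1:].max())  # nearly constant
```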
Averaging in SU(2) open quantum random walk
NASA Astrophysics Data System (ADS)
Clement, Ampadu
2014-03-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.
Dissolved oxygen as an indicator of bioavailable dissolved organic carbon in groundwater
Chapelle, Francis H.; Bradley, Paul M.; McMahon, Peter B.; Kaiser, Karl; Benner, Ron
2012-01-01
Concentrations of dissolved oxygen (DO) plotted vs. dissolved organic carbon (DOC) in groundwater samples taken from a coastal plain aquifer of South Carolina (SC) showed a statistically significant hyperbolic relationship. In contrast, DO-DOC plots of groundwater samples taken from the eastern San Joaquin Valley of California (CA) showed a random scatter. It was hypothesized that differences in the bioavailability of naturally occurring DOC might contribute to these observations. This hypothesis was examined by comparing nine different biochemical indicators of DOC bioavailability in groundwater sampled from these two systems. Concentrations of DOC, total hydrolysable neutral sugars (THNS), total hydrolysable amino acids (THAA), mole% glycine of THAA, initial bacterial cell counts, bacterial growth rates, and carbon dioxide production/consumption were greater in SC samples relative to CA samples. In contrast, the mole% glucose of THNS and the aromaticity (SUVA254) of DOC were greater in CA samples. Each of these indicator parameters was observed to change with depth in the SC system in a manner consistent with active biodegradation. These results are uniformly consistent with the hypothesis that the bioavailability of DOC is greater in SC relative to CA groundwater samples. This, in turn, suggests that the presence/absence of a hyperbolic DO-DOC relationship may be a qualitative indicator of relative DOC bioavailability in groundwater systems.
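A hyperbolic DO-DOC relationship of the kind reported for the SC aquifer can be tested with a simple nonlinear fit. The sketch below assumes the form DO = a/(DOC + b) and uses hypothetical concentration arrays, not the study's data.

```python
# Sketch: test for a hyperbolic DO-DOC relationship by fitting
# DO = a / (DOC + b) with nonlinear least squares. The arrays below are
# hypothetical stand-ins for measured groundwater concentrations (mg/L).
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(doc, a, b):
    return a / (doc + b)

doc = np.array([0.2, 0.4, 0.7, 1.0, 1.5, 2.5, 4.0, 6.0])
do = np.array([7.8, 6.1, 4.4, 3.5, 2.6, 1.7, 1.1, 0.8])

params, cov = curve_fit(hyperbola, doc, do, p0=(5.0, 0.5))
resid = do - hyperbola(doc, *params)
r2 = 1.0 - resid.var() / do.var()
print(f"a = {params[0]:.2f}, b = {params[1]:.2f}, R^2 = {r2:.3f}")
```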
A Science and Risk-Based Pragmatic Methodology for Blend and Content Uniformity Assessment.
Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Doshi, Chetan
2018-04-01
This paper describes a pragmatic approach that can be applied in assessing powder blend and unit dosage uniformity of solid dose products at Process Design, Process Performance Qualification, and Continued/Ongoing Process Verification stages of the Process Validation lifecycle. The statistically based sampling, testing, and assessment plan was developed due to the withdrawal of the FDA draft guidance for industry "Powder Blends and Finished Dosage Units-Stratified In-Process Dosage Unit Sampling and Assessment." This paper compares the proposed Grouped Area Variance Estimate (GAVE) method with an alternate approach outlining the practicality and statistical rationalization using traditional sampling and analytical methods. The approach is designed to fit solid dose processes assuring high statistical confidence in both powder blend uniformity and dosage unit uniformity during all three stages of the lifecycle complying with ASTM standards as recommended by the US FDA.
Webster, Linda; Eisenberg, Anna; Bohnert, Amy S B; Kleinberg, Felicia; Ilgen, Mark A
2012-01-01
The objective of this study was to examine risk assessment practices for suicide and unintentional overdose to inform ongoing care in substance use disorder clinics. Focus groups were conducted via telephone among a random sample of treatment providers (N = 19) from Veterans Health Administration substance use disorder clinics across the nation. Themes were coded by research staff. Treatment providers reported consistent and clear guidelines for risk assessment of suicide among patients. Unintentional overdose questions elicited dissimilar responses which indicated a lack of cohesion and uniformity in risk assessment practices across clinics. Suicide risk assessment protocols are cohesively implemented by treatment providers. Unintentional overdose risk, however, may be less consistently assessed in clinics.
Modelling heat conduction in polycrystalline hexagonal boron-nitride films
Mortazavi, Bohayra; Pereira, Luiz Felipe C.; Jiang, Jin-Wu; Rabczuk, Timon
2015-01-01
We conducted extensive molecular dynamics simulations to investigate the thermal conductivity of polycrystalline hexagonal boron-nitride (h-BN) films. To this aim, we constructed large atomistic models of polycrystalline h-BN sheets with random and uniform grain configuration. By performing equilibrium molecular dynamics (EMD) simulations, we investigated the influence of the average grain size on the thermal conductivity of polycrystalline h-BN films at various temperatures. Using the EMD results, we constructed finite element models of polycrystalline h-BN sheets to probe the thermal conductivity of samples with larger grain sizes. Our multiscale investigations not only provide a general viewpoint regarding the heat conduction in h-BN films but also propose that polycrystalline h-BN sheets present high thermal conductivity comparable to monocrystalline sheets. PMID:26286820
Adapting radiotherapy to hypoxic tumours
NASA Astrophysics Data System (ADS)
Malinen, Eirik; Søvik, Åste; Hristov, Dimitre; Bruland, Øyvind S.; Rune Olsen, Dag
2006-10-01
In the current work, the concepts of biologically adapted radiotherapy of hypoxic tumours in a framework encompassing functional tumour imaging, tumour control predictions, inverse treatment planning and intensity modulated radiotherapy (IMRT) were presented. Dynamic contrast enhanced magnetic resonance imaging (DCEMRI) of a spontaneous sarcoma in the nasal region of a dog was employed. The tracer concentration in the tumour was assumed related to the oxygen tension and compared to Eppendorf histograph measurements. Based on the pO2-related images derived from the MR analysis, the tumour was divided into four compartments by a segmentation procedure. DICOM structure sets for IMRT planning could be derived thereof. In order to display the possible advantages of non-uniform tumour doses, dose redistribution among the four tumour compartments was introduced. The dose redistribution was constrained by keeping the average dose to the tumour equal to a conventional target dose. The compartmental doses yielding optimum tumour control probability (TCP) were used as input in an inverse planning system, where the planning basis was the pO2-related tumour images from the MR analysis. Uniform (conventional) and non-uniform IMRT plans were scored both physically and biologically. The consequences of random and systematic errors in the compartmental images were evaluated. The normalized frequency distributions of the tracer concentration and the pO2 Eppendorf measurements were not significantly different. 28% of the tumour had, according to the MR analysis, pO2 values of less than 5 mm Hg. The optimum TCP following a non-uniform dose prescription was about four times higher than that following a uniform dose prescription. The non-uniform IMRT dose distribution resulting from the inverse planning gave a three times higher TCP than that of the uniform distribution. The TCP and the dose-based plan quality depended on IMRT parameters defined in the inverse planning procedure (fields and step-and-shoot intensity levels). Simulated random and systematic errors in the pO2-related images reduced the TCP for the non-uniform dose prescription. In conclusion, improved tumour control of hypoxic tumours by dose redistribution may be expected following hypoxia imaging, tumour control predictions, inverse treatment planning and IMRT.
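The Poisson tumour control probability calculation under dose redistribution can be sketched as follows. The LQ parameters, clonogen density, OER form, fraction size, and compartment pO2 values are all illustrative assumptions, not the paper's fitted values.

```python
# Sketch: Poisson tumour control probability (TCP) under the linear-quadratic
# model at 2 Gy/fraction, with hypoxia entering through an oxygen enhancement
# ratio (OER) that scales the effective dose per compartment.
import numpy as np

ALPHA, BETA, DFRAC = 0.3, 0.03, 2.0   # Gy^-1, Gy^-2, fraction size (assumed)
N_PER_CC = 1e7                        # assumed clonogen density (cells/cm^3)

def oer(po2, oer_max=3.0, k=3.0):
    """Alper-Howard-Flanders-type oxygen enhancement ratio (pO2 in mmHg)."""
    return (oer_max * po2 + k) / (po2 + k)

def tcp(doses, volumes_cc, po2):
    """Product of Poisson TCPs over tumour compartments (uniform dose each)."""
    total = 1.0
    for d, v, p in zip(doses, volumes_cc, po2):
        d_eff = d * oer(p) / 3.0               # dose efficacy vs full oxygenation
        sf = np.exp(-(ALPHA + BETA * DFRAC) * d_eff)
        total *= np.exp(-N_PER_CC * v * sf)
    return total

# Four compartments: redistribute dose toward the hypoxic ones (mean kept at 70 Gy).
uniform = tcp([70, 70, 70, 70], [10, 10, 10, 10], po2=[40, 20, 8, 3])
boosted = tcp([62, 66, 74, 78], [10, 10, 10, 10], po2=[40, 20, 8, 3])
print(f"TCP: uniform prescription {uniform:.3f} vs redistributed {boosted:.3f}")
```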
Kinetic market models with single commodity having price fluctuations
NASA Astrophysics Data System (ADS)
Chatterjee, A.; Chakrabarti, B. K.
2006-12-01
We study here numerically the behavior of an ideal-gas-like model of markets having only one non-consumable commodity. We investigate the behavior of the steady-state distributions of money, commodity and total wealth, as the dynamics of trading or exchange of money and commodity proceeds, with local (in time) fluctuations in the price of the commodity. These distributions are studied in markets with agents having uniform and random saving factors. The self-organizing features in the money distribution are similar to the cases without any commodity (or with consumable commodities), while the commodity distribution shows an exponential decay. The wealth distribution shows interesting behavior: a gamma-like distribution for uniform saving propensity, while for a market with agents having random saving propensity it has the same power-law tail as the money distribution.
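A minimal money-sector version of the kinetic exchange dynamics (omitting the commodity and its price fluctuations, which the paper adds on top) looks like this, with quenched random saving factors lambda_i:

```python
# Sketch: kinetic exchange with random saving propensities. Trading rule:
# each agent saves a fraction lambda_i and the pooled remainder of the two
# trading partners is split by a uniform random fraction eps.
import numpy as np

rng = np.random.default_rng(0)
N, STEPS = 1000, 200_000
money = np.ones(N)
lam = rng.uniform(0.0, 1.0, N)          # quenched random saving factors

for _ in range(STEPS):
    i, j = rng.integers(0, N, 2)
    if i == j:
        continue
    pool = (1 - lam[i]) * money[i] + (1 - lam[j]) * money[j]
    eps = rng.uniform()
    money[i] = lam[i] * money[i] + eps * pool
    money[j] = lam[j] * money[j] + (1 - eps) * pool

print("mean (conserved):", money.mean(), "max/mean:", money.max() / money.mean())
```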
Tomographical imaging using uniformly redundant arrays
NASA Technical Reports Server (NTRS)
Cannon, T. M.; Fenimore, E. E.
1979-01-01
An investigation is conducted of the behavior of two types of uniformly redundant array (URA) when used for close-up imaging. One URA pattern is a quadratic residue array whose characteristics for imaging planar sources have been simulated by Fenimore and Cannon (1978), while the second is based on m sequences that have been simulated by Gunson and Polychronopulos (1976) and by MacWilliams and Sloan (1976). Close-up imaging is necessary in order to obtain depth information for tomographical purposes. The properties of the two URA patterns are compared with a random array of equal open area. The goal considered in the investigation is to determine if a URA pattern exists which has the desirable defocus properties of the random array while maintaining artifact-free image properties for in-focus objects.
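A one-dimensional quadratic-residue aperture, the building block of the first URA type mentioned, can be generated and checked for its two-valued cyclic autocorrelation as follows (prime length with p ≡ 3 mod 4 assumed):

```python
# Sketch: a one-dimensional quadratic-residue aperture pattern of prime length p.
# Element i is open (1) when i is a quadratic residue mod p; its cyclic
# autocorrelation is two-valued, the property URAs exploit.
import numpy as np

def quadratic_residue_array(p=59):
    qr = {(k * k) % p for k in range(1, p)}
    return np.array([1 if i in qr else 0 for i in range(p)])

a = quadratic_residue_array()
corr = np.array([np.dot(a, np.roll(a, s)) for s in range(len(a))])
print("peak:", corr[0], "off-peak values:", sorted(set(corr[1:])))
```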
Experimentally Generated Random Numbers Certified by the Impossibility of Superluminal Signaling
NASA Astrophysics Data System (ADS)
Bierhorst, Peter; Shalm, Lynden K.; Mink, Alan; Jordan, Stephen; Liu, Yi-Kai; Rommal, Andrea; Glancy, Scott; Christensen, Bradley; Nam, Sae Woo; Knill, Emanuel
Random numbers are an important resource for applications such as numerical simulation and secure communication. However, it is difficult to certify whether a physical random number generator is truly unpredictable. Here, we exploit the phenomenon of quantum nonlocality in a loophole-free photonic Bell test experiment to obtain data containing randomness that cannot be predicted by any theory that does not also allow the sending of signals faster than the speed of light. To certify and quantify the randomness, we develop a new protocol that performs well in an experimental regime characterized by low violation of Bell inequalities. Applying an extractor function to our data, we obtain 256 new random bits, uniform to within 10^-3.
Wear behavioral study of as cast and 7 hr homogenized Al25Mg2Si2Cu4Ni alloy at constant load
NASA Astrophysics Data System (ADS)
Harlapur, M. D.; Sondur, D. G.; Akkimardi, V. G.; Mallapur, D. G.
2018-04-01
In the current study, the wear behavior of as-cast and 7 hr homogenized Al25Mg2Si2Cu4Ni alloy has been investigated. Microstructure, SEM and EDS results confirm the presence of different intermetallics and their effects on the wear properties of Al25Mg2Si2Cu4Ni alloy in the as-cast as well as the aged condition. The main alloying elements, Si, Cu, Mg and Ni, partly dissolve in the primary α-Al matrix and are to some extent present in the form of intermetallic phases. The SEM structure of the as-cast alloy shows blocks of Mg2Si distributed at random in the aluminium matrix. Precipitates of Al2Cu in the form of Chinese script are also observed, and the 'Q' phase (Al-Si-Cu-Mg) is distributed uniformly in the aluminium matrix. A few coarsened platelets of Ni are seen. In the 7 hr homogenized samples, the blocks of Mg2Si become rounded at the corners, and the Ni platelets fragment and distribute uniformly in the aluminium matrix. Results show improved volumetric wear resistance and reduced coefficient of friction after the homogenizing heat treatment.
Quality of anthelminthic medicines available in Jimma Ethiopia.
Belew, Sileshi; Suleman, Sultan; Wynendaele, Evelien; D'Hondt, Matthias; Kosgei, Anne; Duchateau, Luc; De Spiegeleer, Bart
2018-01-01
Soil-transmitted helminthiasis and schistosomiasis are major public health problems in Ethiopia. Mass deworming of the at-risk population, using a single dose of 400 mg albendazole (ABZ) or 500 mg mebendazole (MBZ) for treatment of common intestinal worms and 40 mg of praziquantel (PZQ) per kg body weight for treatment of schistosomiasis, is one of the strategies recommended by the World Health Organization (WHO) to control the morbidity of soil-transmitted helminthiasis and schistosomiasis. Since storage conditions, climate, mode of transportation and distribution route can all affect the quality of medicines, regular assessment by surveys is critical to ensure the therapeutic outcome and to minimize the risk of toxicity to the patient and of resistance in parasites. Therefore, this study assessed the pharmaceutical quality of the ABZ, MBZ and PZQ tablet brands commonly available in Jimma town (south west Ethiopia). Retail pharmacies (n=10) operating in Jimma town were selected using a simple random sampling method, and samples of the anthelminthic medicines available in the selected pharmacies were collected. Sample information was recorded, encompassing trade name, active ingredient name, manufacturer's name and full address, labeled medicine strength, dosage form, number of units per container, dosage statement, batch/lot number, manufacturing and expiry dates, storage information and presence of leaflets/package inserts. Moreover, a first visual inspection was performed, covering uniformity of color, uniformity of size, breaks, cracks, splits, embedded surface spots and visual contamination. Finally, the physico-chemical quality attributes investigated encompassed mass uniformity, quantity of active pharmaceutical ingredient (API), disintegration and dissolution, all following pharmacopoeial test methods. The physical characteristics of the dosage forms and the packaging and labeling information of all samples complied with the criteria given in the WHO checklists. The mass uniformity of the tablets of each brand of ABZ, MBZ and PZQ complied with the pharmacopoeial specification limits, i.e. no more than 2 individual masses deviating by more than 5% from the average tablet mass, and none deviating by more than 10%. The quantities of API in all investigated tablet brands were within the 90-110% label claim (l.c.) limits, ranging between 95.05 and 110.09% l.c. Disintegration times were in line with the pharmacopoeial specification limit for immediate release (IR) tablets, ranging between 0.5 and 13 min. However, the dissolution results (mean±SD, n=6) of one ABZ brand (Wormin®, Q=59.21±0.99% at 30 min) and two PZQ brands (Bermoxel®, Q=63.43±0.70% and Distocide®, Q=62.43±1.67%, at 75 min) showed poor dissolution, failing the United States Pharmacopeia (USP) dissolution specification limit. Copyright © 2017 Elsevier B.V. All rights reserved.
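The mass-uniformity criterion quoted above translates directly into code; this sketch uses hypothetical tablet masses and the 5%/10% limits as stated:

```python
# Sketch: the mass-uniformity rule as stated above -- out of 20 tablets, no
# more than 2 may deviate from the average mass by more than 5%, and none by
# more than 10%. Masses below are hypothetical values in mg.
def mass_uniformity_ok(masses, limit1=0.05, limit2=0.10, allowed_beyond_limit1=2):
    avg = sum(masses) / len(masses)
    dev = [abs(m - avg) / avg for m in masses]
    return (sum(d > limit1 for d in dev) <= allowed_beyond_limit1
            and all(d <= limit2 for d in dev))

tablets = [398, 402, 405, 391, 410, 400, 399, 403, 397, 404,
           401, 396, 418, 400, 402, 395, 399, 406, 400, 398]
print("complies:", mass_uniformity_ok(tablets))
```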
NASA Astrophysics Data System (ADS)
Meng, Su; Chen, Jie; Sun, Jian
2017-10-01
This paper investigates the problem of observer-based output feedback control for networked control systems with non-uniform sampling and time-varying transmission delay. The sampling intervals are assumed to vary within a given interval. The transmission delay belongs to a known interval. A discrete-time model is first established, which contains time-varying delay and norm-bounded uncertainties coming from non-uniform sampling intervals. It is then converted to an interconnection of two subsystems in which the forward channel is delay-free. The scaled small gain theorem is used to derive the stability condition for the closed-loop system. Moreover, the observer-based output feedback controller design method is proposed by utilising a modified cone complementary linearisation algorithm. Finally, numerical examples illustrate the validity and superiority of the proposed method.
Inference about density and temporary emigration in unmarked populations
Chandler, Richard B.; Royle, J. Andrew; King, David I.
2011-01-01
Few species are distributed uniformly in space, and populations of mobile organisms are rarely closed with respect to movement, yet many models of density rely upon these assumptions. We present a hierarchical model allowing inference about the density of unmarked populations subject to temporary emigration and imperfect detection. The model can be fit to data collected using a variety of standard survey methods such as repeated point counts in which removal sampling, double-observer sampling, or distance sampling is used during each count. Simulation studies demonstrated that parameter estimators are unbiased when temporary emigration is either "completely random" or is determined by the size and location of home ranges relative to survey points. We also applied the model to repeated removal sampling data collected on Chestnut-sided Warblers (Dendroica pensylvanica) in the White Mountain National Forest, USA. The density estimate from our model, 1.09 birds/ha, was similar to an estimate of 1.11 birds/ha produced by an intensive spot-mapping effort. Our model is also applicable when processes other than temporary emigration affect the probability of being available for detection, such as in studies using cue counts. Functions to implement the model have been added to the R package unmarked.
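Data of the kind this hierarchical model is fit to can be simulated in a few lines; the abundance, availability (temporary emigration), and detection parameters below are assumptions for illustration:

```python
# Sketch: simulate repeated point counts with temporary emigration. M is the
# local superpopulation, phi the probability of being available (not
# temporarily emigrated) on a visit, and p the detection probability.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_visits = 100, 4
lam, phi, p = 5.0, 0.7, 0.4

M = rng.poisson(lam, n_sites)                       # superpopulation per site
available = rng.binomial(M[:, None], phi,
                         size=(n_sites, n_visits))  # temporary emigration
counts = rng.binomial(available, p)                 # imperfect detection

print("true mean abundance:", M.mean(),
      "naive mean count:", counts.mean(),
      "expected count lam*phi*p:", lam * phi * p)
```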
Method for improving instrument response
Hahn, David W.; Hencken, Kenneth R.; Johnsen, Howard A.; Flower, William L.
2000-01-01
This invention pertains generally to a method for improving the accuracy of particle analysis under conditions of discrete particle loading, and particularly to a method for improving the signal-to-noise ratio and instrument response in laser spark spectroscopic analysis of particulate emissions. Under conditions of low particle density loading (particles/m^3) resulting from low overall metal concentrations and/or large particle size, uniform sampling cannot be guaranteed. The present invention discloses a technique for separating laser sparks that arise from sample particles from those that do not; that is, a process for systematically "gating" the instrument responses arising from sampled particles from those which are not is disclosed as a solution to this problem. The disclosed approach is based on random sampling combined with a conditional analysis of each pulse. A threshold value is determined for the ratio of the intensity of a spectral line for a given element to a baseline region. If the threshold value is exceeded, the pulse is classified as a "hit", that data is collected, and an average spectrum is generated from an arithmetic average of the "hits". The true metal concentration is determined from the averaged spectrum.
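The conditional "hit" gating described in the patent can be sketched as follows; the spectral windows, threshold, and synthetic spectra are illustrative assumptions:

```python
# Sketch: conditional ("hit") gating of laser-spark spectra. Each pulse is kept
# only when the ratio of a chosen analyte line to a nearby baseline window
# exceeds a threshold; the average spectrum is formed from hits alone.
import numpy as np

def gate_and_average(spectra, line_sl, base_sl, threshold=3.0):
    """spectra: (n_pulses, n_channels); returns (hit mask, mean hit spectrum)."""
    line = spectra[:, line_sl].mean(axis=1)
    base = spectra[:, base_sl].mean(axis=1)
    hits = line / base > threshold
    return hits, (spectra[hits].mean(axis=0) if hits.any() else None)

rng = np.random.default_rng(2)
spectra = rng.normal(100.0, 5.0, size=(500, 256))     # baseline-only pulses
particle_pulses = rng.random(500) < 0.05              # ~5% sample a particle
spectra[particle_pulses, 120:124] += 400.0            # emission line on hits
hits, avg = gate_and_average(spectra, slice(120, 124), slice(180, 200))
print(hits.sum(), "hits; line/base in averaged spectrum:",
      avg[120:124].mean() / avg[180:200].mean())
```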
Note: The design of thin gap chamber simulation signal source based on field programmable gate array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Kun; Lu, Houbing; Wang, Xu; Li, Feng; Liang, Futian; Jin, Ge
2015-01-01
The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC. Targeting the features of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is a field programmable gate array that randomly outputs 256 channels of simulated signals. The signals are generated by a true random number generator whose source of randomness is the timing jitter of ring oscillators. The experimental results show that the random numbers are uniform in histogram and that the whole system has high reliability.
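Why sampling a fast ring oscillator with accumulated timing jitter yields uniform bits can be illustrated with a toy model; the frequencies and jitter magnitude below are assumptions, not the FPGA design's actual values:

```python
# Sketch: a fast oscillator sampled at intervals with Gaussian jitter. Once
# the accumulated jitter spans many oscillator periods, the sampled phase --
# and hence the bit -- is effectively uniform.
import numpy as np

rng = np.random.default_rng(3)
f_ring = 333e6                      # ring oscillator frequency (assumed)
t_sample = 1e-6                     # nominal sampling interval
jitter_rms = 300e-12                # per-interval jitter; accumulates via cumsum

t = np.cumsum(t_sample + jitter_rms * rng.standard_normal(100_000))
bits = (np.floor(2 * f_ring * t) % 2).astype(int)   # sampled square-wave level
print(f"fraction of ones: {bits.mean():.4f}")       # close to 0.5
```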
NASA Astrophysics Data System (ADS)
Zechner, A.; Stock, M.; Kellner, D.; Ziegler, I.; Keuschnigg, P.; Huber, P.; Mayer, U.; Sedlmayer, F.; Deutschmann, H.; Steininger, P.
2016-11-01
Image guidance during highly conformal radiotherapy requires accurate geometric calibration of the moving components of the imager. Due to limited manufacturing accuracy and gravity-induced flex, an x-ray imager’s deviation from the nominal geometrical definition has to be corrected for. For this purpose a ball bearing phantom applicable for nine degrees of freedom (9-DOF) calibration of a novel cone-beam computed tomography (CBCT) scanner was designed and validated. In order to ensure accurate automated marker detection, as many uniformly distributed markers as possible should be used with a minimum projected inter-marker distance of 10 mm. Three different marker distributions on the phantom cylinder surface were simulated. First, a fixed number of markers are selected and their coordinates are randomly generated. Second, the quasi-random method is represented by setting a constraint on the marker distances in the projections. The third approach generates the ball coordinates helically based on the Golden ratio, ϕ. Projection images of the phantom incorporating the CBCT scanner’s geometry were simulated and analysed with respect to uniform distribution and intra-marker distance. Based on the evaluations a phantom prototype was manufactured and validated by a series of flexmap calibration measurements and analyses. The simulation with randomly distributed markers as well as the quasi-random approach showed an insufficient uniformity of the distribution over the detector area. The best compromise between uniform distribution and a high packing fraction of balls is provided by the Golden section approach. A prototype was manufactured accordingly. The phantom was validated for 9-DOF geometric calibrations of the CBCT scanner with independently moveable source and detector arms. A novel flexmap calibration phantom intended for 9-DOF was developed. The ball bearing distribution based on the Golden section was found to be highly advantageous. The phantom showed satisfying results for calibrations of the CBCT scanner and provides the basis for further flexmap correction and reconstruction developments.
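The Golden-ratio helical placement can be sketched directly; the phantom dimensions and marker count below are illustrative, and the nearest-neighbour spacing on the unrolled cylinder is used as a crude uniformity proxy:

```python
# Sketch: helical ball-bearing placement on a cylinder using the Golden ratio,
# the third distribution strategy described above.
import numpy as np

PHI = (1 + 5**0.5) / 2                         # Golden ratio
n_markers, radius, height = 108, 60.0, 200.0   # mm, assumed phantom geometry

k = np.arange(n_markers)
theta = 2 * np.pi * k / PHI                    # azimuth advances by the golden angle
z = height * (k + 0.5) / n_markers
x, y = radius * np.cos(theta), radius * np.sin(theta)

# Uniformity proxy: nearest-neighbour distance on the unrolled cylinder surface.
pts = np.stack([radius * np.mod(theta, 2 * np.pi), z], axis=1)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
print("min/median nearest-neighbour spacing:",
      d.min(axis=1).min().round(2), np.median(d.min(axis=1)).round(2))
```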
Field comparison of analytical results from discrete-depth ground water samplers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemo, D.A.; Delfino, T.A.; Gallinatti, J.D.
1995-07-01
Discrete-depth ground water samplers are used during environmental screening investigations to collect ground water samples in lieu of installing and sampling monitoring wells. Two of the most commonly used samplers are the BAT Enviroprobe and the QED HydroPunch I, which rely on differing sample collection mechanics. Although these devices have been on the market for several years, it was unknown what, if any, effect the differences would have on analytical results for ground water samples containing low to moderate concentrations of chlorinated volatile organic compounds (VOCs). This study investigated whether the discrete-depth ground water sampler used introduces statistically significant differences in analytical results. The goal was to provide a technical basis for allowing the two devices to be used interchangeably during screening investigations. Because this study was based on field samples, it included several sources of potential variability. It was necessary to separate differences due to sampler type from variability due to sampling location, sample handling, and laboratory analytical error. To statistically evaluate these sources of variability, the experiment was arranged in a nested design. Sixteen ground water samples were collected from eight random locations within a 15-foot by 15-foot grid. The grid was located in an area where shallow ground water was believed to be uniformly affected by VOCs. The data were evaluated using analysis of variance.
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
ERIC Educational Resources Information Center
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Uniform apparent contrast noise: A picture of the noise of the visual contrast detection system
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.; Watson, A. B.
1984-01-01
A picture which is a sample of random contrast noise is generated. The noise amplitude spectrum in each region of the picture is inversely proportional to spatial frequency contrast sensitivity for that region, assuming the observer fixates the center of the picture and is the appropriate distance from it. In this case, the picture appears to have approximately the same contrast everywhere. To the extent that contrast detection thresholds are determined by visual system noise, this picture can be regarded as a picture of the noise of that system. There is evidence that, at different eccentricities, contrast sensitivity functions differ only by a magnification factor. The picture was generated by filtering a sample of white noise with a filter whose frequency response is inversely proportional to foveal contrast sensitivity. It was then stretched by a space-varying magnification function. The picture summarizes a noise linear model of detection and discrimination of contrast signals by referring the model noise to the input picture domain.
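The construction can be sketched by shaping white noise with the reciprocal of a CSF. The Mannos-Sakrison CSF below is an assumed stand-in for the authors' foveal sensitivity data, and the space-varying magnification step is omitted:

```python
# Sketch: white noise shaped so that its amplitude spectrum is inversely
# proportional to a contrast sensitivity function (CSF).
import numpy as np

def csf(f):  # f in cycles/degree; Mannos-Sakrison form (assumed stand-in)
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

n, deg_per_pix = 512, 0.02
fx = np.fft.fftfreq(n, d=deg_per_pix)
f2d = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)

rng = np.random.default_rng(4)
white = np.fft.fft2(rng.standard_normal((n, n)))
shaped = np.real(np.fft.ifft2(white / np.maximum(csf(f2d), 1e-3)))
print("equal-appearing contrast noise image:", shaped.shape, shaped.std().round(3))
```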
The North American Breeding Bird Survey
Bystrak, D.; Ralph, C. John; Scott, J. Michael
1981-01-01
A brief history of the North American Breeding Bird Survey (BBS) and a discussion of the technique are presented. The approximately 2000 random roadside routes conducted yearly during the breeding season throughout North America produce an enormous bank of data on distribution and abundance of breeding birds with great potential use. Data on about one million total birds of 500 species per year are on computer tape to facilitate accessibility and are available to any serious investigator. The BBS includes the advantages of wide geographic coverage, sampling of most habitat types, standardization of data collection, and a relatively simple format. The Survey is limited by placement of roads (e.g., marshes and rugged mountainous areas are not well sampled), traffic noise interference in some cases and preference of some bird species for roadside habitats. These and other problems and biases of the BBS are discussed. The uniformity of the technique allows for detecting changes in populations and for creation of maps of relative abundance. Examples of each are presented.
NASA Astrophysics Data System (ADS)
Gui, Xulong; Luo, Xiaobing; Wang, Xiaoping; Liu, Sheng
2015-12-01
Micro-electro-mechanical systems (MEMS) have become important for many industries such as automotive, home appliances, and portable electronics, especially with the emergence of the Internet of Things. Volume testing with temperature compensation is essential for providing MEMS-based sensors with repeatability, consistency, reliability, and durability at low cost. In temperature calibration testing in particular, the temperature uniformity of the thermal-cycling calibration chamber becomes important for obtaining precision sensors, since each sensor is different before calibration. When sensor samples are loaded into the chamber, the chamber door is usually opened, fixtures are placed inside, and the samples are mounted on the fixtures. These operations may affect the temperature uniformity in the chamber. In order to study how sample loading influences the temperature uniformity in the chamber during calibration testing, numerical simulation work was conducted first. The temperature and flow fields were simulated in an empty chamber, a chamber with an open door, a chamber with samples, and a chamber with fixtures, respectively. The simulations showed that opening the chamber door, the sample size, and the number of fixture layers all affect the flow and temperature fields. Experimental validation found that the measured temperature values were consistent with the simulated ones.
Chen, Zheng; Liu, Liu; Mu, Lin
2017-05-03
In this paper, we consider the linear transport equation under diffusive scaling and with random inputs. The method is based on the generalized polynomial chaos approach in the stochastic Galerkin framework. Several theoretical aspects are addressed: uniform numerical stability with respect to the Knudsen number ϵ, and a uniform-in-ϵ error estimate. For the temporal and spatial discretizations, we apply the implicit–explicit scheme under the micro–macro decomposition framework and the discontinuous Galerkin method, as proposed in Jang et al. (SIAM J Numer Anal 52:2048–2072, 2014) for the deterministic problem. Lastly, we provide a rigorous proof of the stochastic asymptotic-preserving (sAP) property. Extensive numerical experiments that validate the accuracy and sAP property of the method are conducted.
Knowledge-based nonuniform sampling in multidimensional NMR.
Schuyler, Adam D; Maciejewski, Mark W; Arthanari, Haribabu; Hoch, Jeffrey C
2011-07-01
The full resolution afforded by high-field magnets is rarely realized in the indirect dimensions of multidimensional NMR experiments because of the time cost of uniformly sampling to long evolution times. Emerging methods utilizing nonuniform sampling (NUS) enable high resolution along indirect dimensions by sampling long evolution times without sampling at every multiple of the Nyquist sampling interval. While the earliest NUS approaches matched the decay of sampling density to the decay of the signal envelope, recent approaches based on coupled evolution times attempt to optimize sampling by choosing projection angles that increase the likelihood of resolving closely-spaced resonances. These approaches employ knowledge about chemical shifts to predict optimal projection angles, whereas prior applications of tailored sampling employed only knowledge of the decay rate. In this work we adapt the matched filter approach as a general strategy for knowledge-based nonuniform sampling that can exploit prior knowledge about chemical shifts and is not restricted to sampling projections. Based on several measures of performance, we find that exponentially weighted random sampling (envelope matched sampling) performs better than shift-based sampling (beat matched sampling). While shift-based sampling can yield small advantages in sensitivity, the gains are generally outweighed by diminished robustness. Our observation that more robust sampling schemes are only slightly less sensitive than schemes highly optimized using prior knowledge about chemical shifts has broad implications for any multidimensional NMR study employing NUS. The results derived from simulated data are demonstrated with a sample application to PfPMT, the phosphoethanolamine methyltransferase of the human malaria parasite Plasmodium falciparum.
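An envelope matched (exponentially biased) sampling schedule can be drawn as follows; the grid size, sample budget, and decay constant are illustrative:

```python
# Sketch: envelope matched nonuniform sampling of an indirect dimension.
# Sampling density decays with the assumed signal envelope exp(-t/T2), so
# early evolution times are sampled more densely.
import numpy as np

rng = np.random.default_rng(5)
n_grid, n_samples, t2_points = 512, 64, 150.0   # Nyquist grid, budget, decay

weights = np.exp(-np.arange(n_grid) / t2_points)
schedule = rng.choice(n_grid, size=n_samples, replace=False,
                      p=weights / weights.sum())
schedule.sort()
print("first samples:", schedule[:8], "... last:", schedule[-3:])
```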
Heterogeneity, histological features and DNA ploidy in oral carcinoma by image-based analysis.
Diwakar, N; Sperandio, M; Sherriff, M; Brown, A; Odell, E W
2005-04-01
Oral squamous carcinomas appear heterogeneous on DNA ploidy analysis. However, this may be partly a result of sample dilution or the detection limit of techniques. The aim of this study was to determine whether oral squamous carcinomas are heterogeneous for ploidy status using image-based ploidy analysis and to determine whether ploidy status correlates with histological parameters. Multiple samples from 42 oral squamous carcinomas were analysed for DNA ploidy using an image-based system and scored for histological parameters. 22 were uniformly aneuploid, 1 uniformly tetraploid and 3 uniformly diploid. 16 appeared heterogeneous but only 8 appeared to be genuinely heterogeneous when minor ploidy histogram peaks were taken into account. Ploidy was closely related to nuclear pleomorphism but not differentiation. Sample variation, detection limits and diagnostic criteria account for much of the ploidy heterogeneity observed. Confident diagnosis of diploid status in an oral squamous cell carcinoma requires a minimum of 5 samples.
[Research of the surface oxide film on anodizing Ni-Cr porcelain alloy].
Zhu, Song; Sun, Hong-Chen; Zhang, Jing-Wei; Li, Zong-Hui
2006-08-01
The aim was to study the morphology, thickness and oxide percentages of the major metal elements of the oxide film formed on Ni-Cr porcelain alloy after anodizing pretreatment. Ten samples were made and divided into two groups at random. After surface pretreatment, the oxide films of two samples from each group were analyzed by scanning electron microscopy; the remaining three samples of each group were measured by X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES). Slightly selective dissolution occurred because the different constituent phases of the alloy have dissimilar electrode potentials and therefore dissolve at quite different rates. The surface area of the metal increased, so mechanical interlocking between porcelain and metal increased the bond strength. The thickness of the oxide film was 1.72 times that of the control samples. The oxide percentages of the major metal elements Cr, Ni and Mo were higher, especially for Cr. The process initially involves the formation of a thin oxide bound to the alloy and, second, the ability of the formed oxide to saturate the porcelain, completing the chemical bond of porcelain to metal. Anodizing the Ni-Cr porcelain alloy makes it easy to control the formation of an oxide film that is thin and has a uniform surface pattern. The method is reproducible and is a good surface pretreatment before the firing cycle.
On the efficiency of a randomized mirror descent algorithm in online optimization problems
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.
2015-04-01
A randomized online version of the mirror descent method is proposed. It differs from the existing versions by the randomization method. Randomization is performed at the stage of the projection of a subgradient of the function being optimized onto the unit simplex rather than at the stage of the computation of a subgradient, which is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to uniformly interpret results on weighting expert decisions and propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with sparse matrix.
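A loose sketch of the idea, randomizing a componentwise multiplicative update on the simplex rather than the subgradient estimate, might look like this. The quadratic objective and step size are assumptions, and this illustrates the flavor of the method rather than the authors' algorithm:

```python
# Sketch: entropic mirror descent on the unit simplex with a randomly chosen
# component updated per step (randomization at the update/projection stage).
import numpy as np

rng = np.random.default_rng(6)
n, steps, eta = 50, 5000, 0.05
A = rng.standard_normal((n, n)); A = A.T @ A / n     # convex objective x'Ax
x = np.full(n, 1.0 / n)                               # start at simplex centre

for t in range(steps):
    g = 2 * A @ x                                    # subgradient of x'Ax
    k = rng.integers(n)                              # randomly chosen component
    x[k] *= np.exp(-eta * g[k])                      # multiplicative update
    x /= x.sum()                                     # re-normalize onto simplex
print("objective:", float(x @ A @ x))
```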
Noise-enhanced convolutional neural networks.
Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart
2016-06-01
Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Iwakoshi, Takehisa; Hirota, Osamu
2014-10-01
This study tests an interpretation in quantum key distribution (QKD) according to which the trace distance between the distributed quantum state and the ideal mixed state is the maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to give the key both uniformity in the sense of universal composability and an operational meaning as the failure probability of key extraction. However, this proposal has not been verified concretely for many years, while H. P. Yuen and O. Hirota have cast doubt on the interpretation since 2009. To examine it, a physical random number generator was employed to evaluate key uniformity in QKD. We calculated the statistical distance, which corresponds to the trace distance in quantum theory once a quantum measurement has been made, and compared it with the failure probability to see whether universal composability was obtained. As a result, the statistical distance between the probability distribution of the physical random numbers and the ideal uniform distribution was very large. It is also explained why the trace distance is not suitable for guaranteeing security in QKD from the viewpoint of quantum binary decision theory.
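The classical statistical distance used in the comparison is straightforward to compute. The sketch below measures the total variation distance between an empirical byte distribution and the ideal uniform one; a pseudorandom stream stands in for the physical generator:

```python
# Sketch: statistical (total variation) distance between the empirical byte
# distribution of a random number stream and the ideal uniform distribution.
import numpy as np

rng = np.random.default_rng(7)
stream = rng.integers(0, 256, size=100_000)          # stand-in for RNG output

counts = np.bincount(stream, minlength=256)
empirical = counts / counts.sum()
tv = 0.5 * np.abs(empirical - 1.0 / 256).sum()
print(f"total variation distance from uniform: {tv:.4f}")
```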
Hofer, Jeffrey D; Rauk, Adam P
2017-02-01
The purpose of this work was to develop a straightforward and robust approach to analyze and summarize the ability of content uniformity data to meet different criteria. A robust Bayesian statistical analysis methodology is presented which provides a concise and easily interpretable visual summary of the content uniformity analysis results. The visualization displays individual batch analysis results and shows whether there is high confidence that different content uniformity criteria could be met a high percentage of the time in the future. The 3 tests assessed are as follows: (a) United States Pharmacopeia Uniformity of Dosage Units <905>, (b) a specific ASTM E2810 Sampling Plan 1 criterion to potentially be used for routine release testing, and (c) another specific ASTM E2810 Sampling Plan 2 criterion to potentially be used for process validation. The approach shown here could readily be used to create similar result summaries for other potential criteria. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
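For reference, the USP <905> acceptance value that test (a) is built on can be computed as below; this assumes the first-stage case (n = 10, k = 2.4, T = 100) and hypothetical assay values:

```python
# Sketch: the USP <905> Uniformity of Dosage Units acceptance value,
# AV = |M - xbar| + k*s, for a first-stage sample of n = 10 (k = 2.4).
# Assay values are hypothetical % of label claim.
import statistics

def acceptance_value(assays, k=2.4, L1=15.0):
    xbar = statistics.mean(assays)
    s = statistics.stdev(assays)
    if xbar < 98.5:
        m = 98.5
    elif xbar > 101.5:
        m = 101.5
    else:
        m = xbar
    av = abs(m - xbar) + k * s
    return av, av <= L1

assays = [99.1, 101.3, 100.2, 98.7, 99.8, 100.9, 101.1, 99.4, 100.0, 98.9]
av, ok = acceptance_value(assays)
print(f"AV = {av:.2f}, passes L1: {ok}")
```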
The influence of statistical properties of Fourier coefficients on random Gaussian surfaces.
de Castro, C P; Luković, M; Andrade, R F S; Herrmann, H J
2017-05-16
Many examples of natural systems can be described by random Gaussian surfaces. Much can be learned by analyzing the Fourier expansion of the surfaces, from which it is possible to determine the corresponding Hurst exponent and consequently establish the presence of scale invariance. We show that this symmetry is not affected by the distribution of the modulus of the Fourier coefficients. Furthermore, we investigate the role of the Fourier phases of random surfaces; in particular, we show how the surface is affected by a non-uniform distribution of phases.
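A random Gaussian surface with a prescribed Hurst exponent and uniformly random phases can be synthesized from its Fourier expansion. The sketch assumes the 2D scaling |c(k)| ~ k^-(H+1) and a fixed modulus, one of the modulus choices whose irrelevance to scale invariance the paper establishes:

```python
# Sketch: synthesis of a random surface with a power-law Fourier amplitude
# spectrum and uniformly random phases.
import numpy as np

rng = np.random.default_rng(8)
n, H = 256, 0.7

kx = np.fft.fftfreq(n)
k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
k[0, 0] = np.inf                                    # suppress the mean
amplitude = k ** -(H + 1)
phase = rng.uniform(0, 2 * np.pi, (n, n))           # uniform random phases

surface = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
print("surface:", surface.shape, "rms height:", surface.std().round(4))
```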
1987-09-01
inverse transform method to obtain unit-mean exponential random variables, where V_j is the j-th random number in the sequence of a stream of uniform random … numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. X(b,c,d) = -P(b,c,d) … Defender, C * P(b,c,d). We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in …
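For the unit-mean case referred to in the fragment, the inverse transform method is one line; the scaling role that P(b,c,d) appears to play is represented here by a generic mean parameter:

```python
# Sketch: inverse transform sampling of an exponential random variable.
# For unit mean, X = -ln(V) with V uniform on (0, 1); scaling by a mean
# parameter gives an exponential with that mean.
import math
import random

def exponential_inverse_transform(mean=1.0, rng=random.Random(9)):
    v = 1.0 - rng.random()       # uniform on (0, 1], avoids log(0)
    return -mean * math.log(v)   # inverse of the exponential CDF

samples = [exponential_inverse_transform() for _ in range(100_000)]
print("sample mean (should be near 1):", sum(samples) / len(samples))
```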
Global mean-field phase diagram of the spin-1 Ising ferromagnet in a random crystal field
NASA Astrophysics Data System (ADS)
Borelli, M. E. S.; Carneiro, C. E. I.
1996-02-01
We study the phase diagram of the mean-field spin-1 Ising ferromagnet in a uniform magnetic field H and a random crystal field Δ_i, with probability distribution P(Δ_i) = p δ(Δ_i - Δ) + (1 - p) δ(Δ_i). We analyse the effects of randomness on the first-order surfaces of the Δ-T-H phase diagram for different values of the concentration p and show how these surfaces are affected by the dilution of the crystal field.
Measured acoustic properties of variable and low density bulk absorbers
NASA Technical Reports Server (NTRS)
Dahl, M. D.; Rice, E. J.
1985-01-01
Experimental data were taken to determine the acoustic absorbing properties of uniform low density and layered variable density samples using a bulk absorber with a perforated plate facing to hold the material in place. In the layered variable density case, the bulk absorber was packed such that the lowest density layer began at the surface of the sample and progressed to higher density layers deeper inside. The samples were placed in a rectangular duct and measurements were taken using the two-microphone method. The data were used to calculate specific acoustic impedances and normal incidence absorption coefficients. Results showed that for uniform density samples the absorption coefficient at low frequencies decreased with increasing density and resonances occurred in the absorption coefficient curve at lower densities. These results were confirmed by a model for uniform density bulk absorbers. Results from layered variable density samples showed that low frequency absorption was highest when the lowest density possible was packed in the first layer near the exposed surface. The layers of increasing density within the sample had the effect of damping the resonances.
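The two-microphone reduction from measured transfer function to impedance and absorption can be sketched as below, using the standard transfer-function relations; the microphone positions and the synthetic reflection coefficient are illustrative assumptions:

```python
# Sketch: the two-microphone transfer-function method. Given H12 = p2/p1
# between microphones at distances x1 and x2 from the sample face
# (s = x1 - x2), recover the reflection coefficient r, the normal-incidence
# absorption coefficient, and the normalized specific impedance.
import numpy as np

def two_microphone(h12, f, x1, x2, c=343.0):
    k = 2 * np.pi * f / c                       # wavenumber
    s = x1 - x2
    r = ((h12 - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - h12)
         * np.exp(2j * k * x1))
    alpha = 1 - abs(r) ** 2                     # normal-incidence absorption
    z = (1 + r) / (1 - r)                       # normalized specific impedance
    return r, alpha, z

# Self-consistency check: build H12 from an assumed r, then recover it.
f, x1, x2 = 1000.0, 0.10, 0.07
k = 2 * np.pi * f / 343.0
r_true = 0.5 * np.exp(1j * 0.3)
p = lambda x: np.exp(1j * k * x) + r_true * np.exp(-1j * k * x)
r, alpha, z = two_microphone(p(x2) / p(x1), f, x1, x2)
print(f"|r| = {abs(r):.3f}, alpha = {alpha:.3f}, z = {z:.3f}")
```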
Role of work uniform in alleviating perceptual strain among construction workers.
Yang, Yang; Chan, Albert Ping-Chuen
2017-02-07
This study aims to examine the benefits of wearing a new construction work uniform in real-work settings. A field experiment with a randomized assignment of an intervention group to a newly designed uniform and a control group to a commercially available trade uniform was executed. A total of 568 sets of physical, physiological, perceptual, and microclimatological data were obtained. A linear mixed-effects model (LMM) was built to examine the cause-effect relationship between the Perceptual Strain Index (PeSI) and heat stressors including wet bulb globe temperature (WBGT), estimated workload (relative heart rate), exposure time, trade, workplace, and clothing type. An interaction effect between clothing and trade revealed that perceptual strain of workers across four trades was significantly alleviated by 1.6-6.3 units in the intervention group. Additionally, the results of a questionnaire survey on assessing the subjective sensations on the two uniforms indicated that wearing comfort was improved by 1.6-1.8 units when wearing the intervention type. This study not only provides convincing evidence of the benefits of wearing the newly designed work uniform in reducing perceptual strain but also heightens the value of the field experiment in heat stress intervention studies.
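A linear mixed-effects model of the stated form can be fit with statsmodels; all column names and the data file below are hypothetical stand-ins for the study's variables:

```python
# Sketch: LMM relating perceptual strain to heat stressors with a
# clothing-by-trade interaction and worker as the grouping (random) factor.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("heat_strain_observations.csv")   # hypothetical data file
model = smf.mixedlm("pesi ~ wbgt + rhr + exposure_time + clothing * trade",
                    data=df, groups=df["worker_id"])
result = model.fit()
print(result.summary())
```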
Miller, Arthur L; Drake, Pamela L; Murphy, Nathaniel C; Cauda, Emanuele G; LeBouf, Ryan F; Markevicius, Gediminas
Miners are exposed to silica-bearing dust which can lead to silicosis, a potentially fatal lung disease. Currently, airborne silica is measured by collecting filter samples and sending them to a laboratory for analysis. Since this may take weeks, a field method is needed to inform decisions aimed at reducing exposures. This study investigates a field-portable Fourier transform infrared (FTIR) method for end-of-shift (EOS) measurement of silica on filter samples. Since the method entails localized analyses, spatial uniformity of dust deposition can affect accuracy and repeatability. The study, therefore, assesses the influence of radial deposition uniformity on the accuracy of the method. Using laboratory-generated Minusil and coal dusts and three different types of sampling systems, multiple sets of filter samples were prepared. All samples were collected in pairs to create parallel sets for training and validation. Silica was measured by FTIR at nine locations across the face of each filter and the data analyzed using a multiple regression technique that compared various models for predicting silica mass on the filters using different numbers of "analysis shots." It was shown that deposition uniformity is independent of particle type (kaolin vs. silica), which suggests the role of aerodynamic separation is negligible. Results also reflected the correlation between the location and number of shots versus the predictive accuracy of the models. The coefficient of variation (CV) for the models when predicting the mass of validation samples was 4%-51%, depending on the number of points analyzed and the type of sampler used, which affected the uniformity of radial deposition on the filters. It was shown that using a single shot at the center of the filter yielded predictivity adequate for a field method (93% return, CV approximately 15%) for samples collected with 3-piece cassettes.
Using machine learning to examine medication adherence thresholds and risk of hospitalization.
Lo-Ciganic, Wei-Hsuan; Donohue, Julie M; Thorpe, Joshua M; Perera, Subashan; Thorpe, Carolyn T; Marcum, Zachary A; Gellad, Walid F
2015-08-01
Quality improvement efforts are frequently tied to patients achieving ≥80% medication adherence. However, there is little empirical evidence that this threshold optimally predicts important health outcomes. To apply machine learning to examine how adherence to oral hypoglycemic medications is associated with avoidance of hospitalizations, and to identify adherence thresholds for optimal discrimination of hospitalization risk. A retrospective cohort study of 33,130 non-dual-eligible Medicaid enrollees with type 2 diabetes. We randomly selected 90% of the cohort (training sample) to develop the prediction algorithm and used the remaining (testing sample) for validation. We applied random survival forests to identify predictors for hospitalization and fit survival trees to empirically derive adherence thresholds that best discriminate hospitalization risk, using the proportion of days covered (PDC). Time to first all-cause and diabetes-related hospitalization. The training and testing samples had similar characteristics (mean age, 48 y; 67% female; mean PDC=0.65). We identified 8 important predictors of all-cause hospitalizations (rank in order): prior hospitalizations/emergency department visit, number of prescriptions, diabetes complications, insulin use, PDC, number of prescribers, Elixhauser index, and eligibility category. The adherence thresholds most discriminating for risk of all-cause hospitalization varied from 46% to 94% according to patient health and medication complexity. PDC was not predictive of hospitalizations in the healthiest or most complex patient subgroups. Adherence thresholds most discriminating of hospitalization risk were not uniformly 80%. Machine-learning approaches may be valuable to identify appropriate patient-specific adherence thresholds for measuring quality of care and targeting nonadherent patients for intervention.
FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.
El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant
2016-01-01
A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational efforts needed for generating PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that random sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile and the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data as well as the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein-DNA interfaces.
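Uniform random subsampling of a large sequence database can be done in a single pass. The sketch keeps each FASTA record with probability 0.01, in the spirit of the reference-database reduction described above; the file name is a placeholder:

```python
# Sketch: draw a ~1% uniform random sample of records from a large FASTA file
# in one pass, deciding once per record at its header line.
import random

def sample_fasta(path, fraction=0.01, seed=10):
    rng = random.Random(seed)
    keep, kept = False, []
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):            # new record: decide once
                keep = rng.random() < fraction
            if keep:
                kept.append(line)
    return kept

subset = sample_fasta("uniref100.fasta")        # hypothetical file name
print(len(subset), "lines kept")
```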
Traffic signal inventory project
DOT National Transportation Integrated Search
2001-06-01
The purpose of this study was to determine the level of compliance with the "Manual on Uniform Traffic Control Devices" (MUTCD) and other industry standards of traffic signals on the Iowa state highway system. Signals were randomly selected in cities...
Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs
NASA Astrophysics Data System (ADS)
Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur
2018-03-01
A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d ≥ 2. Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
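The adsorption process itself is easy to simulate in d = 2. The sketch below deposits disks at uniform locations and accepts non-overlapping ones, approximating jamming with a fixed attempt budget; the radius and budget are illustrative:

```python
# Sketch: random sequential adsorption of congruent disks in the unit square.
# Disks arrive at uniform locations and are accepted only if they do not
# overlap a previously accepted disk.
import numpy as np

rng = np.random.default_rng(11)
radius, attempts = 0.03, 20_000
accepted = []

for _ in range(attempts):
    p = rng.uniform(0, 1, 2)
    if all(np.hypot(*(p - q)) >= 2 * radius for q in accepted):
        accepted.append(p)

covered = len(accepted) * np.pi * radius**2
print(f"{len(accepted)} disks accepted, area fraction ~ {covered:.3f}")
```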
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias
In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs when, with high probability: (i) the RGG is connected; (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph, or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
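The push protocol described here takes only a few lines to simulate. The sketch below is a hedged illustration assuming a connected graph supplied as an adjacency list; it counts the rounds until every node is informed.

```python
import random

def push_broadcast_rounds(adj, start=0, seed=0):
    """Rounds for the push protocol to inform all nodes: in each round,
    every informed node picks a uniformly random neighbor and informs it.
    Assumes the graph is connected (otherwise this loops forever)."""
    rng = random.Random(seed)
    informed = {start}
    rounds = 0
    while len(informed) < len(adj):
        newly = {rng.choice(adj[v]) for v in informed if adj[v]}
        informed |= newly
        rounds += 1
    return rounds

# tiny example: a 4-cycle, adjacency as node -> neighbor list
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(push_broadcast_rounds(adj))
```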
NASA Astrophysics Data System (ADS)
Allwood, D. A.; Perera, I. K.; Perkins, J.; Dyer, P. E.; Oldershaw, G. A.
1996-11-01
Highly uniform thin films of samples for matrix-assisted laser desorption/ionisation (MALDI) have been fabricated by depositing a saturated solution of ferulic acid onto a soda lime glass disc and crushing with polished aluminium, the films covering large areas of the substrate and having a thickness between 45 and 60 μm. The effects that different substrates and crushing materials, as well as sample concentration and sample recrystallisation, have on these films have been examined by scanning electron microscopy. Such films have been shown to have a lower threshold fluence for matrix ion detection than standard dried-droplet samples, the reduction being approximately 15% for three of the five matrices analysed. An explanation for this is proposed in terms of crushed samples possessing a greater average energy per unit volume coupled to them by the laser due to their improved surface uniformity. Furthermore, samples that are dried at refrigerated temperatures (˜2.25°C) are shown to have much improved macroscopic uniformity over samples dried at room temperature. Refrigerated and crushed MALDI samples yield analyte ions with good spot-to-spot and pulse-to-pulse reproducibility, and both preparation steps appear to improve the resolution of spectra obtained with a time-of-flight mass spectrometer.
Impact of Beamforming on the Path Connectivity in Cognitive Radio Ad Hoc Networks
Dung, Le The; Hieu, Tran Dinh; Choi, Seong-Gon; Kim, Byung-Seo; An, Beongku
2017-01-01
This paper investigates the impact of using directional antennas and beamforming schemes on the connectivity of cognitive radio ad hoc networks (CRAHNs). Specifically, considering that secondary users use two kinds of directional antennas, i.e., uniform linear array (ULA) and uniform circular array (UCA) antennas, and two different beamforming schemes, i.e., randomized beamforming and center-directed, to communicate with each other, we study the connectivity of all combination pairs of directional antennas and beamforming schemes and compare their performances to those of omnidirectional antennas. The results obtained in this paper show that, compared with omnidirectional transmission, beamforming transmission only benefits the connectivity when the density of secondary users is moderate. Moreover, the combination of UCA and the randomized beamforming scheme gives the highest path connectivity in all evaluated scenarios. Finally, the number of antenna elements and the degree of path loss greatly affect path connectivity in CRAHNs. PMID:28346377
Xie, Shouyi; Ouyang, Zi; Jia, Baohua; Gu, Min
2013-05-06
Metal nanowire networks are emerging as next-generation transparent electrodes for photovoltaic devices. We demonstrate the application of random silver nanowire networks as the top electrode on crystalline silicon wafer solar cells. The dependence of transmittance and sheet resistance on the surface coverage is measured. Superior optical and electrical properties are observed due to the large-size, highly-uniform nature of these networks. When applying the nanowire networks to the solar cells with an optimized two-step annealing process, we achieved an enhancement of up to 19% in the energy conversion efficiency. The detailed analysis reveals that the enhancement is mainly caused by the improved electrical properties of the solar cells due to the silver nanowire networks. Our results reveal that this is a promising alternative transparent electrode technology for crystalline silicon wafer solar cells.
Data splitting for artificial neural networks using SOM-based stratified sampling.
May, R J; Maier, H R; Dandy, G C
2010-03-01
Data splitting is an important consideration during artificial neural network (ANN) development, where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide the benchmark, achieving good model performance with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
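The general idea of stratified data splitting can be sketched compactly. The code below is a loose illustration, not the authors' method: it substitutes k-means clustering for the SOM and uses Neyman allocation (stratum size times within-stratum spread) to draw the training set. It assumes scikit-learn is available, and all names and parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def stratified_split(X, n_train, n_strata=16, seed=0):
    """Cluster the data (a stand-in for the SOM map) and draw the training
    set by Neyman allocation across clusters. Rounding means the total is
    only approximately n_train."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_strata, n_init=10,
                    random_state=seed).fit_predict(X)
    # Neyman weights: stratum size times within-stratum spread
    weights = np.array([(labels == h).sum() * X[labels == h].std()
                        for h in range(n_strata)])
    alloc = np.round(n_train * weights / weights.sum()).astype(int)
    train_idx = []
    for h in range(n_strata):
        members = np.flatnonzero(labels == h)
        take = min(alloc[h], members.size)
        train_idx.extend(rng.choice(members, size=take, replace=False))
    return np.array(train_idx)

X = np.random.default_rng(1).normal(size=(500, 3))
print(stratified_split(X, n_train=100).shape)
```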
NASA Astrophysics Data System (ADS)
Lu, Cheng-zhuang; Li, Jing-yuan; Fang, Zhi
2018-02-01
In ferritic stainless steels, significantly non-uniform recrystallization orientations and a substantial texture gradient usually occur, which can degrade the ridging resistance of the final sheets. To improve the homogeneity of the recrystallization orientation and reduce the texture gradient in ultra-purified 17%Cr ferritic stainless steel, in this work we performed conventional and asymmetric rolling processes and conducted macro- and micro-texture analyses to investigate texture evolution under different cold-rolling conditions. In the conventional rolling specimens, we observed that the deformation was not uniform in the thickness direction, whereas there was homogeneous shear deformation in the asymmetric rolling specimens, as well as the formation of uniformly recrystallized grains and randomly oriented grains in the final annealed sheets. As such, the ridging resistance of the final sheets was significantly improved by employing the asymmetric rolling process. This result indicates that the texture gradient and orientation inhomogeneity can be attributed to non-uniform deformation, whereas the uniform orientation gradient in the thickness direction is explained by the increased number of shear bands obtained in the asymmetric rolling process.
Meslot, Carine; Gauchet, Aurélie; Allenet, Benoît; François, Olivier; Hagger, Martin S.
2016-01-01
Interventions to assist individuals in initiating and maintaining regular participation in physical activity are not always effective. Psychological and behavioral theories advocate the importance of both motivation and volition in interventions to change health behavior. Interventions adopting self-regulation strategies that foster motivational and volitional components may, therefore, have utility in promoting regular physical activity participation. We tested the efficacy of an intervention adopting motivational (mental simulation) and volitional (implementation intentions) components to promote regular physical activity in two studies. Study 1 adopted a cluster randomized design in which participants (n = 92) were allocated to one of three conditions: mental simulation plus implementation intention, implementation intention only, or control. Study 2 adopted a 2 (mental simulation vs. no mental simulation) × 2 (implementation intention vs. no implementation intention) randomized controlled design in which fitness center attendees (n = 184) were randomly allocated to one of four conditions: mental simulation only, implementation intention only, combined, or control. Physical activity behavior was measured by self-report (Study 1) or fitness center attendance (Study 2) at 4- (Studies 1 and 2) and 19- (Study 2 only) week follow-up periods. Findings revealed no statistically significant main or interactive effects of the mental simulation and implementation intention conditions on physical activity outcomes in either study. Findings are in contrast to previous research, which has found pervasive effects for both intervention strategies. Findings are discussed in light of study limitations, including the relatively small sample sizes, particularly for Study 1, deviations in the operationalization of the intervention components from previous research, and the lack of a prompt for a goal intention. Future research should focus on ensuring uniformity in the format of the intervention components, test the effects of each component alone and in combination using standardized measures across multiple samples, and systematically explore the effects of candidate moderators. PMID:27899904
Non-Uniform Sampling and J-UNIO Automation for Efficient Protein NMR Structure Determination.
Didenko, Tatiana; Proudfoot, Andrew; Dutta, Samit Kumar; Serrano, Pedro; Wüthrich, Kurt
2015-08-24
High-resolution structure determination of small proteins in solution is one of the big assets of NMR spectroscopy in structural biology. Improvements in the efficiency of NMR structure determination through advances in NMR experiments and automation of data handling therefore attract continued interest. Here, non-uniform sampling (NUS) of 3D heteronuclear-resolved [(1)H,(1)H]-NOESY data yielded two- to three-fold savings of instrument time for structure determinations of soluble proteins. With the 152-residue protein NP_372339.1 from Staphylococcus aureus and the 71-residue protein NP_346341.1 from Streptococcus pneumoniae, we show that high-quality structures can be obtained with NUS NMR data, which are equally well amenable to robust automated analysis as the corresponding uniformly sampled data. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
MATHEMATICAL ROUTINES FOR ENGINEERS AND SCIENTISTS
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The purpose of this package is to provide the scientific and engineering community with a library of programs useful for performing routine mathematical manipulations. This collection of programs will enable scientists to concentrate on their work without having to write their own routines for solving common problems, thus saving considerable amounts of time. This package contains sixteen subroutines. Each is separately documented with descriptions of the invoking subroutine call, its required parameters, and a sample test program. The functions available include: maxima, minima, and sorting of vectors; factorials; a random number generator (uniform or Gaussian distribution); the complementary error function; the fast Fourier transform; Simpson's rule integration; matrix determinant and inversion; Bessel functions (the J Bessel function of any order, and the modified Bessel function of zero order); roots of a polynomial; roots of a non-linear equation; and the solution of first-order ordinary differential equations using Hamming's predictor-corrector method. There is also a subroutine for using a dot matrix printer to plot a given set of y values for a uniformly increasing x value. This package is written in FORTRAN 77 (Super Soft Small System FORTRAN compiler) for batch execution and has been implemented on the IBM PC computer series under MS-DOS with a central memory requirement of approximately 28K of 8-bit bytes for all subroutines. This program was developed in 1986.
NASA Astrophysics Data System (ADS)
Popov, S. M.; Butov, O. V.; Chamorovski, Y. K.; Isaev, V. A.; Mégret, P.; Korobko, D. A.; Zolotovskii, I. O.; Fotiadi, A. A.
2018-06-01
We report on random lasing observed with a 100-m-long fiber comprising an array of weak FBGs inscribed in the fiber core and uniformly distributed over the fiber length. Extended fluctuation-free oscilloscope traces highlight power dynamics typical of lasing. An additional piece of Er-doped fiber included in the laser cavity enables stable laser generation with a linewidth narrower than 10 kHz.
Origin of magnetization in lunar breccias - An example of thermal overprinting
NASA Technical Reports Server (NTRS)
Gose, W. A.; Strangway, D. W.; Pearce, G. W.
1978-01-01
Twenty-six samples from seven hand specimens, collected from the station 6 boulder at the Apollo 17 landing site, were studied magnetically. The boulder is a breccia consisting of three lithologic units distinguished by their clast population. The directions of magnetization of samples from unit B, which is almost devoid of large clasts, cluster fairly well after alternating field demagnetization. Samples from unit C, which is characterized by abundant large clasts up to 1 m in size, do not share a uniform direction of magnetization, but the distribution is not random. Based on these data, we propose that the natural remanent magnetization (NRM) in these breccias is the vector sum of two magnetizations: a pre-impact magnetization and a partial thermoremanence acquired during breccia formation. The relative contribution of the two components is controlled by the thermal history of the ejecta, which in turn is determined by its clast population. Depending on the clast population, the NRM can be a total thermoremanence, a partial thermoremanence plus a pre-impact magnetization, or a pre-impact magnetization alone. This model of thermal overprinting might be applicable to all lunar breccias of medium and higher metamorphic grade.
NASA Astrophysics Data System (ADS)
Abada, S.; Salvi, L.; Courson, R.; Daran, E.; Reig, B.; Doucet, J. B.; Camps, T.; Bardinal, V.
2017-05-01
A method called ‘soft thermal printing’ (STP) was developed to ensure the optimal transfer of 50 µm-thick dry epoxy resist films (DF-1050) on small-sized samples. The aim was the uniform fabrication of high aspect ratio polymer-based MOEMS (micro-optical-electrical-mechanical system) on small and/or fragile samples, such as GaAs. The printing conditions were optimized, and the resulting thickness uniformity profiles were compared to those obtained via lamination and SU-8 standard spin-coating. Under the best conditions tested, STP and lamination produced similar results, with a maximum deviation to the central thickness of 3% along the sample surface, compared to greater than 40% for SU-8 spin-coating. Both methods were successfully applied to the collective fabrication of DF1050-based MOEMS designed for the dynamic focusing of VCSELs (vertical-cavity surface-emitting lasers). Similar, efficient electro-thermo-mechanical behaviour was obtained in both cases.
NASA Astrophysics Data System (ADS)
Quarles, C. A.; Sheffield, Thomas; Stacy, Scott; Yang, Chun
2009-03-01
The uniformity of rubber-carbon black composite materials has been investigated with positron Doppler broadening spectroscopy (DBS). The number of grams of carbon black (CB) mixed into one hundred grams of rubber (phr) is used to characterize a sample. A typical concentration for rubber in tires is 50 phr. The S parameter measured by DBS has been found to depend on the phr of the sample as well as on the type of rubber and carbon black. The variation in carbon black concentration within a surface area of about 5 mm diameter can be measured by moving a standard Na-22 or Ge-68 positron source over an extended sample. The precision of the concentration measurement depends on the dwell time at a point on the sample. The time required to determine uniformity over an extended sample can be reduced by running at a much higher counting rate than is typical in DBS and correcting for the systematic variation of the S parameter with counting rate. Variation in CB concentration with mixing time at the level of about 0.5% has been observed.
Jiang, Hao; Zhang, Min; Mujumdar, Arun S; Lim, Rui-Xin
2014-07-01
To overcome the flaws of the high energy consumption of freeze drying (FD) and the non-uniform drying of microwave freeze drying (MFD), pulse-spouted microwave vacuum drying (PSMVD) was developed. The results showed that the drying time can be dramatically shortened if microwave is used as the heating source: in this experiment, both MFD and PSMVD shortened drying time by 50% as compared to the FD process. Depending on the heating method, MFD and PSMVD dried banana cubes showed trends of expansion, while FD dried samples demonstrated trends of shrinkage. Shrinkage also produced a denser structure and the highest fracturability among the samples dried by the three methods. The residual ascorbic acid content of PSMVD dried samples can be as high as in FD dried samples, both being superior to MFD dried samples. The tests confirmed that PSMVD achieves better drying uniformity than MFD. Besides, compared with traditional MFD, PSMVD can provide a better extrinsic appearance and improved nutritional quality because of the higher residual ascorbic acid content. © 2013 Society of Chemical Industry.
Assessment of the LV-C2 Stack Sampling Probe Location for Compliance with ANSI/HPS N13.1-1999
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glissmeyer, John A.; Antonio, Ernest J.; Flaherty, Julia E.
2015-09-01
This document reports on a series of tests conducted to assess the proposed air sampling location for the Hanford Tank Waste Treatment and Immobilization Plant (WTP) Low-Activity Waste (LAW) C2V (LV-C2) exhaust stack with respect to the applicable criteria regarding the placement of an air sampling probe. Federal regulations require that a sampling probe be located in the exhaust stack according to the criteria established by the American National Standards Institute/Health Physics Society (ANSI/HPS) N13.1-1999, Sampling and Monitoring Releases of Airborne Radioactive Substances from the Stack and Ducts of Nuclear Facilities. These criteria address the capability of the sampling probe to extract a sample that represents the effluent stream. The tests were conducted on the LV-C2 scale model system. Based on the scale model tests, the location proposed for the air sampling probe in the scale model stack meets the requirements of the ANSI/HPS N13.1-1999 standard for velocity uniformity, flow angle, gas tracer uniformity, and particle tracer uniformity. Additional velocity uniformity and flow angle tests on the actual stack will be necessary during cold startup to confirm the validity of the scale model results in representing the actual stack.
Handling nonresponse in surveys: analytic corrections compared with converting nonresponders.
Jenkins, Paul; Earle-Richardson, Giulia; Burdick, Patrick; May, John
2008-02-01
A large health survey was combined with a simulation study to contrast the reduction in bias achieved by double sampling versus two weighting methods based on propensity scores. The survey used a census of one New York county and double sampling in six others. Propensity scores were modeled as a logistic function of demographic variables and were used in conjunction with a random uniform variate to simulate response in the census. These data were used to estimate the prevalence of chronic disease in a population whose parameters were defined as values from the census. Significant (p < 0.0001) predictors in the logistic function included multiple (vs. single) occupancy (odds ratio (OR) = 1.3), bank card ownership (OR = 2.1), gender (OR = 1.5), home ownership (OR = 1.3), head of household's age (OR = 1.4), and income >$18,000 (OR = 0.8). The model likelihood ratio chi-square was significant (p < 0.0001), with the area under the receiver operating characteristic curve = 0.59. Double-sampling estimates were marginally closer to population values than those from either weighting method. However, the variance was also greater (p < 0.01). The reduction in bias for point estimation from double sampling may be more than offset by the increased variance associated with this method.
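The response-simulation mechanism, a logistic propensity score compared against a random uniform variate, can be stated in a few lines. The sketch below illustrates that mechanism only; the covariates are hypothetical and the coefficients are loosely adapted from the odds ratios quoted above.

```python
import numpy as np

def simulate_response(X, beta, seed=0):
    """Simulate survey response: each unit responds when a uniform
    random variate falls below its logistic propensity score."""
    rng = np.random.default_rng(seed)
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))   # propensity scores
    u = rng.uniform(size=len(p))            # random uniform variates
    return u < p                            # True = responder

# hypothetical covariates: intercept, bank card ownership, home ownership
X = np.column_stack([np.ones(5), [0, 1, 1, 0, 1], [1, 1, 0, 0, 1]])
beta = np.array([-0.5, np.log(2.1), np.log(1.3)])  # log odds ratios
print(simulate_response(X, beta))
```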
Plasma assisted synthesis of vanadium pentoxide nanoplates
NASA Astrophysics Data System (ADS)
Singh, Megha; Sharma, Rabindar Kumar; Kumar, Prabhat; Reddy, G. B.
2015-08-01
In this work, we report the growth of α-V2O5 (orthorhombic) nanoplates on a glass substrate using a plasma assisted sublimation process (PASP) with nickel as a catalyst. A 100 nm-thick film of Ni is deposited over the glass substrate by thermal evaporation. Vanadium oxide nanoplates are deposited by treating vanadium metal foil with oxygen plasma under high-vacuum conditions, the foil being kept at a fixed temperature for the growth of V2O5 nanoplates to take place. The grown samples have been studied using XPS, XRD and HRTEM, which confirmed the growth of pure single-crystal α-V2O5 in the orthorhombic phase. Surface morphological studies using SEM and TEM show a nanostructured thin film in the form of plates. Uniform, vertically aligned but randomly oriented V2O5 nanoplates have been deposited.
Gravitational Effects on Closed-Cellular-Foam Microstructure
NASA Technical Reports Server (NTRS)
Noever, David A.; Cronise, Raymond J.; Wessling, Francis C.; McMannus, Samuel P.; Mathews, John; Patel, Darayas
1996-01-01
Polyurethane foam has been produced in low gravity for the first time. The cause and distribution of different void or pore sizes are elucidated from direct comparison of unit-gravity and low-gravity samples. Low gravity is found to increase the pore roundness by 17% and reduce the void size by 50%. The standard deviation for pores becomes narrower (a more homogeneous foam is produced) in low gravity. Both a Gaussian and a Weibull model fail to describe the statistical distribution of void areas, and hence the governing dynamics do not combine small voids in either a uniform or a dependent fashion to make larger voids. Instead, the void areas follow an exponential law, which effectively randomizes the production of void sizes in a nondependent fashion consistent more with single nucleation than with multiple or combining events.
Motility of Escherichia coli in a quasi-two-dimensional porous medium.
Sosa-Hernández, Juan Eduardo; Santillán, Moisés; Santana-Solano, Jesús
2017-03-01
Bacterial migration through confined spaces is critical for several phenomena, such as biofilm formation, bacterial transport in soils, and bacterial therapy against cancer. In the present work, E. coli (strain K12-MG1655 WT) motility was characterized by recording and analyzing individual bacterium trajectories in a simulated quasi-two-dimensional porous medium. The porous medium was simulated by enclosing, between slide and cover slip, a bacterial-culture sample mixed with uniform 2.98-μm-diameter spherical latex particles. The porosity of the medium was controlled by changing the latex particle concentration. By statistically analyzing several trajectory parameters (instantaneous velocity, turn angle, mean squared displacement, etc.), and contrasting with the results of a random-walk model developed ad hoc, we were able to quantify the effects that different obstacle concentrations have upon bacterial motility.
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterparts, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
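The defining criterion can be checked numerically in a simple case. The sketch below, an illustration rather than the paper's derivation, grid-searches for the simple normal-mean alternative that maximizes the probability that the Bayes factor exceeds a threshold γ; for this model with σ = 1 the optimum works out analytically to √(2 ln γ / n), which the simulation should approximately recover. All parameter values are arbitrary.

```python
import numpy as np

# Normal mean, sigma = 1, H0: mu = 0. Find the simple alternative mu1
# that maximizes P(BF > gamma). The data-generating mean is arbitrary:
# the maximizing mu1 does not depend on it for this model.
rng = np.random.default_rng(0)
n, gamma, mu_true = 20, 10.0, 0.3

xbar = rng.normal(mu_true, 1 / np.sqrt(n), size=50_000)  # common draws

def p_bf_exceeds(mu1):
    log_bf = n * mu1 * xbar - n * mu1**2 / 2   # log BF of N(mu1,1) vs N(0,1)
    return np.mean(log_bf > np.log(gamma))

grid = np.linspace(0.05, 1.5, 60)
best = grid[np.argmax([p_bf_exceeds(m) for m in grid])]
print(best, np.sqrt(2 * np.log(gamma) / n))   # simulated vs analytic optimum
```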
Layer uniformity in glucose oxidase immobilization on SiO2 surfaces
NASA Astrophysics Data System (ADS)
Libertino, Sebania; Scandurra, Antonino; Aiello, Venera; Giannazzo, Filippo; Sinatra, Fulvia; Renis, Marcella; Fichera, Manuela
2007-09-01
The goal of this work was the characterization, step by step, of the immobilization of the enzyme glucose oxidase (GOx) on silicon oxide surfaces, mainly by means of X-ray photoelectron spectroscopy (XPS). The immobilization protocol consists of four steps: oxide activation, silanization, linker molecule deposition and GOx immobilization. The linker molecule, glutaraldehyde (GA) in this study, must be able to form a uniform layer on the sample surface in order to maximize the sites available for enzyme bonding and achieve the best enzyme deposition. Using a thin SiO2 layer grown on Si wafers and following the XPS Si2p signal of the Si substrate during the immobilization steps, we demonstrated both the uniformity of the glutaraldehyde layer and the possibility of using XPS to monitor thin-layer uniformity. In fact, the XPS substrate signal, not shielded by the oxide, is suppressed only when a uniform layer is deposited. Correct immobilization of the enzyme was monitored using the XPS C1s and N1s signals. Atomic force microscopy (AFM) measurements carried out on the same samples confirmed the results.
Bulk, rare earth, and other trace elements in Apollo 14 and 15 and Luna 16 samples.
NASA Technical Reports Server (NTRS)
Laul, J. C.; Wakita, H.; Showalter, D. L.; Boynton, W. V.; Schmitt, R. A.
1972-01-01
Measurement of 24 and 34 bulk, minor, and trace elements in lunar specimens by instrumental and radiochemical neutron activation analysis shows greater Al2O3, Na2O, and K2O abundances and stronger TiO2, FeO, MnO and Cr2O3 depletions in Apollo 14 soil samples as compared to Apollo 11 samples and to most of the Apollo 12 samples. The uniform abundances in 14230 core tube soils and three other Apollo 14 soils indicate that the regolith is uniform to at least 22 cm depth and within about 200 m of the lunar module.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epstein, R.; Skupsky, S.
1990-08-01
The uniformity of focused laser beams that have been modified with randomly phased distributed phase plates (C. B. Burckhardt, Appl. Opt. 9, 695 (1970); Kato and Mima, Appl. Phys. B 29, 186 (1982); Kato et al., Phys. Rev. Lett. 53, 1057 (1984); LLE Rev. 33, 1 (1987)) can be improved further by constructing patterns of phase elements that minimize phase correlations over small separations. Long-wavelength nonuniformities in the intensity distribution, which are relatively difficult to overcome in the target by thermal smoothing and in the laser by, e.g., spectral dispersion (Skupsky et al., J. Appl. Phys. 66, 3456 (1989); LLE Rev. 36, 158 (1989); 37, 29 (1989); 37, 40 (1989)), result largely from short-range phase correlations between phase plate elements. To reduce the long-wavelength structure, we have constructed phase patterns with smaller short-range correlations than would occur randomly. Calculations show that long-wavelength nonuniformities in single-beam intensity patterns can be reduced with these masks when the intrinsic phase error of the beam falls below certain limits. We show the effect of this improvement on uniformity for spherical irradiation by a multibeam system.
Evaluating a Two-Step Approach to Sexual Risk Reduction in a Publicly-Funded STI Clinic
Carey, Michael P.; Vanable, Peter A.; Senn, Theresa E.; Coury-Doniger, Patricia; Urban, Marguerite A.
2008-01-01
Background Sexually transmitted infection (STI) clinics provide an opportune setting for HIV prevention efforts. This randomized controlled trial evaluated a unique, two-step approach to sexual risk reduction at a publicly-funded STI clinic. Methods During an initial visit, patients completed an audio-computer assisted self-interview (ACASI), were randomized to and received one of two brief interventions, obtained medical care, and completed a post-assessment. Next, two-thirds of the patients were assigned to attend an intensive sexual risk reduction workshop. At 3, 6, and 12 months, patients completed additional ACASIs and provided urine specimens to assess behavior change and incident STIs. Results During a 28-month interval, 5613 patients were screened, 2691 were eligible, and 1483 consented to participate and were randomized; the modal reason for declining was lack of time (82%). Consenting patients included 688 women and 795 men; 64% of participants were African-American. The sample was low-income with 57% reporting an annual income of less than $15,000; most participants (62%) had a high school education or less, and 51% were unemployed. Sexual risk behavior was common, as indicated by multiple sexual partners (mean = 32.8, lifetime; mean = 2.8, past 3 months), unprotected sex (mean = 17.3 episodes, past 3 months), and prior STIs (mean = 3.3, lifetime; 23% at baseline). Bivariate analyses confirmed our prediction that HIV-related motivation and behavioral skills would be related to current sexual risk behavior. All patients received a brief intervention; patient satisfaction ratings were uniformly high for both interventions (all means ≥ 3.7 on 4-point scales). Fifty-six percent of invited patients attended the intensive workshop, and attendance did not differ as a function of brief intervention. Patient satisfaction ratings were also uniformly positive for the workshop interventions (all means ≥ 3.6). Return to follow-up assessments exceeded 70%. Conclusions Results demonstrate that implementing an HIV preventive program in a busy, public clinic is feasible and well-accepted by patients. Ongoing evaluation will determine if the interventions reduce sexual risk behavior and lower incident STIs. PMID:18325853
Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2006-01-01
The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well-separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods have proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
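Of the techniques listed, parallel tempering is perhaps the easiest to illustrate compactly. The following sketch is a bare-bones version for a one-dimensional bimodal target; the temperatures, step sizes, and swap schedule are arbitrary choices, not recommendations.

```python
import math, random

def parallel_tempering(logp, temps, steps=5000, seed=0):
    """Parallel tempering: independent Metropolis walkers at several
    temperatures, with occasional replica swaps accepted by a Metropolis
    test. temps[0] should be 1.0 so the first chain targets logp itself."""
    rng = random.Random(seed)
    x = [0.0] * len(temps)
    samples = []
    for step in range(steps):
        for i, T in enumerate(temps):           # local Metropolis moves
            prop = x[i] + rng.gauss(0, 1)
            if math.log(rng.random() + 1e-300) < (logp(prop) - logp(x[i])) / T:
                x[i] = prop
        if step % 10 == 0:                      # attempt a replica swap
            i = rng.randrange(len(temps) - 1)
            d = (1/temps[i] - 1/temps[i+1]) * (logp(x[i+1]) - logp(x[i]))
            if math.log(rng.random() + 1e-300) < d:
                x[i], x[i+1] = x[i+1], x[i]
        samples.append(x[0])                    # record the coldest chain
    return samples

# bimodal target: two well-separated modes that trap plain Metropolis
logp = lambda z: math.log(math.exp(-(z - 5)**2) + math.exp(-(z + 5)**2))
draws = parallel_tempering(logp, temps=[1.0, 3.0, 9.0])
```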
Methods and analysis of realizing randomized grouping.
Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi
2011-07-01
Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: completely randomized, stratified randomized and dynamically randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization, and the realization of random sampling and grouping with SAS software.
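As a concrete illustration of the first category, a minimal sketch of completely randomized grouping follows. The article itself works in SAS; this Python stand-in shuffles the full sample once and then deals subjects into groups of nearly equal size.

```python
import random

def complete_randomization(subjects, n_groups=2, seed=42):
    """Completely randomized grouping: shuffle the whole sample once,
    then deal subjects into groups of (nearly) equal size."""
    rng = random.Random(seed)
    order = subjects[:]
    rng.shuffle(order)
    return {g: order[g::n_groups] for g in range(n_groups)}

print(complete_randomization(list(range(1, 13)), n_groups=3))
```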
In Darwinian evolution, feedback from natural selection leads to biased mutations.
Caporale, Lynn Helena; Doyle, John
2013-12-01
Natural selection provides feedback through which information about the environment and its recurring challenges is captured, inherited, and accumulated within genomes in the form of variations that contribute to survival. The variation upon which natural selection acts is generally described as "random." Yet evidence has been mounting for decades, from such phenomena as mutation hotspots, horizontal gene transfer, and highly mutable repetitive sequences, that variation is far from the simplifying idealization of random processes as white (uniform in space and time and independent of the environment or context). This paper focuses on what is known about the generation and control of mutational variation, emphasizing that it is not uniform across the genome or in time, not unstructured with respect to survival, and is neither memoryless nor independent of the (also far from white) environment. We suggest that, as opposed to frequentist methods, Bayesian analysis could capture the evolution of nonuniform probabilities of distinct classes of mutation, and argue not only that the locations, styles, and timing of real mutations are not correctly modeled as generated by a white noise random process, but that such a process would be inconsistent with evolutionary theory. © 2013 New York Academy of Sciences.
Scaling of Device Variability and Subthreshold Swing in Ballistic Carbon Nanotube Transistors
NASA Astrophysics Data System (ADS)
Cao, Qing; Tersoff, Jerry; Han, Shu-Jen; Penumatcha, Ashish V.
2015-08-01
In field-effect transistors, the inherent randomness of dopants and other charges is a major cause of device-to-device variability. For a quasi-one-dimensional device such as carbon nanotube transistors, even a single charge can drastically change the performance, making this a critical issue for their adoption as a practical technology. Here we calculate the effect of the random charges at the gate-oxide surface in ballistic carbon nanotube transistors, finding good agreement with the variability statistics in recent experiments. A combination of experimental and simulation results further reveals that these random charges are also a major factor limiting the subthreshold swing for nanotube transistors fabricated on thin gate dielectrics. We then establish that the scaling of the nanotube device uniformity with the gate dielectric, fixed-charge density, and device dimension is qualitatively different from conventional silicon transistors, reflecting the very different device physics of a ballistic transistor with a quasi-one-dimensional channel. The combination of gate-oxide scaling and improved control of fixed-charge density should provide the uniformity needed for large-scale integration of such novel one-dimensional transistors even at extremely scaled device dimensions.
Kansas Adult Observational Safety Belt Usage Rates
DOT National Transportation Integrated Search
2011-07-01
Methodology of Adult Survey - based on the federal guidelines in the Uniform Criteria manual. The Kansas survey is performed at 548 sites on 6 different road types in 20 randomly selected counties which encompass 85% of the population of Kansas. The ...
Visual search by chimpanzees (Pan): assessment of controlling relations.
Tomonaga, M
1995-01-01
Three experimentally sophisticated chimpanzees (Pan), Akira, Chloe, and Ai, were trained on visual search performance using a modified multiple-alternative matching-to-sample task in which a sample stimulus was followed by the search display containing one target identical to the sample and several uniform distractors (i.e., negative comparison stimuli were identical to each other). After they acquired this task, they were tested for transfer of visual search performance to trials in which the sample was not followed by the uniform search display (odd-item search). Akira showed positive transfer of visual search performance to odd-item search even when the display size (the number of stimulus items in the search display) was small, whereas Chloe and Ai showed a transfer only when the display size was large. Chloe and Ai used some nonrelational cues such as perceptual isolation of the target among uniform distractors (so-called pop-out). In addition to the odd-item search test, various types of probe trials were presented to clarify the controlling relations in multiple-alternative matching to sample. Akira showed a decrement of accuracy as a function of the display size when the search display was nonuniform (i.e., each "distractor" stimulus was not the same), whereas Chloe and Ai showed perfect performance. Furthermore, when the sample was identical to the uniform distractors in the search display, Chloe and Ai never selected an odd-item target, but Akira selected it when the display size was large. These results indicated that Akira's behavior was controlled mainly by relational cues of target-distractor oddity, whereas an identity relation between the sample and the target strongly controlled the performance of Chloe and Ai. PMID:7714449
Dissolved oxygen as an indicator of bioavailable dissolved organic carbon in groundwater.
Chapelle, Francis H; Bradley, Paul M; McMahon, Peter B; Kaiser, Karl; Benner, Ron
2012-01-01
Concentrations of dissolved oxygen (DO) plotted vs. dissolved organic carbon (DOC) in groundwater samples taken from a coastal plain aquifer of South Carolina (SC) showed a statistically significant hyperbolic relationship. In contrast, DO-DOC plots of groundwater samples taken from the eastern San Joaquin Valley of California (CA) showed a random scatter. It was hypothesized that differences in the bioavailability of naturally occurring DOC might contribute to these observations. This hypothesis was examined by comparing nine different biochemical indicators of DOC bioavailability in groundwater sampled from these two systems. Concentrations of DOC, total hydrolysable neutral sugars (THNS), total hydrolysable amino acids (THAA), mole% glycine of THAA, initial bacterial cell counts, bacterial growth rates, and carbon dioxide production/consumption were greater in SC samples relative to CA samples. In contrast, the mole% glucose of THNS and the aromaticity (SUVA(254)) of DOC was greater in CA samples. Each of these indicator parameters were observed to change with depth in the SC system in a manner consistent with active biodegradation. These results are uniformly consistent with the hypothesis that the bioavailability of DOC is greater in SC relative to CA groundwater samples. This, in turn, suggests that the presence/absence of a hyperbolic DO-DOC relationship may be a qualitative indicator of relative DOC bioavailability in groundwater systems. Ground Water © 2011, National Ground Water Association. Published 2011. This article is a U.S. Government work and is in the public domain in the USA.
NASA Technical Reports Server (NTRS)
Uribe, Roberto M.; Filppi, Ed; Zhang, Shubo
2007-01-01
It is common to have liquid crystal displays and electronic circuit boards with area sizes of the order of 20x20 sq cm on board satellites and space vehicles. The radiation damage in these types of devices is usually assessed by irradiating them at different fluence values. As a result, there is a need for a radiation source with large spatial fluence uniformity for the study of the damage caused by space radiation in those devices. Kent State University's Program on Electron Beam Technology has access to an electron accelerator used for both research and industrial applications. The electron accelerator produces electrons with energies in the interval from 1 to 5 MeV and a maximum beam power of 150 kW. At such high power levels, the electron beam is continuously scanned back and forth in one dimension in order to provide uniform irradiation and to prevent damage to the sample. This allows for the uniform irradiation of samples with an area of up to 1.32 sq m. This accelerator has been used in the past for the study of radiation damage in solar cells (1). However, in order to irradiate extended-area solar cells there was a need to measure the uniformity of the irradiation zone in terms of fluence. In this paper the methodology to measure the fluence uniformity on a sample handling system (linear motion system), used for the irradiation of research samples, along the irradiation zone of the above-mentioned facility is described and the results presented. We also illustrate the use of the electron accelerator for the irradiation of large-area solar cells (of the order of 156 sq cm) and include the electrical characterization of these types of solar cells irradiated with 5 MeV electrons to a total fluence of 2.6 x 10^15 e/sq cm.
Vaughn, Meagan F; Funkhouser, Sheana Whelan; Lin, Feng-Chang; Fine, Jason; Juliano, Jonathan J; Apperson, Charles S; Meshnick, Steven R
2014-05-01
Because of frequent exposure to tick habitats, outdoor workers are at high risk for tick-borne diseases. Adherence to National Institute for Occupational Safety and Health-recommended tick bite prevention methods is poor. A factory-based method for permethrin impregnation of clothing that provides long-lasting insecticidal and repellent activity is commercially available, and studies are needed to assess the long-term effectiveness of this clothing under field conditions. To evaluate the protective effectiveness of long-lasting permethrin impregnated uniforms among a cohort of North Carolina outdoor workers. A double-blind RCT was conducted between March 2011 and September 2012. Subjects included outdoor workers from North Carolina State Divisions of Forestry, Parks and Recreation, and Wildlife who worked in eastern or central North Carolina. A total of 159 volunteer subjects were randomized, and 127 and 101 subjects completed the first and second years of follow-up, respectively. Uniforms of participants in the treatment group were factory-impregnated with long-lasting permethrin whereas control group uniforms received a sham treatment. Participants continued to engage in their usual tick bite prevention activities. Incidence of work-related tick bites reported on weekly tick bite logs. Study subjects reported 1,045 work-related tick bites over 5,251 person-weeks of follow-up. The mean number of reported tick bites in the year prior to enrollment was similar for both the treatment and control groups, but markedly different during the study period. In our analysis conducted in 2013, the effectiveness of long-lasting permethrin impregnated uniforms for the prevention of work-related tick bites was 0.82 (95% CI=0.66, 0.91) and 0.34 (95% CI=-0.67, 0.74) for the first and second years of follow-up. These results indicate that long-lasting permethrin impregnated uniforms are highly effective for at least 1 year in deterring tick bites in the context of typical tick bite prevention measures employed by outdoor workers. Copyright © 2014 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
A new approach to importance sampling for the simulation of false alarms. [in radar systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1987-01-01
In this paper a modified importance sampling technique for improving the convergence of importance sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum variance unbiased estimator is obtained. For the Gaussian distribution, the estimator in this approach is uniformly better than that of the previously known importance sampling approach. For a cell-averaging system, by combining this technique with group sampling, the reduction in Monte Carlo runs for a reference cell of 20 and a false alarm rate of 1E-6 is on the order of 170 as compared to the previously known importance sampling approach.
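The basic payoff of importance sampling at such low false alarm rates is easy to demonstrate with a generic example. The sketch below uses textbook exponential tilting to estimate a roughly 1E-6 tail probability; it is not the paper's modified estimator, and the tilt parameter is a common heuristic choice.

```python
import numpy as np

# Importance sampling of a low false-alarm rate P(X > t) for X ~ Exp(1),
# where plain Monte Carlo would need ~1e8 runs to see ~100 exceedances.
rng = np.random.default_rng(0)
t, n = 13.8, 100_000                     # true P(X > t) = exp(-t) ~ 1e-6

theta = 1.0 / t                          # tilted proposal g = Exp(theta)
x = rng.exponential(1.0 / theta, size=n) # draws concentrated in the tail
w = np.exp(-(1.0 - theta) * x) / theta   # likelihood ratio f(x)/g(x)
est = np.mean(w * (x > t))

print(est, np.exp(-t))                   # IS estimate vs exact value
```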
Quasirandom geometric networks from low-discrepancy sequences
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2017-08-01
We define quasirandom geometric networks using low-discrepancy sequences, such as Halton, Sobol, and Niederreiter. The networks are built in d dimensions by considering the d-tuples of digits generated by these sequences as the coordinates of the vertices of the networks in the d-dimensional unit hypercube I^d. Then, two vertices are connected by an edge if they are at a distance smaller than a connection radius. We investigate computationally 11 network-theoretic properties of two-dimensional quasirandom networks and compare them with analogous random geometric networks. We also study their degree distribution and their spectral density distributions. We conclude from this intensive computational study that, in terms of the uniformity of the distribution of the vertices in the unit square, the quasirandom networks look more random than the random geometric networks. We include an analysis of potential strategies for generating higher-dimensional quasirandom networks, where it is known that some of the low-discrepancy sequences are highly correlated. In this respect, we conclude that up to dimension 20, the use of scrambling, skipping and leaping strategies generates quasirandom networks with the desired properties of uniformity. Finally, we consider a diffusive process taking place on the nodes and edges of the quasirandom and random geometric graphs. We show that the diffusion time is shorter in the quasirandom graphs as a consequence of their larger structural homogeneity. In the random geometric graphs the diffusion produces clusters of concentration that slow the process down. Such clusters are a direct consequence of the heterogeneous and irregular distribution of the nodes in the unit square on which the generation of random geometric graphs is based.
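The construction itself, low-discrepancy points plus a connection radius, is straightforward to reproduce. The sketch below builds a two-dimensional quasirandom geometric network from a Halton sequence and a comparable random geometric network from i.i.d. uniform points; it assumes SciPy and NetworkX are available, and the size and radius are arbitrary.

```python
import numpy as np
import networkx as nx
from scipy.stats import qmc

def geometric_graph(points, radius):
    """Connect two vertices when their Euclidean distance < radius."""
    g = nx.Graph()
    g.add_nodes_from(range(len(points)))
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if np.linalg.norm(points[i] - points[j]) < radius:
                g.add_edge(i, j)
    return g

n, r = 500, 0.08
halton = qmc.Halton(d=2, scramble=False).random(n)   # low-discrepancy points
uniform = np.random.default_rng(0).random((n, 2))    # i.i.d. uniform points

for name, pts in [("quasirandom", halton), ("random", uniform)]:
    g = geometric_graph(pts, r)
    print(name, g.number_of_edges(), nx.average_clustering(g))
```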
Concurrent Tumor Segmentation and Registration with Uncertainty-based Sparse non-Uniform Graphs
Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos
2014-01-01
In this paper, we present a graph-based concurrent brain tumor segmentation and atlas-to-diseased-patient registration framework. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced model complexity. PMID:24717540
A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components
NASA Technical Reports Server (NTRS)
Abernethy, K.
1986-01-01
The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo simulation study to evaluate the usefulness of the Weibull methods using samples with a very small number of failures and extensive censoring are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs that are used in the SSME Weibull analysis are described. The previously documented methods were supplemented by adding computer calculations of approximate (using iterative methods) confidence intervals for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, along with the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
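The random censoring model described above is simple to generate. The sketch below draws Weibull failure times, censors them with uniform times, and reports how many actual failures survive censoring; the parameter values are illustrative only.

```python
import numpy as np

def censored_weibull_sample(n, shape, scale, cmax, seed=0):
    """Generate a randomly censored Weibull sample: failure times are
    Weibull(shape, scale); censoring times are Uniform(0, cmax); we
    observe the minimum plus an indicator of actual failure."""
    rng = np.random.default_rng(seed)
    failure = scale * rng.weibull(shape, size=n)
    censor = rng.uniform(0, cmax, size=n)
    time = np.minimum(failure, censor)
    observed = failure <= censor            # True where a real failure
    return time, observed

t, d = censored_weibull_sample(30, shape=1.5, scale=100.0, cmax=120.0)
print(d.sum(), "failures out of", len(t))   # heavy censoring, few failures
```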
[Kriging estimation and its simulated sampling of Chilo suppressalis population density].
Yuan, Zheming; Bai, Lianyang; Wang, Kuiwu; Hu, Xiangyue
2004-07-01
In order to draw up a rational sampling plan for the larval population of Chilo suppressalis, an original population and its two derivative populations, a random population and a sequence population, were sampled and compared using random sampling, gap-range-random sampling, and a new systematic sampling scheme that integrates Kriging interpolation with a random origin position. For the original population, whose distribution was aggregative with a dependence range of 115 cm (6.9 units) in the line direction, gap-range-random sampling in the line direction was more precise than random sampling. Distinguishing the population pattern correctly is the key to obtaining good precision. Gap-range-random sampling and random sampling are suited to aggregated and random populations, respectively, but both are difficult to apply in practice. Therefore, a new systematic sampling scheme, termed the Kriging sample (n = 441), was developed to estimate the density of a partial sample (partial estimation, n = 441) and of the population (overall estimation, N = 1500). For the original population, the estimation precision of the Kriging sample for both the partial sample and the population was better than that of the investigation sample. As the aggregation intensity of the population increased, the Kriging sample was more effective than the investigation sample in both partial and overall estimation at the appropriate sampling gap according to the dependence range.
Asynchronous signal-dependent non-uniform sampler
NASA Astrophysics Data System (ADS)
Can-Cimino, Azime; Chaparro, Luis F.; Sejdić, Ervin
2014-05-01
Analog sparse signals resulting from biomedical and sensing network applications are typically non-stationary with frequency-varying spectra. By ignoring that the maximum frequency of their spectra is changing, uniform sampling of sparse signals collects unnecessary samples in quiescent segments of the signal. A more appropriate sampling approach would be signal-dependent. Moreover, in many of these applications power consumption and analog processing are issues of great importance that need to be considered. In this paper we present a signal-dependent non-uniform sampler that uses a Modified Asynchronous Sigma Delta Modulator, which consumes low power and can be processed using analog procedures. Using prolate spheroidal wave functions (PSWFs), interpolation of the original signal is performed, thus giving an asynchronous analog-to-digital and digital-to-analog conversion. Stable solutions are obtained by using modulated PSWFs. The advantage of the adapted asynchronous sampler is that the range of frequencies of the sparse signal is taken into account, avoiding aliasing. Moreover, it requires saving only the zero-crossing times of the non-uniform samples, or their differences, and the reconstruction can be done using their quantized values and a PSWF-based interpolation. The range of frequencies analyzed can be changed, and the sampler can be implemented as a bank of filters for an unknown range of frequencies. The performance of the proposed algorithm is illustrated with an electroencephalogram (EEG) signal.
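As a rough intuition for how a signal-dependent sampler concentrates samples in active segments, the toy sketch below implements a simple send-on-delta (level-crossing) rule. It is a deliberately simplified stand-in for illustration, not the Modified Asynchronous Sigma Delta Modulator of the paper.

```python
import numpy as np

def level_crossing_times(t, x, delta=0.1):
    """Toy signal-dependent sampler: record a time stamp whenever the
    signal moves by more than `delta` from the last recorded value, so
    active segments are sampled densely and quiescent ones sparsely."""
    times, last = [t[0]], x[0]
    for ti, xi in zip(t[1:], x[1:]):
        if abs(xi - last) >= delta:
            times.append(ti)
            last = xi
    return np.array(times)

# active first half, nearly quiescent second half
t = np.linspace(0, 1, 5000)
x = np.where(t < 0.5, np.sin(2*np.pi*5*t), 0.05*np.sin(2*np.pi*1*t))
stamps = level_crossing_times(t, x)
print(len(stamps), "non-uniform samples instead of", len(t))
```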
Effects of fixture rotation on coating uniformity for high-performance optical filter fabrication
NASA Astrophysics Data System (ADS)
Rubin, Binyamin; George, Jason; Singhal, Riju
2018-04-01
Coating uniformity is critical in fabricating high-performance optical filters by various vacuum deposition methods. Simple and planetary rotation systems with shadow masks are used to achieve the required uniformity [J. B. Oliver and D. Talbot, Appl. Optics 45, 13, 3097 (2006); O. Lyngnes, K. Kraus, A. Ode and T. Erguder, in `Method for Designing Coating Thickness Uniformity Shadow Masks for Deposition Systems with a Planetary Fixture', 2014 Technical Conference Proceedings, Optical Coatings, August 13, 2014, DOI: 10.14332/svc14.proc.1817.]. In this work, we discuss the effect of rotation pattern and speed on thickness uniformity in an ion beam sputter deposition system. Numerical modeling is used to determine the statistical distribution of random thickness errors in coating layers. The relationship between thickness tolerance and production yield is simulated theoretically and demonstrated experimentally. Production yields for different optical filters produced in an ion beam deposition system with planetary rotation are presented. Single-wavelength and broadband optical monitoring systems were used for endpoint monitoring during filter deposition. Limitations on the thickness tolerances that can be achieved in systems with planetary rotation are shown. Paths for improving production yield in an ion beam deposition system are described.
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-08-31
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data is subject to an upper frequency bound. To reduce the upload frequency, most existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they cannot ensure that the upload frequency stays within the bound. Some traditional sampling-based approaches can control the upload frequency directly; however, they usually incur a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the upload frequency limitation. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. We then propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms: an adaptive sampling probability algorithm that computes sampling probabilities for different sensed values, and a multiple uniform sampling algorithm that provides uniform sampling for values in different intervals. Experiments based on a real dataset show that the proposed approach performs better in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion gives the underlying reasons for the high performance of the proposed approach.
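The abstract does not spell out the ASIC algorithms, so the following is only a minimal sketch of the general idea of value-dependent sampling probabilities under an upload budget; the names adaptive_sample, n_bins, and max_upload_fraction are hypothetical and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_sample(values, n_bins, max_upload_fraction):
    """Keep each reading with probability inversely proportional to how common
    its value bin is, so rare (information-rich) values are kept preferentially,
    then rescale so the expected upload rate respects the frequency budget."""
    counts, edges = np.histogram(values, bins=n_bins)
    bins = np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)
    p = 1.0 / np.maximum(counts[bins], 1)             # rarer bin -> higher probability
    p *= max_upload_fraction * len(values) / p.sum()  # enforce the upload budget
    keep = rng.random(len(values)) < np.minimum(p, 1.0)
    return values[keep]

readings = rng.normal(70, 5, 2000)  # e.g., heart-rate-like sensor stream
uploaded = adaptive_sample(readings, n_bins=20, max_upload_fraction=0.1)
print(len(uploaded), "of", len(readings), "readings uploaded")
```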
Ultra-accelerated natural sunlight exposure testing
Jorgensen, Gary J.; Bingham, Carl; Goggin, Rita; Lewandowski, Allan A.; Netter, Judy C.
2000-06-13
Process and apparatus for providing ultra-accelerated natural sunlight exposure testing of samples under controlled weathering, without introducing unrealistic failure mechanisms in exposed materials and without breaking reciprocity relationships between flux exposure levels and cumulative dose, that includes multiple concurrent levels of temperature and relative humidity at high levels of natural sunlight, comprising: a) concentrating solar flux uniformly; b) directing the controlled uniform sunlight onto sample materials in a chamber enclosing multiple concurrent levels of temperature and relative humidity, to allow the sample materials to be subjected to accelerated irradiance exposure factors for a sufficient period of time in days to provide at least about a year's worth of representative weathering of the sample materials.
Korobov, A
2009-03-01
Discrete random tessellations appear not infrequently in describing nucleation and growth transformations. Generally, several non-Euclidean metrics are possible in this case. Previously [A. Korobov, Phys. Rev. B 76, 085430 (2007)] continual analogs of such tessellations have been studied. Here one of the simplest discrete varieties of the Kolmogorov-Johnson-Mehl-Avrami model, namely, the model with von Neumann neighborhoods, has been examined per se, i.e., without continualization. The tessellation is uniform in the sense that domain boundaries consist of tiles. Similarities and distinctions between discrete and continual models are discussed.
Canceling the momentum in a phase-shifting algorithm to eliminate spatially uniform errors.
Hibino, Kenichi; Kim, Yangjin
2016-08-10
In phase-shifting interferometry, phase modulation nonlinearity causes both spatially uniform and nonuniform errors in the measured phase. Conventional linear-detuning error-compensating algorithms only eliminate the spatially variable error component. The uniform error is proportional to the inertial momentum of the data-sampling weight of a phase-shifting algorithm. This paper proposes a design approach to cancel the momentum by using characteristic polynomials in the Z-transform space and shows that an arbitrary M-frame algorithm can be modified to a new (M+2)-frame algorithm that acquires new symmetry to eliminate the uniform error.
Systematic versus random sampling in stereological studies.
West, Mark J
2012-12-01
The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways to make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample is to pick a card at random from within a set number of cards at the start of the deck and then take further cards at equal intervals through the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
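The card analogy maps directly onto index selection. Below is a minimal sketch contrasting the two schemes for a "deck" of 52 items; the function names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def independent_random_sample(n_items, n_samples):
    """'Pick any card at all': each item is chosen without reference to the others."""
    return np.sort(rng.choice(n_items, size=n_samples, replace=False))

def systematic_sample(n_items, n_samples):
    """'Pick one card at random near the top, then every k-th card after it':
    one random start followed by equal intervals through the deck."""
    k = n_items // n_samples
    start = rng.integers(k)
    return np.arange(start, n_items, k)[:n_samples]

print(independent_random_sample(52, 5))
print(systematic_sample(52, 5))
```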
Weighted re-randomization tests for minimization with unbalanced allocation.
Han, Baoguang; Yu, Menggang; McEntegart, Damian
2013-01-01
The re-randomization test has been considered a robust alternative to traditional population-model-based methods for analyzing randomized clinical trials. This is especially so when the trials are randomized according to minimization, a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among the various re-randomization tests, the fixed-entry-order re-randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and its power is thus compromised. We find that the bias is due to non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated, including in the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Eckstein, M. P.; Ahumada, A. J. Jr; Watson, A. B.
1997-01-01
Studies of visual detection of a signal superimposed on one of two identical backgrounds show performance degradation when the background has high contrast and is similar in spatial frequency and/or orientation to the signal. To account for this finding, models include a contrast gain control mechanism that pools activity across spatial frequency, orientation, and space to inhibit (divisively) the response of the receptor sensitive to the signal. In tasks in which the observer has to detect a known signal added to one of M different backgrounds degraded by added visual noise, the main sources of degradation are the stochastic noise in the image and suboptimal visual processing. We investigate how these two sources of degradation (contrast gain control and variations in the background) interact in a task in which the signal is embedded in one of M locations in a complex, spatially varying background (structured background). We use backgrounds extracted from patient digital medical images. To isolate effects of the fixed deterministic background (the contrast gain control) from the effects of the background variations, we conduct detection experiments with three different background conditions: (1) a uniform background, (2) a repeated sample of structured background, and (3) different samples of structured background. Results show that human visual detection degrades from the uniform background condition to the repeated background condition and degrades even further in the different backgrounds condition. These results suggest that both the contrast gain control mechanism and random background variations degrade human performance in detection of a signal in a complex, spatially varying background. A filter model and added white noise are used to generate estimates of sampling efficiencies, an equivalent internal noise, an equivalent contrast-gain-control-induced noise, and an equivalent noise due to the variations in the structured background.
Hall, Damien
2010-03-15
Observations of the motion of individual molecules in the membrane of a number of different cell types have led to the suggestion that the outer membrane of many eukaryotic cells may be effectively partitioned into microdomains. A major cause of this suggested partitioning is believed to be due to the direct/indirect association of the cytosolic face of the cell membrane with the cortical cytoskeleton. Such intimate association is thought to introduce effective hydrodynamic barriers into the membrane that are capable of frustrating molecular Brownian motion over distance scales greater than the average size of the compartment. To date, the standard analytical method for deducing compartment characteristics has relied on observing the random walk behavior of a labeled lipid or protein at various temporal frequencies and different total lengths of time. Simple theoretical arguments suggest that the presence of restrictive barriers imparts a characteristic turnover to a plot of mean squared displacement versus sampling period that can be interpreted to yield the average dimensions of the compartment expressed as the respective side lengths of a rectangle. In the following series of articles, we used computer simulation methods to investigate how well the conventional analytical strategy coped with heterogeneity in size, shape, and barrier permeability of the cell membrane compartments. We also explored questions relating to the necessary extent of sampling required (with regard to both the recorded time of a single trajectory and the number of trajectories included in the measurement bin) for faithful representation of the actual distribution of compartment sizes found using the SPT technique. In the current investigation, we turned our attention to the analytical characterization of diffusion through cell membrane compartments having both a uniform size and permeability. For this ideal case, we found that (i) an optimum sampling time interval existed for the analysis and (ii) the total length of time for which a trajectory was recorded was a key factor. Copyright (c) 2009 Elsevier Inc. All rights reserved.
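The turnover in the mean-squared-displacement curve that this analysis exploits can be reproduced with a toy simulation. The sketch below is an illustration under stated assumptions, not the authors' simulation code: it confines a 2D random walk to square compartments whose walls are crossed only with a small hop probability, then computes MSD as a function of lag.

```python
import numpy as np

rng = np.random.default_rng(7)

def confined_walk(n_steps, step_sd, side, hop_prob=0.05):
    """2D random walk inside square compartments of width `side`; a step that
    would cross a compartment wall succeeds only with probability `hop_prob`."""
    pos = np.zeros((n_steps, 2))
    for i in range(1, n_steps):
        trial = pos[i - 1] + rng.normal(0.0, step_sd, 2)
        crossed = np.floor(trial / side) != np.floor(pos[i - 1] / side)
        if crossed.any() and rng.random() > hop_prob:
            trial = pos[i - 1]            # barrier frustrates the step
        pos[i] = trial
    return pos

def msd(pos, max_lag):
    """Mean squared displacement as a function of lag (sampling period)."""
    return np.array([np.mean(np.sum((pos[lag:] - pos[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag)])

traj = confined_walk(20000, step_sd=5.0, side=100.0)
curve = msd(traj, max_lag=200)  # the turnover in this curve reveals compartment size
```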
Explicit equilibria in a kinetic model of gambling
NASA Astrophysics Data System (ADS)
Bassetti, F.; Toscani, G.
2010-06-01
We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of the wealths of two agents is up for gambling and is randomly shared between the agents. For this equation the analytical form of the steady states is found for various realizations of the random fraction of the sum that is shared among the agents. Among others, the exponential distribution appears as the steady state in the case of a uniformly distributed random fraction, while a Gamma distribution appears for a random fraction that is Beta distributed. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy-tailed distribution.
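A direct Monte Carlo version of this pure gambling process is straightforward. The sketch below (illustrative only, with a uniformly distributed random fraction) should relax toward the exponential steady state noted above, with mean and standard deviation both near the initial average wealth.

```python
import numpy as np

rng = np.random.default_rng(3)

def gamble(wealth, n_rounds):
    """Pure gambling: two random agents pool their wealth, and a uniform
    random fraction of the pool goes to the first agent."""
    n = len(wealth)
    for _ in range(n_rounds):
        i, j = rng.choice(n, size=2, replace=False)
        pool = wealth[i] + wealth[j]
        share = rng.uniform()  # uniformly distributed random fraction
        wealth[i], wealth[j] = share * pool, (1.0 - share) * pool
    return wealth

w = gamble(np.ones(10000), n_rounds=200000)
# For a uniform fraction the steady state is exponential: mean and std both near 1.
print(w.mean(), w.std())
```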
Experimental and numerical modeling research of rubber material during microwave heating process
NASA Astrophysics Data System (ADS)
Chen, Hailong; Li, Tao; Li, Kunling; Li, Qingling
2018-05-01
This paper investigates the heating behavior of block rubber by experimental and numerical methods. The COMSOL Multiphysics 5.0 software was utilized for the numerical simulation work. The effects of microwave frequency, power, and sample size on the temperature distribution are examined. The effect of frequency on the temperature distribution is pronounced: the maximum and minimum temperatures of the block rubber first increase and then decrease with increasing frequency. The microwave heating efficiency is highest at a frequency of 2450 MHz, although more uniform temperature distributions are obtained at other frequencies. The influence of microwave power on the temperature distribution is also notable: the smaller the power, the more uniform the temperature distribution in the block rubber, while the effect of power on heating efficiency is slight. The effect of sample size is evident as well: the smaller the sample, the more uniform the temperature distribution, but also the lower the microwave heating efficiency. The results can serve as a reference for research on heating rubber materials by microwave technology.
Reproducible direct exposure environmental testing of metal-based magnetic media
NASA Technical Reports Server (NTRS)
Sides, Paul J.
1994-01-01
A flow geometry and flow rate for mixed flowing gas testing is proposed. Use of an impinging jet of humid polluted air can provide a uniform and reproducible exposure of coupons of metal-based magnetic media. Numerical analysis of the fluid flow and mass transfer in such a system has shown that samples confined within a distance equal to the nozzle radius on the surface of impingement are uniformly accessible to pollutants in the impinging gas phase. The critical factor is the nozzle height above the surface of impingement. In particular, the exposure non-uniformity is less than ±2% for a volumetric flow rate of 1600 cm(exp 3)/minute total flow with the following specifications: for a one-inch nozzle, the height of the nozzle opening above the stage should be 0.177 inches; for a two-inch nozzle, 0.390 inches. Not only is the distribution uniform, but one can calculate the maximum delivery rate of pollutants to the samples for comparison with the observed deterioration.
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and the correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform-distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
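The standard exact construction of such pairs transforms two independent standard normals; the cited FORTRAN routine may differ in detail, but the sketch below shows the idea in Python.

```python
import numpy as np

rng = np.random.default_rng(5)

def bivariate_normal_pair(mu1, mu2, sd1, sd2, rho, size):
    """Exact transformation of independent standard normals into a pair with
    the requested means, standard deviations, and correlation coefficient."""
    z1 = rng.standard_normal(size)
    z2 = rng.standard_normal(size)
    x = mu1 + sd1 * z1
    y = mu2 + sd2 * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
    return x, y

x, y = bivariate_normal_pair(0.0, 10.0, 1.0, 2.0, rho=0.7, size=100000)
print(np.corrcoef(x, y)[0, 1])  # should be close to 0.7
```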
Ranking and clustering of nodes in networks with smart teleportation
NASA Astrophysics Data System (ADS)
Lambiotte, R.; Rosvall, M.
2012-05-01
Random teleportation is a necessary evil for ranking and clustering directed networks based on random walks. Teleportation enables ergodic solutions, but the solutions must necessarily depend on the exact implementation and parametrization of the teleportation. For example, in the commonly used PageRank algorithm, the teleportation rate must trade off a heavily biased solution against a uniform solution. Here we show that teleportation to links rather than nodes enables a much smoother trade-off and effectively more robust results. We also show that, by not recording the teleportation steps of the random walker, we can further reduce the effect of teleportation, with dramatic effects on clustering.
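For reference, the sketch below implements plain PageRank power iteration with a teleportation vector that points either uniformly to nodes or, in one simple reading of teleporting to links, to link targets in proportion to in-degree; the flag name to_links is hypothetical and the dangling-node handling is deliberately crude.

```python
import numpy as np

def pagerank(adj, alpha=0.85, to_links=False, n_iter=100):
    """Power iteration for PageRank with a configurable teleportation vector."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                      # crude fix: dangling rows stay zero
    P = adj / out                            # row-stochastic transition matrix
    # Teleport uniformly over nodes, or to link targets (in-degree weighted).
    v = adj.sum(axis=0) / adj.sum() if to_links else np.full(n, 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = alpha * (r @ P) + (1.0 - alpha) * v
    return r / r.sum()

A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(pagerank(A))                 # teleportation to nodes
print(pagerank(A, to_links=True))  # teleportation to links
```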
Uniform deposition of size-selected clusters using Lissajous scanning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beniya, Atsushi; Watanabe, Yoshihide, E-mail: e0827@mosk.tytlabs.co.jp; Hirata, Hirohito
2016-05-15
Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster scanning method. The authors further investigated CO adsorption on size-selected Pt_n (n = 7, 15, 20) clusters uniformly deposited on the Al2O3/NiAl(110) surface and demonstrated the importance of uniform deposition.
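The scanning idea is easy to check numerically. The following sketch (illustrative parameters, not the authors' hardware values) drives two triangular deflection waves at an irrational frequency ratio and measures how uniformly the resulting Lissajous trajectory fills the scan area.

```python
import numpy as np

def triangle(t, freq):
    """Unit-amplitude triangular wave of the given frequency."""
    return 2.0 * np.abs(2.0 * ((t * freq) % 1.0) - 1.0) - 1.0

t = np.linspace(0.0, 200.0, 500000)
ratio = np.sqrt(2.0)        # irrational frequency ratio -> non-repeating pattern
x = triangle(t, 1.0)        # horizontal deflection
y = triangle(t, ratio)      # vertical deflection

# Histogram of beam positions: a small relative spread indicates uniform coverage.
H, _, _ = np.histogram2d(x, y, bins=20)
print(H.std() / H.mean())
```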
Thin films with disordered nanohole patterns for solar radiation absorbers
NASA Astrophysics Data System (ADS)
Fang, Xing; Lou, Minhan; Bao, Hua; Zhao, C. Y.
2015-06-01
The radiation absorption in thin films with three disordered nanohole patterns, i.e., random position, non-uniform radius, and amorphous pattern, is numerically investigated by finite-difference time-domain (FDTD) simulations. Disorder can alter the absorption spectra and has an impact on the broadband absorption performance. Compared to random-position and non-uniform-radius nanoholes, the amorphous pattern induces much better integrated absorption. The power density spectra indicate that amorphous-pattern nanoholes reduce the symmetry and provide more resonance modes, which are desired for broadband absorption. The application conditions for amorphous-pattern nanoholes show that they are most appropriate for absorption enhancement in weakly absorbing materials. Amorphous silicon thin films with disordered nanohole patterns are applied in solar radiation absorbers. Four configurations of thin films with different nanohole patterns show that interference between layers in absorbers changes the absorption performance. It is therefore necessary to optimize the whole radiation absorber even though a single thin film with amorphous-pattern nanoholes has reached optimal absorption.
Pattern-projected schlieren imaging method using a diffractive optics element
NASA Astrophysics Data System (ADS)
Min, Gihyeon; Lee, Byung-Tak; Kim, Nac Woo; Lee, Munseob
2018-04-01
We propose a novel schlieren imaging method that projects a random dot pattern generated in a light source module containing a diffractive optical element. All apparatus is located on the source side, which enables one-body sensor applications. The pattern is distorted by the deflections of schlieren objects, so the displacement vectors of the random dots can be obtained using a particle image velocimetry (PIV) algorithm. The air turbulences induced by a burning candle, a boiling pot, a heater, and a gas torch were successfully imaged, and imaging up to a size of 0.7 m × 0.57 m was shown to be possible. An algorithm to correct the non-uniform sensitivity according to the position of a schlieren object was analytically derived and applied to schlieren images of lenses. Compared with the original schlieren images, the corrected versions showed sensitivity uniformity improved by a factor of 14.15 on average.
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Molin, S.
2012-02-01
We present a methodology that combines numerical simulations of groundwater flow and advective transport in heterogeneous porous media with analytical retention models for computing the infection risk probability from pathogens in aquifers. The methodology is based on the analytical results presented in [1,2] for utilising colloid filtration theory in a time-domain random walk (TDRW) framework. It is shown that in uniform flow, the numerical simulations of advection yield results comparable to those of the analytical TDRW model for generating advection segments. It is also shown that spatial variability of the attachment rate may be significant; however, it appears to affect risk differently depending on whether the flow is uniform or radially converging. Although numerous issues remain open regarding pathogen transport in aquifers at the field scale, the methodology presented here may be useful for screening purposes and may also serve as a basis for future studies that would include greater complexity.
Intergovernmental Data Quality Task Force Uniform Federal Policy for Quality Assurance Project Plans: Evaluating, Assessing, and Documenting
2004-07-01
[Fragment] The recovered text lists project roles (sampler, project manager, data reviewer, statistician, risk assessor, assessment personnel, and laboratory QC manager) and required QC-plan elements, including the corrective actions to be taken if a QC sample fails its acceptance criteria and a description of how the QC data and results are to be documented.
Spatially-resolved HPGe Gamma-ray Spectroscopy of Swipe Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, Benjamin S.; VanDevender, Brent A.; Wood, Lynn S.
Measurement of swipe samples is a critical element of the National Technical Nuclear Forensics (NTNF) mission. A unique, portable germanium gamma imager (GeGI-s) from PHDS Co may provide information complementary to current techniques for swipe sample screening. The GeGI-s is a modified version of the commercial GeGI-4, a planar HPGe detector, capable of several million counts per second across the whole detector. The GeGI-s detector is a prototype of a commercial off-the-shelf high-rate GeGI. The high-rate capability allows high-activity samples to be placed directly on the face of the detector. Utilizing the high energy resolution and pixelization of the detector, the GeGI-s can generate isotope-specific spatial maps of the materials on the swipe sample. To prove this technology is viable for such mapping, the GeGI-s detector response to spatially distributed events must be well characterized. The detection efficiency as a function of location has been characterized to understand the non-uniformities presented as a collimated photon beam was rastered vertically and horizontally across the face of the detector. The response was found to be primarily uniform and symmetric; however, two causes of non-uniformity were found. Both can ultimately be corrected for in off-line data analysis.
Strong profiling is not mathematically optimal for discovering rare malfeasors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Press, William H
2008-01-01
In a large population of individuals labeled j = 1, 2, ..., N, governments attempt to find the rare malfeasor j = j* (a terrorist, for example) by making use of priors p_j that estimate the probability of individual j being a malfeasor. Societal resources for secondary random screening, such as airport search or police investigation, are concentrated against individuals with the largest priors. They call this 'strong profiling' if the concentration is at least proportional to p_j for the largest values. Strong profiling often results in higher-probability, but otherwise innocent, individuals being repeatedly subjected to screening. They show here that, entirely apart from considerations of social policy, strong profiling is not mathematically optimal at finding malfeasors. Even if prior probabilities were accurate, their optimal use would be roughly as the geometric mean between strong profiling and a completely uniform sampling of the population.
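A toy calculation makes the result concrete. If screenings are drawn independently from a sampling distribution q, the expected number of screenings needed to catch a malfeasor drawn from the prior p is the sum of p_j / q_j over j. The sketch below (hypothetical Pareto-distributed priors, not the paper's model) shows that strong profiling (q proportional to p) does no better than uniform sampling, while square-root weighting, roughly the geometric mean of the two, beats both.

```python
import numpy as np

rng = np.random.default_rng(11)

# Skewed hypothetical priors of being the malfeasor.
p = rng.pareto(1.5, size=1000) + 1e-9
p /= p.sum()

def expected_screenings(p_prior, p_sample):
    """E[number of independent random screenings until the malfeasor, drawn
    from p_prior, is selected] when screening follows p_sample."""
    return float(np.sum(p_prior / p_sample))

uniform = np.full_like(p, 1.0 / len(p))
strong = p.copy()                                   # screening proportional to prior
sqrt_law = np.sqrt(p); sqrt_law /= sqrt_law.sum()   # square-root (geometric-mean) rule

# Uniform and strong profiling both give exactly N; the sqrt rule gives less.
for name, q in [("uniform", uniform), ("strong", strong), ("sqrt", sqrt_law)]:
    print(name, expected_screenings(p, q))
```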
Micromagnetic modeling of the shielding properties of nanoscale ferromagnetic layers
NASA Astrophysics Data System (ADS)
Iskandarova, I. M.; Knizhnik, A. A.; Popkov, A. F.; Potapkin, B. V.; Stainer, Q.; Lombard, L.; Mackay, K.
2016-09-01
Ferromagnetic shields are widely used to concentrate magnetic fields in a target region of space. Such shields are also used in spintronic nanodevices such as magnetic random access memory and magnetic logic devices. However, the shielding properties of nanostructured shields can differ considerably from those of macroscopic samples. In this work, we investigate the shielding properties of nanostructured NiFe layers around a current line using a finite element micromagnetic model. We find that thin ferromagnetic layers demonstrate saturation of magnetization under an external magnetic field, which reduces the shielding efficiency. Moreover, we show that the shielding properties of nanoscale ferromagnetic layers strongly depend on the uniformity of the layer thickness. Magnetic anisotropy in ultrathin ferromagnetic layers can also influence their shielding efficiency. In addition, we show that domain walls in nanoscale ferromagnetic shields can induce large increases and decreases in the generated magnetic field. Therefore, ferromagnetic shields for spintronic nanodevices require careful design and precise fabrication.
Survey of Usual Practice: Dysphagia Therapy in Head & Neck Cancer Patients
Krisciunas, Gintas P.; Sokoloff, William; Stepas, Katherine; Langmore, Susan E.
2012-01-01
There is no standardized dysphagia therapy for head and neck cancer patients and scant evidence to support any particular protocol, leaving institutions and individual speech language pathologists (SLPs) to determine their own protocols based on “typical” practices or anecdotal evidence. To gain an understanding of current usual practices, a national internet-based survey was developed and disseminated to SLPs who treat HNC patients. From a random sample of 4,000 ASHA SID 13 members, 1,931 fit the inclusion criteria, and 759 complete responses were recorded for a 39.3% response rate. Results were analyzed by institution type as well as by individual clinical experience. While some interesting trends emerged from the data, a lack of uniformity and consensus regarding best practices was apparent. This is undoubtedly due to a paucity of research adequately addressing the efficacy of any one therapy for dysphagia in the HNC population. PMID:22456699
Nursing contributions to chronic disease management in primary care.
Lukewich, Julia; Edge, Dana S; VanDenKerkhof, Elizabeth; Tranmer, Joan
2014-02-01
As the prevalence of chronic diseases continues to increase, emphasis is being placed on the development of primary care strategies that enhance healthcare delivery. Innovations include interprofessional healthcare teams and chronic disease management strategies. The aim of this study was to determine the roles of nurses working in primary care settings in Ontario and the extent to which chronic disease management strategies have been implemented. We conducted a cross-sectional survey of a random sample of primary care nurses, including registered practical nurses, registered nurses, and nurse practitioners, in Ontario between May and July 2011. Nurses in primary care reported engaging in chronic disease management activities, but to different extents depending on their regulatory designation (licensure category). Chronic disease management strategy implementation was not uniform across the primary care practices where the nurses worked. There is potential to optimize and standardize the nursing role within primary care and improve the implementation of chronic disease management strategies.
Research on sparse feature matching of improved RANSAC algorithm
NASA Astrophysics Data System (ADS)
Kong, Xiangsi; Zhao, Xian
2018-04-01
In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. First, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by generating SIFT feature descriptors. Finally, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three respects: instead of the homography matrix, the fundamental matrix generated by the 8-point algorithm is used as the model; the sample is selected by a random block selection method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on top of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy but also greatly reduces computation and improves matching speed.
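For orientation, a generic RANSAC loop is sketched below on the simpler problem of robust 2D line fitting; the paper's version instead fits a fundamental matrix with the 8-point algorithm, selects samples block-wise, and adds an SPRT early exit, none of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def ransac_line(points, n_iter=500, tol=1.0):
    """Generic RANSAC skeleton: repeatedly fit a minimal model (here a 2D
    line through two random points) and keep the one with the most inliers."""
    best_inliers, best_model = 0, None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(*d)
        if norm == 0:
            continue
        nvec = np.array([-d[1], d[0]]) / norm   # unit normal to the candidate line
        dist = np.abs((points - p) @ nvec)      # point-to-line distances
        inliers = np.count_nonzero(dist < tol)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (p, nvec)
    return best_model, best_inliers

x = np.linspace(0, 10, 100)
pts = np.column_stack([x, 2 * x]) + rng.normal(0, 0.1, (100, 2))
pts[::10] += rng.uniform(-5, 5, (10, 2))        # inject outliers
model, score = ransac_line(pts, tol=0.5)
print(score, "inliers of", len(pts))
```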
Subramanian, Sundarraman
2008-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Q. Y.; Fu, Ricky K. Y.; Chu, Paul K.
2009-08-10
The implantation energy and retained dose uniformity in enhanced glow discharge plasma immersion ion implantation (EGD-PIII) is investigated numerically and experimentally. Depth profiles obtained from different samples processed by EGD-PIII and traditional PIII are compared. The retained doses under different pulse widths are calculated by integrating the area under the depth profiles. Our results indicate that the improvement in the impact energy and retained dose uniformity by this technique is remarkable.
Static Scene Statistical Non-Uniformity Correction
2015-03-01
[Fragment] The recovered front matter defines the acronyms NUC (Non-Uniformity Correction), RMSE (Root Mean Squared Error), RSD (Relative Standard Deviation), and S3NUC (Static Scene Statistical Non-Uniformity Correction). The RSD normalizes the standard deviation σ to the mean estimated value µ via RSD = (σ/µ) × 100. The RSD plot of the gain estimates shows that after a sample size of approximately 10, the different photocount values and the inclusion ...
Effect of Uniform Design on the Speed of Combat Tourniquet Application: A Simulation Study.
Higgs, Andrew R; Maughon, Michael J; Ruland, Robert T; Reade, Michael C
2016-08-01
Tourniquets are issued to deployed members of both the United States (U.S.) military and the Australian Defence Force (ADF). The ease of removing the tourniquet from the pocket of the combat uniform may influence its time to application. The ADF uniform uses buttons to secure the pocket, whereas the U.S. uniform uses a hook and loop fastener system. National differences in training may also influence the time to and effectiveness of tourniquet application. The objectives were to compare the time taken to retrieve and apply a tourniquet from the pocket of the Australian and the U.S. combat uniform and to compare the effectiveness of tourniquet application. Twenty participants from both nations were randomly selected. Participants were timed on their ability to remove a tourniquet from their pockets and then apply it effectively. The U.S. personnel removed their tourniquets in a shorter time (median 2.5 seconds) than the Australians (median 5.72 seconds, p < 0.0001). ADF members applied the tourniquet more rapidly once it was removed from the pocket (mean 41.36 seconds vs. 58.87 seconds, p = 0.037) and trended toward applying it more effectively (p = 0.1). The closure system of pockets on the combat uniform might influence the time taken to apply a tourniquet. Regular training might also reduce the time taken to apply a tourniquet effectively. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
NASA Astrophysics Data System (ADS)
Boning, Duane S.; Chung, James E.
1998-11-01
Advanced process technology will require more detailed understanding and tighter control of variation in devices and interconnects. The purpose of statistical metrology is to provide methods to measure and characterize variation, to model systematic and random components of that variation, and to understand the impact of variation on both the yield and the performance of advanced circuits. Of particular concern are spatial or pattern dependencies within individual chips; such systematic within-chip variation can have a much larger impact on performance than wafer-level random variation. Statistical metrology methods will play an important role in the creation of design rules for advanced technologies. For example, a key issue in multilayer interconnect is the uniformity of interlevel dielectric (ILD) thickness within the chip. For the case of ILD thickness, we describe phases of statistical metrology development and application to understanding and modeling thickness variation arising from chemical-mechanical polishing (CMP). These phases include screening experiments, including the design of test structures and test masks to gather electrical or optical data; techniques for statistical decomposition and analysis of the data; and approaches to calibrating empirical and physical variation models. These models can be integrated with circuit CAD tools to evaluate different process integration or design rule strategies. One focus for the generation of interconnect design rules is guidelines for the use of "dummy fill" or "metal fill" to improve the uniformity of underlying metal density and thus improve the uniformity of oxide thickness within the die. Trade-offs that can be evaluated via statistical metrology include the achievable improvement in uniformity versus the effect of increased capacitance due to the additional metal.
Assessment of the Revised 3410 Building Filtered Exhaust Stack Sampling Probe Location
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Xiao-Ying; Recknagle, Kurtis P.; Glissmeyer, John A.
2013-12-01
In order to support the air emissions permit for the 3410 Building, Pacific Northwest National Laboratory performed a series of tests on the exhaust air discharge from the reconfigured 3410 Building Filtered Exhaust Stack. The objective was to determine whether the location of the air sampling probe for emissions monitoring meets the applicable regulatory criteria governing such effluent monitoring systems. In particular, the capability of the air sampling probe location to meet the acceptance criteria of ANSI/HPS N13.1-2011, Sampling and Monitoring Releases of Airborne Radioactive Substances from the Stack and Ducts of Nuclear Facilities, was determined. The qualification criteria for these types of stacks address 1) uniformity of air velocity, 2) sufficiently small flow angle with respect to the axis of the duct, 3) uniformity of tracer gas concentration, and 4) uniformity of tracer particle concentration. Testing was performed to conform to the quality requirements of NQA-1-2000. Fan configurations tested included all combinations of any two fans at a time. Most of the tests were conducted at the normal flow rate, while a small subset of tests was performed at a slightly higher flow rate achieved with the laboratory hood sashes fully open. The qualification criteria for an air monitoring probe location are taken from ANSI/HPS N13.1-2011 and are paraphrased as follows, with key results summarized:
1. Angular Flow—The average air velocity angle must not deviate from the axis of the stack or duct by more than 20°. Test results show that the mean flow angles at the center two-thirds of the ducts are smaller than 4.5° for all testing conditions.
2. Uniform Air Velocity—The acceptance criterion is that the COV of the air velocity must be ≤20% across the center two-thirds of the area of the stack. The COVs of the air velocity across the center two-thirds of the stack are smaller than 2.9% for all testing conditions.
3. Uniform Concentration of Tracer Gases—The uniformity of the concentration of potential contaminants is first tested using a tracer gas to represent gaseous effluents. The tracer is injected downstream of the fan outlets and at the junction where the downstream fan discharges meet. The acceptance criteria are that 1) the COV of the measured tracer gas concentration is ≤20% across the center two-thirds of the sampling plane and 2) at no point in the sampling plane does the concentration vary from the mean by >30%. Test results show that 1) the COV of the measured tracer gas concentration is <2.9% for all test conditions and 2) at no point in the sampling plane does the concentration vary from the mean by >6.5%.
4. Uniform Concentration of Tracer Particles—Tracer particles of 10-μm aerodynamic diameter are used for the second demonstration of concentration uniformity. The acceptance criterion is that the COV of particle concentration is ≤20% across the center two-thirds of the sampling plane. Test results indicate that the COV of particle concentration is <9.9% across the center two-thirds of the sampling plane for all testing conditions.
Thus, the reconfigured 3410 Building Filtered Exhaust Stack was determined to meet the qualification criteria given in the ANSI/HPS N13.1-2011 standard. Changes to the system configuration or operations outside the bounds described in this report (e.g., exhaust stack velocity changes, relocation of the sampling probe, and addition of fans) may require re-testing or re-evaluation to determine compliance.
Single-walled carbon nanotubes coated with ZnO by atomic layer deposition
NASA Astrophysics Data System (ADS)
Pal, Partha P.; Gilshteyn, Evgenia; Jiang, Hua; Timmermans, Marina; Kaskela, Antti; Tolochko, Oleg V.; Kurochkin, Alexey V.; Karppinen, Maarit; Nisula, Mikko; Kauppinen, Esko I.; Nasibulin, Albert G.
2016-12-01
The possibility of ZnO deposition on the surface of single-walled carbon nanotubes (SWCNTs) by an atomic layer deposition (ALD) technique was successfully demonstrated. The use of pristine SWCNTs as a support resulted in non-uniform deposition of ZnO in the form of nanoparticles. To achieve a uniform ZnO coating, the SWCNTs first needed to be functionalized by treating the samples in a controlled ozone atmosphere. The uniformly ZnO-coated SWCNTs were used to fabricate UV sensing devices. UV irradiation of the ZnO-coated samples switched their behaviour from hydrophobic to hydrophilic. Furthermore, thin films of the ZnO-coated SWCNTs allowed us to switch p-type field-effect transistors made of pristine SWCNTs to ambipolar characteristics.
Wu, Bulong; Luo, Xiaobing; Zheng, Huai; Liu, Sheng
2011-11-21
Gold wire bonding is an important packaging process for light-emitting diodes (LEDs). In this work, we studied the effect of gold wire bonding on the angular uniformity of correlated color temperature (CCT) in white LEDs whose phosphor layers were coated by a freely dispersed coating process. The experimental study indicated that gold wire bonding affects the geometry of the phosphor layer, which results in different fluctuation trends of the angular CCT at different spatial planes in a single LED sample. It also results in different fluctuation amplitudes of the angular CCT distribution at the same spatial plane for samples with different wire bonding angles. The gold wire bonding process thus has an important impact on the angular uniformity of CCT in LED packages. © 2011 Optical Society of America
Chang, Fei; Xie, Yunchao; Chen, Juan; Luo, Jieru; Li, Chenlu; Hu, Xuefeng; Xu, Bin
2015-02-01
Preparation of uniform BiOCl flower-like microspheres was facilely accomplished through a simple protocol involving regulation of the pH value in aqueous solution with sodium hydroxide in the presence of n-propanol. The as-prepared samples were characterized by a collection of techniques, such as X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), energy dispersive X-ray spectroscopy (EDX), UV-vis diffuse reflectance spectroscopy (UV-vis DRS), and nitrogen adsorption-desorption isotherms. Based on the SEM analyses, without n-propanol uniform microspheres formed alongside some fragments of BiOCl nanosheets. The addition of an appropriate amount of n-propanol yielded BiOCl samples containing only flower-like microspheres, which were further subjected to photocatalytic measurements against Rhodamine B in aqueous solution under visible light irradiation and exhibited the best catalytic performance among all samples tested. In addition, the photocatalytic process was confirmed to proceed through a photosensitization pathway, in which superoxide radicals (•O2-) played critical roles.
The hypergraph regularity method and its applications
Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.
2005-01-01
Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821
Duchoslav, Martin; Šafářová, Lenka; Krahulec, František
2010-01-01
Background and Aims Despite extensive study of polyploidy, its origin, and ecogeographical differences between polyploids and their diploid progenitors, few studies have addressed ploidy-level structure and patterns of ecogeographical differentiation at various spatial scales using detailed sampling procedures. The pattern of coexistence of polyploids in the geophyte Allium oleraceum at the landscape and locality scale and their ecology were studied. Methods Flow cytometry and root-tip squashes were used to identify the ploidy level of 4347 plants from 325 populations sampled from the Czech Republic using a stratified random sampling procedure. Ecological differentiation among ploidy levels was tested by comparing sets of environmental variables recorded at each locality. Key Results Across the entire sampling area, pentaploids (2n = 5x = 40) predominated, while hexaploids (2n = 6x = 48) and tetraploids (2n = 4x = 32) were less frequent. The distribution of tetra- and hexaploids was partially sympatric (in the eastern part) to parapatric (in the western part of the Czech Republic) whereas pentaploids were sympatric with other cytotypes. Plants of different ploidy levels were found to be ecologically differentiated and the ruderal character of cytotypes increased in the direction 4x → 5x → 6x with the largest realized niche differences between tetra- and hexaploids. Most populations contained only one ploidy level (77 %), 22 % had two (all possible combinations) and 1 % were composed of three ploidy levels. The majority of 4x + 5x and 5x + 6x mixed populations occurred in sympatry with uniform populations of the participating cytotypes in sites with ecologically heterogeneous or marginal environment, suggesting secondary contact between cytotypes. Some mixed 4x + 6x populations dominated by tetraploids being sympatric and intermixed with uniform 4x populations might represent primary zones of cytotype contact. Almost no mixed accessions were observed on the fine spatial scale in mixed populations. Conclusions The results provide evidence for adaptive differences among ploidy levels, which may contribute to their complex distribution pattern. The prevalence of asexual reproduction, limited dispersal and equilibrium-disrupting processes may support local coexistence of cytotypes. PMID:20363760
Industrial ion source technology
NASA Technical Reports Server (NTRS)
Kaufman, H. R.; Robinson, R. S.
1979-01-01
In reactive ion etching of Si, varying amounts of O2 were added to the CF4 background. The experimental results indicated an etch rate less than that for Ar up to an O2 partial pressure of about 0.00006 Torr. Above this O2 pressure, the etch rate with CF4 exceeded that with Ar alone. For comparison, the random arrival rate of O2 was approximately equal to the ion arrival rate at a partial pressure of about 0.00002 Torr. There were also ion source and ion pressure gauge maintenance problems as a result of the use of CF4. Large-scale (4 sq cm) texturing of Si was accomplished using both Cu and stainless steel seed. The most effective seeding method for this texturing was to surround the sample with large inclined planes. Designing, fabricating, and testing a 200 sq cm rectangular-beam ion source was emphasized. The design current density was 6 mA/sq cm with 500 eV argon ions, although power supply limitations permitted operation at only 2 mA/sq cm. The use of multiple rectangular-beam ion sources for continuous processing of areas wider than would be possible with a single source was also studied. In all cases investigated, the most uniform coverage was obtained with 0 to 2 cm beam overlap. The maximum departure from uniform processing at the optimum beam overlap was found to be +15%.
Uniform Atmospheric Retrievals of Ultracool Late-T and Early-Y dwarfs
NASA Astrophysics Data System (ADS)
Garland, Ryan; Irwin, Patrick
2017-10-01
A significant number of ultracool (<600 K) extrasolar objects have been discovered in the past decade thanks to wide-field surveys such as WISE. These objects present a perfect testbed for examining the evolution of atmospheric structure as we transition from typically hot extrasolar temperatures to the temperatures found within our Solar System. By examining these objects with a uniform retrieval method, we hope to elucidate trends and (dis)similarities in atmospheric parameters, such as chemical abundances, the temperature-pressure profile, and cloud structure, for a sample of seven ultracool brown dwarfs spanning hotter (~700 K) to colder (~450 K) objects. We perform atmospheric retrievals on two late-T and five early-Y dwarfs. We use the NEMESIS atmospheric retrieval code coupled to a nested sampling algorithm, along with a standard uniform model for all of our retrievals. The uniform model assumes the atmosphere is described by a gray radiative-convective temperature profile, (optionally) a gray cloud, and a number of relevant gases. We first verify our methods against a benchmark retrieval for Gliese 570D, with which we find consistent results. Furthermore, we present the retrieved gaseous composition, temperature structure, spectroscopic mass and radius, and cloud structure, together with the trends associated with decreasing temperature found in this small sample of objects.
A probabilistic approach to randomness in geometric configuration of scalable origami structures
NASA Astrophysics Data System (ADS)
Liu, Ke; Paulino, Glaucio; Gardoni, Paolo
2015-03-01
Origami, an ancient paper folding art, has inspired many solutions to modern engineering challenges. The demand for actual engineering applications motivates further investigation in this field. Although rooted in the historic art form, many applications of origami are based on newly designed origami patterns that match the specific requirements of an engineering problem. The application of origami to structural design problems ranges from the microstructure of materials to large-scale deployable shells. For instance, some origami-inspired designs have unique properties such as a negative Poisson's ratio and flat foldability. However, origami structures are typically constrained by strict mathematical geometric relationships, which in reality can easily be violated due to, for example, random imperfections introduced during manufacturing or non-uniform deformations under working conditions (e.g., due to non-uniform thermal effects). Therefore, the effects of uncertainties in origami-like structures need to be studied in further detail in order to provide a practical guide for scalable origami-inspired engineering designs. Through reliability and probabilistic analysis, we investigate the effect of randomness in the geometric configuration of origami structures on their mechanical properties. Dislocations of the vertices of an origami structure have different impacts on different mechanical properties, and different origami designs can have different sensitivities to imperfections. We thus aim to provide a preliminary understanding of the structural behavior of some common scalable origami structures subject to randomness in their geometric configurations, in order to help transition the technology toward practical applications of origami engineering.
Pathak, Neha; Dodds, Julie; Khan, Khalid
2014-01-01
Objective To determine the accuracy of testing for human papillomavirus (HPV) DNA in urine in detecting cervical HPV in sexually active women. Design Systematic review and meta-analysis. Data sources Searches of electronic databases from inception until December 2013, checks of reference lists, manual searches of recent issues of relevant journals, and contact with experts. Eligibility criteria Test accuracy studies in sexually active women that compared detection of urine HPV DNA with detection of cervical HPV DNA. Data extraction and synthesis Data relating to patient characteristics, study context, risk of bias, and test accuracy. 2×2 tables were constructed and synthesised by bivariate mixed effects meta-analysis. Results 16 articles reporting on 14 studies (1443 women) were eligible for meta-analysis. Most used commercial polymerase chain reaction methods on first void urine samples. Urine detection of any HPV had a pooled sensitivity of 87% (95% confidence interval 78% to 92%) and specificity of 94% (95% confidence interval 82% to 98%). Urine detection of high risk HPV had a pooled sensitivity of 77% (68% to 84%) and specificity of 88% (58% to 97%). Urine detection of HPV 16 and 18 had a pooled sensitivity of 73% (56% to 86%) and specificity of 98% (91% to 100%). Metaregression revealed an increase in sensitivity when urine samples were collected as first void compared with random or midstream (P=0.004). Limitations The major limitations of this review are the lack of a strictly uniform method for the detection of HPV in urine and the variation in accuracy between individual studies. Conclusions Testing urine for HPV seems to have good accuracy for the detection of cervical HPV, and testing first void urine samples is more accurate than random or midstream sampling. When cervical HPV detection is considered difficult in particular subgroups, urine testing should be regarded as an acceptable alternative. PMID:25232064
Are randomly grown graphs really random?
Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H
2001-10-01
We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph: older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
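The growth process above is simple enough to simulate directly. The sketch below (plain Python, no graph library; `grow_graph` and `largest_component` are illustrative names) adds one vertex per step and, with probability delta, joins two uniformly chosen vertices; above delta = 1/8 a giant component should emerge in large graphs.

```python
import random

def grow_graph(t, delta, seed=0):
    """Grow a graph for t steps: add a vertex, then with probability
    delta join two uniformly chosen vertices by an undirected edge."""
    rng = random.Random(seed)
    adj = []                              # adj[v] = set of neighbours of v
    for _ in range(t):
        adj.append(set())                 # new, initially isolated vertex
        if len(adj) >= 2 and rng.random() < delta:
            u, v = rng.sample(range(len(adj)), 2)
            adj[u].add(v)
            adj[v].add(u)
    return adj

def largest_component(adj):
    """Size of the largest connected component (iterative DFS)."""
    seen, best = set(), 0
    for s in range(len(adj)):
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

adj = grow_graph(t=20000, delta=0.25)     # delta > 1/8: giant component regime
print(largest_component(adj) / len(adj))  # fraction of vertices in it
```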
AN AUTOMATED SYSTEM FOR PRODUCING UNIFORM SURFACE DEPOSITS OF DRY PARTICLES
A laboratory system has been constructed that uniformly deposits dry particles onto any type of test surface. Devised as a quality assurance tool for the purpose of evaluating surface sampling methods for lead, it also may be used to generate test surfaces for any contaminant ...
Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman
2017-01-01
Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to a decrease of 0.33% and 0.47% power of MIMIC model for detecting uniform-DIF, respectively. The findings indicated that, by increasing the scale length, the number of response categories and magnitude DIF improved the power of MIMIC model, by 3.47%, 4.83%, and 20.35%, respectively; it also decreased Type I error of MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that power of MIMIC model was at an acceptable level when latent trait distributions were skewed. However, empirical Type I error rate was slightly greater than nominal significance level. Consequently, the MIMIC was recommended for detection of uniform-DIF when latent construct distribution is nonnormal and the focal group sample size is small.
Apparatus for synthesis of a solar spectrum
Sopori, Bhushan L.
1993-01-01
A xenon arc lamp and a tungsten filament lamp provide light beams that together contain all the wavelengths required to accurately simulate a solar spectrum. Suitable filter apparatus selectively directs visible and ultraviolet light from the xenon arc lamp into two legs of a trifurcated randomized fiber optic cable. Infrared light selectively filtered from the tungsten filament lamp is directed into the third leg of the fiber optic cable. The individual optic fibers from the three legs are brought together in a random fashion into a single output leg. The output beam emanating from the output leg of the trifurcated randomized fiber optic cable is extremely uniform and contains wavelengths from each of the individual filtered light beams. This uniform output beam passes through suitable collimation apparatus before striking the surface of the solar cell being tested. Adjustable aperture apparatus located between the lamps and the input legs of the trifurcated fiber optic cable can be selectively adjusted to limit the amount of light entering each leg, thereby providing a means of "fine tuning" or precisely adjusting the spectral content of the output beam. Finally, an adjustable aperture apparatus may also be placed in the output beam to adjust the intensity of the output beam without changing the spectral content and distribution of the output beam.
A neural algorithm for the non-uniform and adaptive sampling of biomedical data.
Mesin, Luca
2016-04-01
Body sensors are finding increasing applications in self-monitoring for health care and in the remote surveillance of sensitive people. The physiological data to be sampled can be non-stationary, with bursts of high amplitude and frequency content providing most of the information. Such data could be sampled efficiently with a non-uniform schedule that increases the sampling rate only during activity bursts. A real-time, adaptive algorithm is proposed to select the sampling rate, in order to reduce the number of measured samples while still recording the main information. The algorithm is based on a neural network which predicts the subsequent samples and their uncertainties, requiring a measurement only when the risk of the prediction is larger than a selectable threshold. Four examples of application to biomedical data are discussed: electromyogram, electrocardiogram, electroencephalogram, and body acceleration. Sampling rates are reduced below the Nyquist limit while still preserving an accurate representation of the data and of their power spectral densities (PSD). For example, when sampling at 60% of the Nyquist frequency, the percentage average rectified errors in estimating the signals are on the order of 10% and the PSD is fairly well represented up to the highest frequencies. The method outperforms both uniform sampling and compressive sensing applied to the same data. The discussed method allows sampling below the Nyquist limit while preserving the information content of non-stationary biomedical signals. It could find applications in body sensor networks to lower the number of wireless communications (saving sensor power) and to reduce memory occupation. Copyright © 2016 Elsevier Ltd. All rights reserved.
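As a rough illustration of the threshold-triggered scheme described above: the paper uses a neural predictor of the next sample and its uncertainty, whereas in the sketch below a two-point linear extrapolator stands in for the network, and an offline comparison against the true signal stands in for the predicted risk. It is an emulation of the idea, not the published algorithm.

```python
import numpy as np

def adaptive_sample(signal, threshold):
    """Return the indices at which the signal would actually be measured.
    A linear extrapolator stands in for the paper's neural predictor;
    comparing against the true sample emulates the risk test."""
    measured = [0, 1]                     # bootstrap with two initial samples
    for k in range(2, len(signal)):
        i, j = measured[-2], measured[-1]
        pred = signal[j] + (signal[j] - signal[i]) / (j - i) * (k - j)
        if abs(pred - signal[k]) > threshold:   # "risk" too high: measure
            measured.append(k)
    return measured

t = np.linspace(0.0, 1.0, 2000)
burst = (t > 0.5) * np.sin(2 * np.pi * 60 * t)   # high-frequency activity burst
x = np.sin(2 * np.pi * 5 * t) + burst
kept = adaptive_sample(x, threshold=0.05)
print(f"kept {len(kept)} of {len(x)} samples")
```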
Carbon nanotube bundles with tensile strength over 80 GPa.
Bai, Yunxiang; Zhang, Rufan; Ye, Xuan; Zhu, Zhenxing; Xie, Huanhuan; Shen, Boyuan; Cai, Dali; Liu, Bofei; Zhang, Chenxi; Jia, Zhao; Zhang, Shenli; Li, Xide; Wei, Fei
2018-05-14
Carbon nanotubes (CNTs) are one of the strongest known materials. When assembled into fibres, however, their strength becomes impaired by defects, impurities, random orientations and discontinuous lengths. Fabricating CNT fibres with strength reaching that of a single CNT has been an enduring challenge. Here, we demonstrate the fabrication of CNT bundles (CNTBs) that are centimetres long with tensile strength over 80 GPa using ultralong defect-free CNTs. The tensile strength of CNTBs is controlled by the Daniels effect owing to the non-uniformity of the initial strains in the components. We propose a synchronous tightening and relaxing strategy to release these non-uniform initial strains. The fabricated CNTBs, consisting of a large number of components with parallel alignment, defect-free structures, continuous lengths and uniform initial strains, exhibit a tensile strength of 80 GPa (corresponding to an engineering tensile strength of 43 GPa), which is far higher than that of any other strong fibre.
Howland, Shanshan W; Poh, Chek-Meng; Rénia, Laurent
2011-09-01
Directional cloning of complementary DNA (cDNA) primed by oligo(dT) is commonly achieved by appending a restriction site to the primer, whereas the second strand is synthesized through the combined action of RNase H and Escherichia coli DNA polymerase I (PolI). Although random primers provide more uniform and complete coverage, directional cloning with the same strategy is highly inefficient. We report that phosphorothioate linkages protect the tail sequence appended to random primers from the 5'→3' exonuclease activity of PolI. We present a simple strategy for constructing a random-primed cDNA library using the efficient, size-independent, and seamless In-Fusion cloning method instead of restriction enzymes. Copyright © 2011 Elsevier Inc. All rights reserved.
Impact of deposition-rate fluctuations on thin-film thickness and uniformity
Oliver, Joli B.
2016-11-04
Variations in deposition rate are superimposed on a thin-film–deposition model with planetary rotation to determine the impact on film thickness. Variations in magnitude and frequency of the fluctuations relative to the speed of planetary revolution lead to thickness errors and uniformity variations up to 3%. Sufficiently rapid oscillations in the deposition rate have a negligible impact, while slow oscillations are found to be problematic, leading to changes in the nominal film thickness. Finally, superimposing noise as random fluctuations in the deposition rate has a negligible impact, confirming the importance of any underlying harmonic oscillations in deposition rate or source operation.
Semiconductor laser insert with uniform illumination for use in photodynamic therapy
NASA Astrophysics Data System (ADS)
Charamisinau, Ivan; Happawana, Gemunu; Evans, Gary; Rosen, Arye; Hsi, Richard A.; Bour, David
2005-08-01
A low-cost semiconductor red laser light delivery system for esophagus cancer treatment is presented. The system is small enough for insertion into the patient's body. Scattering elements with nanoscale particles are used to achieve uniform illumination. The scattering element optimization calculations, with Mie theory, provide scattering and absorption efficiency factors for scattering particles composed of various materials. The possibility of using randomly deformed spheres and composite particles instead of perfect spheres is analyzed using an extension to Mie theory. The measured radiation pattern from a prototype light delivery system fabricated using these design criteria shows reasonable agreement with the theoretically predicted pattern.
Severity of Organized Item Theft in Computerized Adaptive Testing: A Simulation Study
ERIC Educational Resources Information Center
Yi, Qing; Zhang, Jinming; Chang, Hua-Hua
2008-01-01
Criteria had been proposed for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria resulted from theoretical derivations that assumed uniformly randomized item selection. This study investigated potential damage caused by organized item theft in computerized adaptive…
Measuring Symmetry, Asymmetry and Randomness in Neural Network Connectivity
Esposito, Umberto; Giugliano, Michele; van Rossum, Mark; Vasilaki, Eleni
2014-01-01
Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists that investigate symmetry of network connectivity. PMID:25006663
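For illustration only, a simple stand-in for such a thresholding-free symmetry measure is sketched below: the correlation between w_ij and w_ji over off-diagonal pairs, compared against a permutation null to obtain a z-score-style confidence, echoing the paper's "how likely is this configuration by chance" idea. This is not the measure defined in the paper.

```python
import numpy as np

def symmetry_score(W):
    """Correlation between w_ij and w_ji over off-diagonal pairs."""
    iu = np.triu_indices_from(W, k=1)
    return np.corrcoef(W[iu], W.T[iu])[0, 1]

def symmetry_z(W, n_null=500, seed=0):
    """z-score of the observed score against a permutation null,
    i.e. how unlikely the configuration is under pure chance."""
    rng = np.random.default_rng(seed)
    off = ~np.eye(len(W), dtype=bool)
    null = []
    for _ in range(n_null):
        P = W.copy()
        vals = P[off]
        rng.shuffle(vals)                 # destroy any pairwise structure
        P[off] = vals
        null.append(symmetry_score(P))
    return (symmetry_score(W) - np.mean(null)) / np.std(null)

rng = np.random.default_rng(1)
W = rng.uniform(size=(100, 100))          # random weights: z near 0
print(symmetry_z(W), symmetry_z((W + W.T) / 2))   # symmetrized: large z
```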
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2015-10-01
Elemental carbon (EC) is an important constituent of atmospheric particulate matter because it absorbs solar radiation influencing climate and visibility and it adversely affects human health. The EC measured by thermal methods such as thermal-optical reflectance (TOR) is operationally defined as the carbon that volatilizes from quartz filter samples at elevated temperatures in the presence of oxygen. Here, methods are presented to accurately predict TOR EC using Fourier transform infrared (FT-IR) absorbance spectra from atmospheric particulate matter collected on polytetrafluoroethylene (PTFE or Teflon) filters. This method is similar to the procedure developed for OC in prior work (Dillner and Takahama, 2015). Transmittance FT-IR analysis is rapid, inexpensive and nondestructive to the PTFE filter samples which are routinely collected for mass and elemental analysis in monitoring networks. FT-IR absorbance spectra are obtained from 794 filter samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to collocated TOR EC measurements. The FT-IR spectra are divided into calibration and test sets. Two calibrations are developed: one developed from uniform distribution of samples across the EC mass range (Uniform EC) and one developed from a uniform distribution of Low EC mass samples (EC < 2.4 μg, Low Uniform EC). A hybrid approach which applies the Low EC calibration to Low EC samples and the Uniform EC calibration to all other samples is used to produce predictions for Low EC samples that have mean error on par with parallel TOR EC samples in the same mass range and an estimate of the minimum detection limit (MDL) that is on par with TOR EC MDL. For all samples, this hybrid approach leads to precise and accurate TOR EC predictions by FT-IR as indicated by high coefficient of determination (R2; 0.96), no bias (0.00 μg m-3, a concentration value based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.03 μg m-3) and reasonable normalized error (21 %). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2015-06-01
Elemental carbon (EC) is an important constituent of atmospheric particulate matter because it absorbs solar radiation influencing climate and visibility and it adversely affects human health. The EC measured by thermal methods such as Thermal-Optical Reflectance (TOR) is operationally defined as the carbon that volatilizes from quartz filter samples at elevated temperatures in the presence of oxygen. Here, methods are presented to accurately predict TOR EC using Fourier Transform Infrared (FT-IR) absorbance spectra from atmospheric particulate matter collected on polytetrafluoroethylene (PTFE or Teflon) filters. This method is similar to the procedure tested and developed for OC in prior work (Dillner and Takahama, 2015). Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filter samples which are routinely collected for mass and elemental analysis in monitoring networks. FT-IR absorbance spectra are obtained from 794 filter samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to collocated TOR EC measurements. The FT-IR spectra are divided into calibration and test sets. Two calibrations are developed: one from a uniform distribution of samples across the EC mass range (Uniform EC) and one from a uniform distribution of low EC mass samples (EC < 2.4 μg, Low Uniform EC). A hybrid approach which applies the low EC calibration to low EC samples and the Uniform EC calibration to all other samples is used to produce predictions for low EC samples that have mean error on par with parallel TOR EC samples in the same mass range and an estimate of the minimum detection limit (MDL) that is on par with the TOR EC MDL. For all samples, this hybrid approach leads to precise and accurate TOR EC predictions by FT-IR as indicated by a high coefficient of determination (R2; 0.96), no bias (0.00 μg m-3, a concentration value based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.03 μg m-3) and reasonable normalized error (21 %). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter (OM) estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
Uniform neural tissue models produced on synthetic hydrogels using standard culture techniques.
Barry, Christopher; Schmitz, Matthew T; Propson, Nicholas E; Hou, Zhonggang; Zhang, Jue; Nguyen, Bao K; Bolin, Jennifer M; Jiang, Peng; McIntosh, Brian E; Probasco, Mitchell D; Swanson, Scott; Stewart, Ron; Thomson, James A; Schwartz, Michael P; Murphy, William L
2017-11-01
The aim of the present study was to test sample reproducibility for model neural tissues formed on synthetic hydrogels. Human embryonic stem (ES) cell-derived precursor cells were cultured on synthetic poly(ethylene glycol) (PEG) hydrogels to promote differentiation and self-organization into model neural tissue constructs. Neural progenitor, vascular, and microglial precursor cells were combined on PEG hydrogels to mimic developmental timing, which produced multicomponent neural constructs with 3D neuronal and glial organization, organized vascular networks, and microglia with ramified morphologies. Spearman's rank correlation analysis of global gene expression profiles and a comparison of coefficient of variation for expressed genes demonstrated that replicate neural constructs were highly uniform to at least day 21 for samples from independent experiments. We also demonstrate that model neural tissues formed on PEG hydrogels using a simplified neural differentiation protocol correlated more strongly to in vivo brain development than samples cultured on tissue culture polystyrene surfaces alone. These results provide a proof-of-concept demonstration that 3D cellular models that mimic aspects of human brain development can be produced from human pluripotent stem cells with high sample uniformity between experiments by using standard culture techniques, cryopreserved cell stocks, and a synthetic extracellular matrix. Impact statement Pluripotent stem (PS) cells have been characterized by an inherent ability to self-organize into 3D "organoids" resembling stomach, intestine, liver, kidney, and brain tissues, offering a potentially powerful tool for modeling human development and disease. However, organoid formation must be quantitatively reproducible for applications such as drug and toxicity screening. Here, we report a strategy to produce uniform neural tissue constructs with reproducible global gene expression profiles for replicate samples from multiple experiments.
The random energy model in a magnetic field and joint source channel coding
NASA Astrophysics Data System (ADS)
Merhav, Neri
2008-09-01
We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.
Exact Markov chains versus diffusion theory for haploid random mating.
Tyvand, Peder A; Thorvaldsen, Steinar
2010-05-01
Exact discrete Markov chains are applied to the Wright-Fisher model and the Moran model of haploid random mating. Selection and mutations are neglected. At each discrete value of time t there is a given number n of diploid monoecious organisms. The evolution of the population distribution is given in diffusion variables, to compare the two models of random mating with their common diffusion limit. Only the Moran model converges uniformly to the diffusion limit near the boundary. The Wright-Fisher model allows the population size to change with the generations. Diffusion theory tends to under-predict the loss of genetic information when a population enters a bottleneck. 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Markman, Adam; Carnicer, Artur; Javidi, Bahram
2017-05-01
We overview our recent work [1] on utilizing three-dimensional (3D) optical phase codes for object authentication using the random forest classifier. A simple 3D optical phase code (OPC) is generated by combining multiple diffusers and glass slides. This tag is then placed on a quick-response (QR) code, which is a barcode capable of storing information and can be scanned under non-uniform illumination conditions, rotation, and slight degradation. A coherent light source illuminates the OPC and the transmitted light is captured by a CCD to record the unique signature. Feature extraction on the signature is performed and inputted into a pre-trained random-forest classifier for authentication.
Light sheet theta microscopy for rapid high-resolution imaging of large biological samples.
Migliori, Bianca; Datta, Malika S; Dupre, Christophe; Apak, Mehmet C; Asano, Shoh; Gao, Ruixuan; Boyden, Edward S; Hermanson, Ola; Yuste, Rafael; Tomer, Raju
2018-05-29
Advances in tissue clearing and molecular labeling methods are enabling unprecedented optical access to large intact biological systems. These developments fuel the need for high-speed microscopy approaches to image large samples quantitatively and at high resolution. While light sheet microscopy (LSM), with its high planar imaging speed and low photo-bleaching, can be effective, scaling up to larger imaging volumes has been hindered by the use of orthogonal light sheet illumination. To address this fundamental limitation, we have developed light sheet theta microscopy (LSTM), which uniformly illuminates samples from the same side as the detection objective, thereby eliminating limits on lateral dimensions without sacrificing the imaging resolution, depth, and speed. We present a detailed characterization of LSTM, and demonstrate its complementary advantages over LSM for rapid high-resolution quantitative imaging of large intact samples with high uniform quality. The reported LSTM approach is a significant step for the rapid high-resolution quantitative mapping of the structure and function of very large biological systems, such as a clarified thick coronal slab of human brain and uniformly expanded tissues, and also for rapid volumetric calcium imaging of highly motile animals, such as Hydra, undergoing non-isomorphic body shape changes.
Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.
Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian
2014-01-01
In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
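A hedged sketch of the greedy incremental idea follows: from a random candidate pool on the unit sphere, repeatedly pick the direction that maximizes the minimal (antipodally symmetric) angle to the points chosen so far. This mirrors the fast greedy solution mentioned in the abstract, not the MILP formulation; all names are illustrative.

```python
import numpy as np

def greedy_sphere_points(n_points, n_candidates=5000, seed=0):
    """Greedy spherical code: repeatedly pick, from a random candidate
    pool, the direction maximizing the minimal angle to the points chosen
    so far. Angles are antipodally symmetric (diffusion gradients are
    axial), hence the absolute value of the dot product."""
    rng = np.random.default_rng(seed)
    cand = rng.normal(size=(n_candidates, 3))
    cand /= np.linalg.norm(cand, axis=1, keepdims=True)
    chosen = [cand[0]]
    for _ in range(n_points - 1):
        cos = np.abs(cand @ np.array(chosen).T)
        min_angle = np.arccos(np.clip(cos, 0.0, 1.0)).min(axis=1)
        chosen.append(cand[np.argmax(min_angle)])
    return np.array(chosen)

pts = greedy_sphere_points(30)
cos = np.abs(pts @ pts.T) - np.eye(len(pts))      # zero out self-pairs
print("min separation (deg):", np.degrees(np.arccos(np.clip(cos.max(), 0, 1))))
```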
Large-area synthesis of high-quality and uniform monolayer WS2 on reusable Au foils
Gao, Yang; Liu, Zhibo; Sun, Dong-Ming; Huang, Le; Ma, Lai-Peng; Yin, Li-Chang; Ma, Teng; Zhang, Zhiyong; Ma, Xiu-Liang; Peng, Lian-Mao; Cheng, Hui-Ming; Ren, Wencai
2015-01-01
Large-area monolayer WS2 is a desirable material for applications in next-generation electronics and optoelectronics. However, the chemical vapour deposition (CVD) with rigid and inert substrates for large-area sample growth suffers from a non-uniform number of layers, small domain size and many defects, and is not compatible with the fabrication process of flexible devices. Here we report the self-limited catalytic surface growth of uniform monolayer WS2 single crystals of millimetre size and large-area films by ambient-pressure CVD on Au. The weak interaction between the WS2 and Au enables the intact transfer of the monolayers to arbitrary substrates using the electrochemical bubbling method without sacrificing Au. The WS2 shows high crystal quality and optical and electrical properties comparable or superior to mechanically exfoliated samples. We also demonstrate the roll-to-roll/bubbling production of large-area flexible films of uniform monolayer, double-layer WS2 and WS2/graphene heterostructures, and batch fabrication of large-area flexible monolayer WS2 film transistor arrays. PMID:26450174
PERIODOGRAMS FOR MULTIBAND ASTRONOMICAL TIME SERIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
VanderPlas, Jacob T.; Ivezic, Željko
This paper introduces the multiband periodogram, a general extension of the well-known Lomb–Scargle approach for detecting periodic signals in time-domain data. In addition to advantages of the Lomb–Scargle method such as treatment of non-uniform sampling and heteroscedastic errors, the multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES, and LSST). The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands. The key aspect is the use of Tikhonov regularization which drives most of the variability into the so-called base model common to all bands, while fits for individual bands describe residuals relative to the base model and typically require lower-order Fourier series. This decrease in the effective model complexity is the main reason for improved performance. After a pedagogical development of the formalism of least-squares spectral analysis, which motivates the essential features of the multiband model, we use simulated light curves and randomly subsampled SDSS Stripe 82 data to demonstrate the superiority of this method compared to other methods from the literature and find that this method will be able to efficiently determine the correct period in the majority of LSST's bright RR Lyrae stars with as little as six months of LSST data, a vast improvement over the years of data reported to be required by previous studies. A Python implementation of this method, along with code to fully reproduce the results reported here, is available on GitHub.
Hervella, Montserrat; Izagirre, Neskuts; Alonso, Santos; Fregel, Rosa; Alonso, Antonio; Cabrera, Vicente M.; de la Rúa, Concepción
2012-01-01
Background/Principal Findings The phenomenon of Neolithisation refers to the transition of prehistoric populations from a hunter-gatherer to an agro-pastoralist lifestyle. Traditionally, the spread of an agro-pastoralist economy into Europe has been framed within a dichotomy based either on an acculturation phenomenon or on a demic diffusion. However, the nature and speed of this transition is a matter of continuing scientific debate in archaeology, anthropology, and human population genetics. In the present study, we have analyzed the mitochondrial DNA diversity in hunter-gatherers and first farmers from Northern Spain, in relation to the debate surrounding the phenomenon of Neolithisation in Europe. Methodology/Significance Analysis of mitochondrial DNA was carried out on 54 individuals from the Upper Paleolithic and Early Neolithic, recovered from nine archaeological sites in Northern Spain (Basque Country, Navarre and Cantabria). In addition, to take all necessary precautions to avoid contamination, different authentication criteria were applied in this study, including DNA quantification, cloning, duplication (51% of the samples) and replication of the results (43% of the samples) by two independent laboratories. Statistical and multivariate analyses of the mitochondrial variability suggest that the genetic influence of Neolithisation did not spread uniformly throughout Europe, producing heterogeneous genetic consequences in different geographical regions and rejecting the traditional models proposed to explain Neolithisation in Europe. Conclusion The differences detected in the mitochondrial DNA lineages of the Neolithic groups studied so far (including those of this study) suggest a different genetic impact of the Neolithic in Central Europe, Mediterranean Europe and the Cantabrian fringe. The genetic data obtained in this study support a random dispersion model for Neolithic farmers. This random dispersion had a different impact in different geographic regions, and thus contradicts the more simplistic total acculturation and replacement models proposed so far to explain Neolithisation. PMID:22563371
Model-based VQ for image data archival, retrieval and distribution
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1995-01-01
An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is the Laplacian distribution with mean lambda, computed from a sample of the input image. Laplacian-distributed random numbers with mean lambda are generated with a uniform random number generator. These random numbers are grouped into vectors, which are then conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in codebook generation is the mean lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
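A rough sketch of this codebook construction follows, under stated assumptions: 4x4 blocks, an inverse-CDF Laplacian generator, and a placeholder low-pass HVS weight matrix (the paper's actual perceptual weights are not reproduced here).

```python
import numpy as np
from scipy.fft import dctn, idctn

def make_codebook(lmbda, n_vectors=256, block=4, seed=0):
    """Model-based codebook: Laplacian residual vectors, perceptually
    shaped by weighting their DCT coefficients (placeholder HVS weights)."""
    rng = np.random.default_rng(seed)
    # Laplacian variates from uniform random numbers via the inverse CDF.
    u = rng.uniform(-0.5, 0.5, size=(n_vectors, block, block))
    vecs = -lmbda * np.sign(u) * np.log1p(-2.0 * np.abs(u))
    # Placeholder HVS weights favouring low spatial frequencies.
    i, j = np.meshgrid(range(block), range(block), indexing="ij")
    hvs = 1.0 / (1.0 + i + j)
    shaped = [idctn(dctn(v, norm="ortho") * hvs, norm="ortho") for v in vecs]
    return np.array(shaped)

codebook = make_codebook(lmbda=8.0)       # lambda estimated from the image
print(codebook.shape)                     # (256, 4, 4)
```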
Why sampling scheme matters: the effect of sampling scheme on landscape genetic results
Michael K. Schwartz; Kevin S. McKelvey
2008-01-01
There has been a recent trend in genetic studies of wild populations where researchers have changed their sampling schemes from sampling pre-defined populations to sampling individuals uniformly across landscapes. This reflects the fact that many species under study are continuously distributed rather than clumped into obvious "populations". Once individual...
NASA Astrophysics Data System (ADS)
Qiang, Wei
2011-12-01
We describe a sampling scheme for two-dimensional (2D) solid state NMR experiments which can be readily applied to sensitivity-limited samples. The sampling scheme utilizes a continuous, non-uniform sampling profile for the indirect dimension, i.e. the acquisition number decreases as a function of the evolution time (t1) in the indirect dimension. For a beta amyloid (Aβ) fibril sample, we observed an overall 40-50% signal enhancement as measured by cross peak volume, while the cross peak linewidths remained comparable to the linewidths obtained by regular sampling and processing strategies. Both linear and Gaussian decay functions for the acquisition numbers result in a similar percentage increase in signal. In addition, we demonstrated that this sampling approach can be applied with different dipolar recoupling approaches such as radiofrequency assisted diffusion (RAD) and finite-pulse radio-frequency-driven recoupling (fpRFDR). This sampling scheme is especially suitable for sensitivity-limited samples which require long signal averaging for each t1 point, for instance biological membrane proteins where only a small fraction of the sample is isotopically labeled.
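The decaying acquisition schedule is easy to sketch. Below, the number of scans per t1 increment follows a linear or Gaussian decay under a fixed total scan budget; the decay constants and the minimum scan floor are illustrative, not the paper's values.

```python
import numpy as np

def scan_schedule(n_t1, total_scans, shape="gaussian", floor=4):
    """Scans per t1 increment under a decaying profile and an
    (approximately enforced) total scan budget."""
    t = np.arange(n_t1)
    if shape == "linear":
        w = 1.0 - t / (n_t1 - 1)
    else:                                 # Gaussian decay
        w = np.exp(-((t / (0.5 * n_t1)) ** 2))
    scans = np.maximum(floor, np.round(total_scans * w / w.sum()))
    return scans.astype(int)

ns = scan_schedule(n_t1=64, total_scans=4096)
print(ns[:8], ns[-8:], ns.sum())          # many scans early, few late
```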
NASA Astrophysics Data System (ADS)
Cottle, J.’Neil; Covey, Kevin R.; Suárez, Genaro; Román-Zúñiga, Carlos; Schlafly, Edward; Downes, Juan Jose; Ybarra, Jason E.; Hernandez, Jesus; Stassun, Keivan; Stringfellow, Guy S.; Getman, Konstantin; Feigelson, Eric; Borissova, Jura; Kim, J. Serena; Roman-Lopes, A.; Da Rio, Nicola; De Lee, Nathan; Frinchaboy, Peter M.; Kounkel, Marina; Majewski, Steven R.; Mennickent, Ronald E.; Nidever, David L.; Nitschelm, Christian; Pan, Kaike; Shetrone, Matthew; Zasowski, Gail; Chambers, Ken; Magnier, Eugene; Valenti, Jeff
2018-06-01
The Orion Star-forming Complex (OSFC) is a central target for the APOGEE-2 Young Cluster Survey. Existing membership catalogs span limited portions of the OSFC, reflecting the difficulty of selecting targets homogeneously across this extended, highly structured region. We have used data from wide-field photometric surveys to produce a less biased parent sample of young stellar objects (YSOs) with infrared (IR) excesses indicative of warm circumstellar material or photometric variability at optical wavelengths across the full 420 square degree extent of the OSFC. When restricted to YSO candidates with H < 12.4, to ensure S/N ∼ 100 for a six-visit source, this uniformly selected sample includes 1307 IR excess sources selected using criteria vetted by Koenig & Leisawitz (2014) and 990 optical variables identified in the Pan-STARRS1 3π survey: 319 sources exhibit both optical variability and evidence of circumstellar disks through IR excess. Objects from this uniformly selected sample received the highest priority for targeting, but required fewer than half of the fibers on each APOGEE-2 plate. We filled the remaining fibers with previously confirmed and new color–magnitude selected candidate OSFC members. Radial velocity measurements from APOGEE-1 and new APOGEE-2 observations taken in the survey's first year indicate that ∼90% of the uniformly selected targets have radial velocities consistent with Orion membership. The APOGEE-2 Orion survey will include >1100 bona fide YSOs whose uniform selection function will provide a robust sample for comparative analyses of the stellar populations and properties across all sub-regions of Orion.
Uniform Atmospheric Retrievals of Ultracool Late-T and Early-Y dwarfs
NASA Astrophysics Data System (ADS)
Garland, Ryan; Irwin, Patrick
2018-01-01
A significant number of ultracool (<600K) extrasolar objects have been unearthed in the past decade thanks to wide-field surveys such as WISE. These objects present a perfect testbed for examining the evolution of atmospheric structure as we transition from typically hot extrasolar temperatures to the temperatures found within our Solar System.By examining these types of objects with a uniform retrieval method, we hope to elucidate any trends and (dis)similarities found in atmospheric parameters, such as chemical abundances, temperature-pressure profile, and cloud structure, for a sample of 7 ultracool brown dwarfs as we transition from hotter (~700K) to colder objects (~450K).We perform atmospheric retrievals on two late-T and five early-Y dwarfs. We use the NEMESIS atmospheric retrieval code coupled to a Nested Sampling algorithm, along with a standard uniform model for all of our retrievals. The uniform model assumes the atmosphere is described by a gray radiative-convective temperature profile, (optionally) a self-consistent Mie scattering cloud, and a number of relevant gases. We first verify our methods by comparing it to a benchmark retrieval for Gliese 570D, which is found to be consistent. Furthermore, we present the retrieved gaseous composition, temperature structure, spectroscopic mass and radius, cloud structure and the trends associated with decreasing temperature found in this small sample of objects.
Nonuniform sampling theorems for random signals in the linear canonical transform domain
NASA Astrophysics Data System (ADS)
Shuiqing, Xu; Congmei, Jiang; Yi, Chai; Youqiang, Hu; Lei, Huang
2018-06-01
Nonuniform sampling can be encountered in various practical processes because of random events or a poor timebase. The analysis and applications of nonuniform sampling for deterministic signals related to the linear canonical transform (LCT) have been well researched, but up to now no papers have been published on nonuniform sampling theorems for random signals related to the LCT. The aim of this article is to explore the nonuniform sampling and reconstruction of random signals associated with the LCT. First, some special nonuniform sampling models are briefly introduced. Second, based on these models, reconstruction theorems for random signals from various nonuniform samples associated with the LCT are derived. Finally, simulation results are presented to verify the accuracy of the sampling theorems. In addition, potential practical applications of nonuniform sampling for random signals are also discussed.
Improved sensitivity via layered-double-hydroxide-uniformity-dependent chemiluminescence.
Li, Zenghe; Wang, Dan; Yuan, Zhiqin; Lu, Chao
2016-12-01
In the last two decades nanoparticles have been widely applied to enhance chemiluminescence (CL). The morphology of nanoparticles has an important influence on nanoparticle-amplified CL. However, studies of nanoparticle-amplified CL focus mainly on size and shape effects, and no attempt has been made to explore the influence of uniformity in nanoparticle-amplified CL processes. In this study we have investigated nanoparticle uniformity in the luminol-H2O2 CL system using layered double hydroxides (LDHs) as a model material. The results demonstrated that the uniformity of LDHs played a key role in CL amplification. A possible mechanism is that LDHs with high uniformity possess abundant catalytic active sites, which results in high CL intensity. Meanwhile, the sensitivity for H2O2 detection was increased by one order of magnitude (1.0 nM). Moreover, the uniform-LDH-amplified luminol CL could be applied to the selective detection of glucose in human plasma samples. Furthermore, such a uniformity-dependent CL enhancement effect could be adapted to other redox CL systems, for example the peroxynitrous acid (ONOOH) CL system.
Concurrent tumor segmentation and registration with uncertainty-based sparse non-uniform graphs.
Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos
2014-05-01
In this paper, we present a graph-based framework for concurrent brain tumor segmentation and atlas-to-diseased-patient registration. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as substantial memory requirements, content-driven sampling of the discrete displacement set and the sparse grid is considered, based on the local segmentation and registration uncertainties recovered from the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced model complexity. Copyright © 2014 Elsevier B.V. All rights reserved.
Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event
Strydom, Gerhard
2013-01-01
The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
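For reference, the two sampling designs compared in the study can be sketched as follows; the parameter names, ranges, and uniform distributions below are placeholders rather than the benchmark's actual inputs.

```python
import numpy as np

def srs(n, bounds, rng):
    """Simple random sampling of uniform inputs within [lo, hi] bounds."""
    lo, hi = bounds.T
    return lo + (hi - lo) * rng.uniform(size=(n, len(bounds)))

def lhs(n, bounds, rng):
    """Latin hypercube: one sample per equal-probability stratum and
    dimension, with strata paired by independent random permutations."""
    d = len(bounds)
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    u = (strata + rng.uniform(size=(n, d))) / n
    lo, hi = bounds.T
    return lo + (hi - lo) * u

rng = np.random.default_rng(42)
bounds = np.array([[0.9, 1.1],            # e.g. a decay-heat multiplier
                   [0.8, 1.2]])           # e.g. a conductivity multiplier
print(srs(5, bounds, rng))
print(lhs(5, bounds, rng))
```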
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashchuk, V. V.; Fischer, P. J.; Chan, E. R.
We present a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) one-dimensional sequences and two-dimensional arrays as an effective method for spectral characterization in the spatial frequency domain of a broad variety of metrology instrumentation, including interferometric microscopes, scatterometers, phase shifting Fizeau interferometers, scanning and transmission electron microscopes, and at this time, x-ray microscopes. The inherent power spectral density of BPR gratings and arrays, which has a deterministic white-noise-like character, allows a direct determination of the MTF with a uniform sensitivity over the entire spatial frequency range and field of view of an instrument. We demonstrate the MTF calibration and resolution characterization over the full field of a transmission soft x-ray microscope using a BPR multilayer (ML) test sample with 2.8 nm fundamental layer thickness. We show that beyond providing a direct measurement of the microscope's MTF, tests with the BPRML sample can be used to fine tune the instrument's focal distance. Finally, our results confirm the universality of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.
NASA Astrophysics Data System (ADS)
Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.
2016-12-01
Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically-based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapo-transpiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to use a variety of parameter optimization and uncertainty methods or easily define their own, such as Monte Carlo random sampling, uniform sampling, or even optimization methods such as the downhill simplex method or its commonly used, more robust counterpart, shuffled complex evolution.
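As one example of the kind of modular piece such a framework might expose, the sketch below calibrates a toy two-parameter model with the downhill simplex method (scipy's Nelder-Mead); the "model" and observations are hypothetical stand-ins, not PRMS itself.

```python
import numpy as np
from scipy.optimize import minimize

observed = np.array([1.0, 1.8, 2.9, 4.2])          # toy "streamflow" record

def model(params, t=np.arange(4)):
    a, b = params
    return a * t + b                               # stand-in for a model run

def objective(params):
    return np.sum((model(params) - observed) ** 2) # sum-of-squares misfit

res = minimize(objective, x0=[0.5, 0.5], method="Nelder-Mead")
print(res.x, res.fun)
```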
NASA Astrophysics Data System (ADS)
Angrisano, Antonio; Maratea, Antonio; Gaglione, Salvatore
2018-01-01
In the absence of obstacles, a GPS device is generally able to provide continuous and accurate estimates of position, while in urban scenarios buildings can generate multipath and echo-only phenomena that severely affect the continuity and the accuracy of the provided estimates. Receiver autonomous integrity monitoring (RAIM) techniques are able to reduce the negative consequences of large blunders in urban scenarios, but require both good redundancy and low contamination to be effective. In this paper a resampling strategy based on the bootstrap is proposed as an alternative to RAIM, in order to estimate position accurately in the case of low redundancy and multiple blunders: starting from the pseudorange measurement model, at each epoch the available measurements are bootstrapped, that is, randomly sampled with replacement, and the generated a posteriori empirical distribution is exploited to derive the final position. Compared to the standard bootstrap, the sampling probabilities here are not uniform but vary according to an indicator of measurement quality. The proposed method has been compared with two different RAIM techniques on a data set collected in critical conditions, resulting in a clear improvement in all considered figures of merit.
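A minimal sketch of this quality-weighted bootstrap follows, assuming a placeholder quality indicator and a stand-in position solver (here a plain mean over one-dimensional toy measurements):

```python
import numpy as np

def weighted_bootstrap_position(measurements, quality, solve_position,
                                n_boot=200, seed=0):
    """Bootstrap with non-uniform resampling probabilities proportional
    to a measurement-quality indicator; the final position is taken from
    the empirical a posteriori distribution of the resampled fixes."""
    rng = np.random.default_rng(seed)
    p = quality / quality.sum()
    n = len(measurements)
    fixes = np.array([solve_position(measurements[rng.choice(n, n, p=p)])
                      for _ in range(n_boot)])
    return fixes.mean(axis=0), fixes.std(axis=0)

# 1-D toy: the last measurement is a blunder with a low quality weight.
meas = np.array([10.1, 9.9, 10.0, 10.2, 25.0])
qual = np.array([1.0, 1.0, 1.0, 1.0, 0.05])
est, sd = weighted_bootstrap_position(meas, qual, lambda m: m.mean())
print(est, sd)
```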
Synchronization in oscillator networks with delayed coupling: a stability criterion.
Earl, Matthew G; Strogatz, Steven H
2003-03-01
We derive a stability criterion for the synchronous state in networks of identical phase oscillators with delayed coupling. The criterion applies to any network (whether regular or random, low dimensional or high dimensional, directed or undirected) in which each oscillator receives delayed signals from k others, where k is uniform for all oscillators.
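The setting, though not the criterion itself, is easy to simulate. The sketch below integrates identical phase oscillators with delayed sinusoidal coupling, each receiving input from k randomly chosen others, using forward Euler with a delay buffer; all parameter values are illustrative.

```python
import numpy as np

def delayed_kuramoto(n=50, k=4, K=1.0, omega=1.0, tau=0.5,
                     dt=0.01, T=100.0, seed=0):
    """Identical oscillators; each receives delayed signals from k others.
    Forward Euler with a phase-history buffer; returns the order
    parameter r (near 1 means the synchronous state persisted)."""
    rng = np.random.default_rng(seed)
    senders = np.array([rng.choice(np.delete(np.arange(n), i), k,
                                   replace=False) for i in range(n)])
    d = int(round(tau / dt))                     # delay in steps
    theta = rng.uniform(0.0, 0.1, n)             # near-synchronous start
    buf = [theta.copy() for _ in range(d + 1)]   # phase history buffer
    for _ in range(int(T / dt)):
        delayed = buf[0]                         # state tau seconds ago
        coupling = np.sin(delayed[senders] - theta[:, None]).mean(axis=1)
        theta = theta + dt * (omega + K * coupling)
        buf.append(theta.copy())
        buf.pop(0)
    return np.abs(np.exp(1j * theta).mean())

print(delayed_kuramoto())
```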
Probabilistic pathway construction.
Yousofshahi, Mona; Lee, Kyongbum; Hassoun, Soha
2011-07-01
Expression of novel synthesis pathways in host organisms amenable to genetic manipulations has emerged as an attractive metabolic engineering strategy to overproduce natural products, biofuels, biopolymers and other commercially useful metabolites. We present a pathway construction algorithm for identifying viable synthesis pathways compatible with balanced cell growth. Rather than exhaustive exploration, we investigate probabilistic selection of reactions to construct the pathways. Three different selection schemes are investigated for the selection of reactions: high metabolite connectivity, low connectivity and uniformly random. For all case studies, which involved a diverse set of target metabolites, the uniformly random selection scheme resulted in the highest average maximum yield. When compared to an exhaustive search enumerating all possible reaction routes, our probabilistic algorithm returned nearly identical distributions of yields, while requiring far less computing time (minutes vs. years). The pathways identified by our algorithm have previously been confirmed in the literature as viable, high-yield synthesis routes. Prospectively, our algorithm could facilitate the design of novel, non-native synthesis routes by efficiently exploring the diversity of biochemical transformations in nature. Copyright © 2011 Elsevier Inc. All rights reserved.
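The three selection schemes can be sketched in a few lines; the toy reaction connectivities below are hypothetical, and the weighting (connectivity, inverse connectivity, or constant) is one plausible reading of the schemes.

```python
import random

def pick_reaction(candidates, connectivity, scheme, rng):
    """Pick one extending reaction under the given selection scheme."""
    if scheme == "uniform":
        weights = [1.0] * len(candidates)
    elif scheme == "high":                        # favour high connectivity
        weights = [connectivity[r] for r in candidates]
    else:                                         # "low": favour the opposite
        weights = [1.0 / connectivity[r] for r in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

rng = random.Random(0)
connectivity = {"r1": 12, "r2": 3, "r3": 1}       # hypothetical network
counts = {r: 0 for r in connectivity}
for _ in range(10000):
    counts[pick_reaction(list(connectivity), connectivity, "low", rng)] += 1
print(counts)                                     # "low" should favour r3
```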
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α -uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α =2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c =e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c =1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α ≥3 , minimum vertex covers on α -uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c =e /(α -1 ) where the replica symmetry is broken.
NASA Astrophysics Data System (ADS)
Liu, Lian; Yang, Xiukun; Zhong, Mingliang; Liu, Yao; Jing, Xiaojun; Yang, Qin
2018-04-01
The discrete fractional Brownian incremental random (DFBIR) field is used to describe the irregular, random, and highly complex shapes of natural objects such as coastlines and biological tissues, for which traditional Euclidean geometry cannot be used. In this paper, an anisotropic variable window (AVW) directional operator based on the DFBIR field model is proposed for extracting spatial characteristics of Fourier transform infrared spectroscopy (FTIR) microscopic imaging. Probabilistic principal component analysis first extracts spectral features, and then the spatial features of the proposed AVW directional operator are combined with the former to construct a spatial-spectral structure, which increases feature-related information and helps a support vector machine classifier to obtain more efficient distribution-related information. Compared to Haralick’s grey-level co-occurrence matrix, Gabor filters, and local binary patterns (e.g. uniform LBPs, rotation-invariant LBPs, uniform rotation-invariant LBPs), experiments on three FTIR spectroscopy microscopic imaging datasets show that the proposed AVW directional operator is more advantageous in terms of classification accuracy, particularly for low-dimensional spaces of spatial characteristics.
Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification
NASA Astrophysics Data System (ADS)
Zhang, W.; Li, X.; Xiao, W.
2018-05-01
The increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Because training sites and training samples are inconsistent across organizations, traditional pixel-based image classification methods cannot achieve comparable results between them; object-oriented image classification shows great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used for this purpose. First, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Second, we performed a multi-scale segmentation procedure, taking the scale, hue, shape, compactness, and smoothness of the image into account to obtain appropriate parameters; using a region-merging algorithm starting from the single-pixel level, the optimal texture segmentation scale for each feature type was confirmed. The segmented objects were then used as classification units to compute spectral information (mean, maximum, minimum, brightness, and normalized values), spatial features (area, length, tightness, and shape rule of the image object), and texture features (mean, variance, and entropy of image objects) as classification features of the training samples. Based on the reference images and on-the-spot investigation of sampling points, typical training samples were selected uniformly and randomly for each type of ground object, and the value ranges of the spectral, texture, and spatial characteristics of each type in each feature layer were used to create the decision tree repository. Finally, with the help of high-resolution reference images, a random sampling method was used to conduct the field investigation, achieving an overall accuracy of 90.31% with a Kappa coefficient of 0.88. The classification method based on decision tree thresholds and the rule set developed from the repository outperforms the traditional methodology. Our decision tree repository and rule-set-based object-oriented classification technique is an effective method for producing comparable and consistent wetland data sets.
NASA Technical Reports Server (NTRS)
Hilbert, Kent; Pagnutti, Mary; Ryan, Robert; Zanoni, Vicki
2002-01-01
This paper discusses a method for detecting spatially uniform sites needed for radiometric characterization of remote sensing satellites. Such information is critical for scientific research applications of imagery having moderate to high resolutions (<30-m ground sampling distance (GSD)). Previously published literature indicated that areas within the African Sahara and Arabian deserts contained extremely uniform sites with respect to spatial characteristics. We developed an algorithm for detecting site uniformity and applied it to orthorectified Landsat Thematic Mapper (TM) imagery over eight uniform regions of interest. The algorithm's results were assessed using both medium-resolution (30-m GSD) Landsat 7 ETM+ and fine-resolution (<5-m GSD) IKONOS multispectral data collected over sites in Libya and Mali. Fine-resolution imagery over a Libyan site exhibited less than 1 percent nonuniformity. The research shows that Landsat TM products appear highly useful for detecting potential calibration sites for system characterization. In particular, the approach detected spatially uniform regions that frequently occur at multiple scales of observation.
Alsulays, Bader B; Fayed, Mohamed H; Alalaiwe, Ahmed; Alshahrani, Saad M; Alshetaili, Abdullah S; Alshehri, Sultan M; Alanazi, Fars K
2018-05-16
The objective of this study was to examine the influence of drug amount and mixing time on the homogeneity and content uniformity of a low-dose drug formulation during the dry mixing step using a new gentle-wing high-shear mixer. Moreover, the study investigated the influence of drug incorporation mode on the content uniformity of tablets manufactured by different methods. Albuterol sulfate was selected as a model drug and was blended with the other excipients at two levels, 1% w/w and 5% w/w, at an impeller speed of 300 rpm and a chopper speed of 3000 rpm for 30 min. Using a 1 ml unit side-sampling thief probe, triplicate samples were taken from nine positions in the mixer bowl at selected time points. Two methods were used to manufacture tablets: direct compression and wet granulation. The produced tablets were sampled at the beginning, middle, and end of the compression cycle. Analysis of variance indicated a significant effect (p < .05) of drug amount on the content uniformity of the powder blend and the corresponding tablets. For both the 1% w/w and 5% w/w formulations, incorporation of the drug in the granulating fluid provided tablets with excellent content uniformity and a very low relative standard deviation (∼0.61%) throughout the tableting cycle, compared to direct compression and to granulation with dry incorporation of the drug. Overall, the gentle-wing mixer is a good candidate for mixing low-dose cohesive drugs and provides tablets with acceptable content uniformity without a pre-blending step.
Non-uniform sampling: post-Fourier era of NMR data collection and processing.
Kazimierczuk, Krzysztof; Orekhov, Vladislav
2015-11-01
The invention of multidimensional techniques in the 1970s revolutionized NMR, making it the general tool of structural analysis of molecules and materials. In the most straightforward approach, the signal sampling in the indirect dimensions of a multidimensional experiment is performed in the same manner as in the direct dimension, i.e. with a grid of equally spaced points. This results in lengthy experiments with a resolution often far from optimum. To circumvent this problem, numerous sparse-sampling techniques have been developed in the last three decades, including two traditionally distinct approaches: the radial sampling and non-uniform sampling. This mini review discusses the sparse signal sampling and reconstruction techniques from the point of view of an underdetermined linear algebra problem that arises when a full, equally spaced set of sampled points is replaced with sparse sampling. Additional assumptions that are introduced to solve the problem, as well as the shape of the undersampled Fourier transform operator (visualized as so-called point spread function), are shown to be the main differences between various sparse-sampling methods. Copyright © 2015 John Wiley & Sons, Ltd.
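As an illustration of the point-spread-function view described above, the sketch below computes the PSF of a random sparse schedule as the Fourier transform of its sampling mask; the grid size and sparsity level are arbitrary choices, not values from the review.

import numpy as np

n_grid, n_sampled = 256, 64            # full grid vs. points actually measured
rng = np.random.default_rng(1)

mask = np.zeros(n_grid)
mask[rng.choice(n_grid, n_sampled, replace=False)] = 1.0   # random NUS schedule

psf = np.abs(np.fft.fft(mask)) / n_sampled   # normalised so the main peak is 1
# psf[0] is the main peak; the remaining values are the sampling artefacts.
# Lower sidelobes generally mean a better-conditioned reconstruction problem.
print(f"peak sidelobe level: {psf[1:].max():.3f}")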
Volumetric CT with sparse detector arrays (and application to Si-strip photon counters).
Sisniega, A; Zbijewski, W; Stayman, J W; Xu, J; Taguchi, K; Fredenberg, E; Lundqvist, Mats; Siewerdsen, J H
2016-01-07
Novel x-ray medical imaging sensors, such as photon counting detectors (PCDs) and large area CCD and CMOS cameras can involve irregular and/or sparse sampling of the detector plane. Application of such detectors to CT involves undersampling that is markedly different from the commonly considered case of sparse angular sampling. This work investigates volumetric sampling in CT systems incorporating sparsely sampled detectors with axial and helical scan orbits and evaluates performance of model-based image reconstruction (MBIR) with spatially varying regularization in mitigating artifacts due to sparse detector sampling. Volumetric metrics of sampling density and uniformity were introduced. Penalized-likelihood MBIR with a spatially varying penalty that homogenized resolution by accounting for variations in local sampling density (i.e. detector gaps) was evaluated. The proposed methodology was tested in simulations and on an imaging bench based on a Si-strip PCD (total area 5 cm × 25 cm) consisting of an arrangement of line sensors separated by gaps of up to 2.5 mm. The bench was equipped with translation/rotation stages allowing a variety of scanning trajectories, ranging from a simple axial acquisition to helical scans with variable pitch. Statistical (spherical clutter) and anthropomorphic (hand) phantoms were considered. Image quality was compared to that obtained with a conventional uniform penalty in terms of structural similarity index (SSIM), image uniformity, spatial resolution, contrast, and noise. Scan trajectories with intermediate helical width (~10 mm longitudinal distance per 360° rotation) demonstrated optimal tradeoff between the average sampling density and the homogeneity of sampling throughout the volume. For a scan trajectory with 10.8 mm helical width, the spatially varying penalty resulted in significant visual reduction of sampling artifacts, confirmed by a 10% reduction in minimum SSIM (from 0.88 to 0.8) and a 40% reduction in the dispersion of SSIM in the volume compared to the constant penalty (both penalties applied at optimal regularization strength). Images of the spherical clutter and wrist phantoms confirmed the advantages of the spatially varying penalty, showing a 25% improvement in image uniformity and 1.8 × higher CNR (at matched spatial resolution) compared to the constant penalty. The studies elucidate the relationship between sampling in the detector plane, acquisition orbit, sampling of the reconstructed volume, and the resulting image quality. They also demonstrate the benefit of spatially varying regularization in MBIR for scenarios with irregular sampling patterns. Such findings are important and integral to the incorporation of a sparsely sampled Si-strip PCD in CT imaging.
The Ciliate Paramecium Shows Higher Motility in Non-Uniform Chemical Landscapes
Giuffre, Carl; Hinow, Peter; Vogel, Ryan; Ahmed, Tanvir; Stocker, Roman; Consi, Thomas R.; Strickler, J. Rudi
2011-01-01
We study the motility behavior of the unicellular protozoan Paramecium tetraurelia in a microfluidic device that can be prepared with a landscape of attracting or repelling chemicals. We investigate the spatial distribution of the positions of the individuals at different time points with methods from spatial statistics and Poisson random point fields. This makes quantitative the informal notion of “uniform distribution” (or lack thereof). Our device is characterized by the absence of large systematic biases due to gravitation and fluid flow. It has the potential to be applied to the study of other aquatic chemosensitive organisms as well. This may result in better diagnostic devices for environmental pollutants. PMID:21494596
The uniform quantized electron gas revisited
NASA Astrophysics Data System (ADS)
Lomba, Enrique; Høye, Johan S.
2017-11-01
In this article we continue and extend our recent work on the correlation energy of the quantized electron gas of uniform density at temperature T=0 . As before, we utilize the methods, properties, and results obtained by means of classical statistical mechanics. These were extended to quantized systems via the Feynman path integral formalism. The latter translates the quantum problem into a classical polymer problem in four dimensions. Again, the well known RPA (random phase approximation) is recovered as a basic result which we then modify and improve upon. Here we analyze the condition of thermodynamic self-consistency. Our numerical calculations exhibit a remarkable agreement with well known results of a standard parameterization of Monte Carlo correlation energies.
NASA Astrophysics Data System (ADS)
Dadras, Sedigheh; Davoudiniya, Masoumeh
2018-05-01
This paper investigates and compares the effects of Ag nanoparticle and carbon nanotube (CNT) doping on the mechanical properties of the Y1Ba2Cu3O7-δ (YBCO) high-temperature superconductor. For this purpose, pure and doped YBCO samples were synthesized by the sol-gel method. Microstructural analysis of the samples was performed using X-ray diffraction (XRD). The crystallite size, lattice strain, and stress of the pure and doped YBCO samples were estimated by modified forms of Williamson-Hall (W-H) analysis, namely the uniform deformation model (UDM), the uniform deformation stress model (UDSM), and the size-strain plot method (SSP). The results show that the crystallite size, lattice strain, and stress of the YBCO samples decreased with Ag nanoparticle and CNT doping.
NASA Astrophysics Data System (ADS)
Yang, Chun; Quarles, C. A.
2007-10-01
We have used positron Doppler Broadening Spectroscopy (DBS) to investigate the uniformity of rubber-carbon black composite samples. The amount of carbon black added to a rubber sample is characterized by phr, the number of grams of carbon black per hundred grams of rubber. Typical concentrations in rubber tires are 50 phr. It has been shown that the S parameter measured by DBS depends on the phr of the sample, so the variation in carbon black concentration can be easily measured to 0.5 phr. In doing the experiments, we observed a dependence of the S parameter on small variations in the counting rate or deadtime. By carefully calibrating this deadtime correction, we can significantly reduce the experimental run time and thus determine the uniformity of extended samples more quickly.
Molecularly uniform poly(ethylene glycol) certified reference material
NASA Astrophysics Data System (ADS)
Takahashi, Kayori; Matsuyama, Shigetomo; Kinugasa, Shinichi; Ehara, Kensei; Sakurai, Hiromu; Horikawa, Yoshiteru; Kitazawa, Hideaki; Bounoshita, Masao
2015-02-01
A certified reference material (CRM) for poly(ethylene glycol) with no distribution in the degree of polymerization was developed. The degree of polymerization of the CRM was accurately determined to be 23. Supercritical fluid chromatography (SFC) was used to separate the molecularly uniform polymer from a standard commercial sample with wide polydispersity in its degree of polymerization. Through the use of a specific fractionation system coupled with SFC, we are able to obtain samples of poly(ethylene glycol) oligomer with exact degrees of polymerization, as required for a CRM produced by the National Metrology Institute of Japan.
Pre and post annealed low cost ZnO nanorods on seeded substrate
NASA Astrophysics Data System (ADS)
Nordin, M. N.; Kamil, Wan Maryam Wan Ahmad
2017-05-01
We report the photonic band gap (where light is confined) of low-cost ZnO nanorods grown by a two-step chemical bath deposition (CBD) method on glass substrates pre-treated with ZnO seeding layers of two different thicknesses, 100 nm (sample a) and 150 nm (sample b), deposited by radio-frequency magnetron sputtering. The samples were annealed at 600°C for 1 hour in air both before and after immersion in the chemical solution for the CBD process. UV-Visible-NIR spectrophotometry showed that both sample a and sample b achieved a wide band gap between 240 nm and 380 nm, within the UV range typical of ZnO; however, sample b provided better light confinement, which may be attributed to the difference in average nanorod size. Field Emission Scanning Electron Microscopy (FESEM) revealed well-oriented nanorods uniformly scattered across the surface when substrates were coated with the 100 nm seeding layer, whereas the 150 nm seeded sample showed a poor distribution of nanorods, probably due to defects in the sample. Finally, X-ray diffraction revealed that both samples are polycrystalline with a hexagonal wurtzite structure matching JCPDS No. 36-1451. The 100 nm pre-seeded sample was found to have a larger average crystallite size; however, sample b was suggested to have higher crystalline quality. In conclusion, sample b is recognized as the better candidate for future photonic applications due to its more apparent photonic band gap, which may be attributed to the more random distribution of the nanorods observed in the FESEM images as well as the higher crystalline quality suggested by the XRD measurements.
Li, Ruiying; Liu, Xiaoxi; Xie, Wei; Huang, Ning
2014-12-10
Sensor-deployment-based lifetime optimization is one of the most effective methods for prolonging the lifetime of a Wireless Sensor Network (WSN) by reducing distance-sensitive energy consumption. In this paper, data retransmission, a major consumption factor that is usually neglected in previous work, is considered. For a homogeneous WSN monitoring a circular target area with a centered base station, a sensor deployment model based on regular hexagonal grids is analyzed. To maximize the WSN lifetime, optimization models for both uniform and non-uniform deployment schemes are proposed, subject to constraints on coverage, connectivity, and successful transmission rate. Based on an analysis of data transmission in a data-gathering cycle, the WSN lifetime in the model can be obtained by quantifying the energy consumption at each sensor location. The results of case studies show that it is meaningful to consider data retransmission in lifetime optimization. In particular, our investigations indicate that, with the same lifetime requirement, the number of sensors needed in a non-uniform topology is much less than that in a uniform one. Finally, compared with a random scheme, simulation results further verify the advantage of our deployment model.
Stochastic species abundance models involving special copulas
NASA Astrophysics Data System (ADS)
Huillet, Thierry E.
2018-01-01
Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in Biology, we study three distinct toy models where copulas play a key role. In a first one, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In a second one, a quasi-copula problem arises in a flagged species abundance model. In a third model, we study completely random species abundance models in the hypercube as those, not of product type, with uniform margins and singular. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.
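A sketch of sampling from a Marshall-Olkin copula via its classical exponential-shock construction (individual shocks plus a common catastrophe), matching the extinction-with-catastrophe flavour of the first toy model; the rates below are illustrative, not values from the paper.

import numpy as np

rng = np.random.default_rng(5)

def marshall_olkin_sample(n, lam1=1.0, lam2=1.5, lam12=0.8):
    e1 = rng.exponential(1 / lam1, n)    # shock hitting species 1 only
    e2 = rng.exponential(1 / lam2, n)    # shock hitting species 2 only
    e12 = rng.exponential(1 / lam12, n)  # common catastrophe hitting both
    x, y = np.minimum(e1, e12), np.minimum(e2, e12)
    u = np.exp(-(lam1 + lam12) * x)      # survival transform: uniform margins
    v = np.exp(-(lam2 + lam12) * y)
    # common-shock events put positive mass on a curve in the unit square,
    # which is what gives the copula its singular component
    return u, v

u, v = marshall_olkin_sample(10_000)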
Local Neighbourhoods for First-Passage Percolation on the Configuration Model
NASA Astrophysics Data System (ADS)
Dereich, Steffen; Ortgiese, Marcel
2018-04-01
We consider first-passage percolation on the configuration model. Once the network has been generated each edge is assigned an i.i.d. weight modeling the passage time of a message along this edge. Then independently two vertices are chosen uniformly at random, a sender and a recipient, and all edges along the geodesic connecting the two vertices are coloured in red (in the case that both vertices are in the same component). In this article we prove local limit theorems for the coloured graph around the recipient in the spirit of Benjamini and Schramm. We consider the explosive regime, in which case the random distances are of finite order, and the Malthusian regime, in which case the random distances are of logarithmic order.
Multiphase contrast medium injection for optimization of computed tomographic coronary angiography.
Budoff, Matthew Jay; Shinbane, Jerold S; Child, Janis; Carson, Sivi; Chau, Alex; Liu, Stephen H; Mao, SongShou
2006-02-01
Electron beam angiography is a minimally invasive imaging technique. Adequate vascular opacification throughout the study remains a critical issue for image quality. We hypothesized that vascular image opacification and uniformity of vascular enhancement between slices can be improved using multiphase contrast medium injection protocols. We enrolled 244 consecutive patients who were randomized to three different injection protocols: single-phase contrast medium injection (Group 1), dual-phase contrast medium injection with each phase at a different injection rate (Group 2), and a three-phase injection with two phases of contrast medium injection followed by a saline injection phase (Group 3). Parameters measured were aortic opacification based on Hounsfield units and uniformity of aortic enhancement at predetermined slices (locations from top [level 1] to base [level 60]). In Group 1, contrast opacification differed across seven predetermined locations (scan levels: 1st versus 60th, P < .05), demonstrating significant nonuniformity. In Group 2, there was more uniform vascular enhancement, with no significant differences between the first 50 slices (P > .05). In Group 3, there was greater uniformity of vascular enhancement and higher mean Hounsfield units value across all 60 images, from the aortic root to the base of the heart (P < .05). The three-phase injection protocol improved vascular opacification at the base of the heart, as well as uniformity of arterial enhancement throughout the study.
Formation and evolution of magnetised filaments in wind-swept turbulent clumps
NASA Astrophysics Data System (ADS)
Banda-Barragan, Wladimir Eduardo; Federrath, Christoph; Crocker, Roland M.; Bicknell, Geoffrey Vincent; Parkin, Elliot Ross
2015-08-01
Using high-resolution three-dimensional simulations, we examine the formation and evolution of filamentary structures arising from magnetohydrodynamic interactions between supersonic winds and turbulent clumps in the interstellar medium. Previous numerical studies assumed homogeneous density profiles, null velocity fields, and uniformly distributed magnetic fields as the initial conditions for interstellar clumps. Here, we have, for the first time, incorporated fractal clumps with log-normal density distributions, random velocity fields and turbulent magnetic fields (superimposed on top of a uniform background field). Disruptive processes, instigated by dynamical instabilities and akin to those observed in simulations with uniform media, lead to stripping of clump material and the subsequent formation of filamentary tails. The evolution of filaments in uniform and turbulent models is, however, radically different as evidenced by comparisons of global quantities in both scenarios. We show, for example, that turbulent clumps produce tails with higher velocity dispersions, increased gas mixing, greater kinetic energy, and lower plasma beta than their uniform counterparts. We attribute the observed differences to: 1) the turbulence-driven enhanced growth of dynamical instabilities (e.g. Kelvin-Helmholtz and Rayleigh-Taylor instabilities) at fluid interfaces, and 2) the localised amplification of magnetic fields caused by the stretching of field lines trapped in the numerous surface deformations of fractal clumps. We briefly discuss the implications of this work to the physics of the optical filaments observed in the starburst galaxy M82.
Introducing sampling entropy in repository based adaptive umbrella sampling
NASA Astrophysics Data System (ADS)
Zheng, Han; Zhang, Yingkai
2009-12-01
Determining free energy surfaces along chosen reaction coordinates is a common and important task in simulating complex systems. Due to the complexity of energy landscapes and the existence of high barriers, one widely pursued objective in developing efficient simulation methods is to achieve uniform sampling among the thermodynamic states of interest. In this work, we have demonstrated sampling entropy (SE) as an excellent indicator for uniform sampling as well as for the convergence of free energy simulations. By introducing SE and the concentration theorem into the biasing-potential-updating scheme, we have further improved the adaptivity, robustness, and applicability of our recently developed repository based adaptive umbrella sampling (RBAUS) approach [H. Zheng and Y. Zhang, J. Chem. Phys. 128, 204106 (2008)]. Besides simulations of one dimensional free energy profiles for various systems, the generality and efficiency of this new RBAUS-SE approach have been further demonstrated by determining two dimensional free energy surfaces for the alanine dipeptide in gas phase as well as in water.
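A minimal sketch of sampling entropy as a uniformity indicator, following the idea above: the normalised entropy of the histogram along the reaction coordinate reaches 1 only when all bins are visited equally. The bin count and normalisation are our choices, not the RBAUS-SE implementation.

import numpy as np

def sampling_entropy(samples, bins, range_):
    """Normalised Shannon entropy of the sampling histogram (1.0 = uniform)."""
    counts, _ = np.histogram(samples, bins=bins, range=range_)
    p = counts / counts.sum()
    p = p[p > 0]                                  # convention: 0*log(0) = 0
    return -np.sum(p * np.log(p)) / np.log(bins)

rng = np.random.default_rng(2)
print(sampling_entropy(rng.uniform(0, 1, 10_000), 50, (0, 1)))     # close to 1.0
print(sampling_entropy(rng.normal(0.5, 0.1, 10_000), 50, (0, 1)))  # clearly < 1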
Higo, Junichi; Dasgupta, Bhaskar; Mashimo, Tadaaki; Kasahara, Kota; Fukunishi, Yoshifumi; Nakamura, Haruki
2015-07-30
A novel enhanced conformational sampling method, virtual-system-coupled adaptive umbrella sampling (V-AUS), was proposed to compute the 300-K free-energy landscape for flexible molecular docking, where a virtual degree of freedom was introduced to control the sampling. This degree of freedom interacts with the biomolecular system. V-AUS was applied to complex formation of two disordered amyloid-β (Aβ30-35) peptides in a periodic box filled with an explicit solvent. An interpeptide distance was defined as the reaction coordinate, along which sampling was enhanced. A uniform conformational distribution was obtained covering a wide range of interpeptide distances from the bound to the unbound states. The 300-K free-energy landscape was characterized by thermodynamically stable basins of antiparallel and parallel β-sheet complexes and some other complex forms. Helices were frequently observed when the two peptides contacted loosely or fluctuated freely without interpeptide contacts. We observed that V-AUS converged to a uniform distribution more effectively than conventional AUS sampling did. © 2015 Wiley Periodicals, Inc.
Measurement of refractive index of photopolymer for holographic gratings
NASA Astrophysics Data System (ADS)
Watanabe, Eriko; Mizuno, Jun; Fujikawa, Chiemi; Kodate, Kashiko
2007-02-01
We have attempted to measure directly the small-scale variation of optical path length in photopolymer samples. For samples of uniform thickness, the measured quantity should be proportional to the refractive index of the photopolymer. The system is based on a Mach-Zehnder interferometer using a phase-locking technique and measures the change in optical path length while the sample is scanned across the optical axis. The spatial resolution is estimated to be 2 μm, limited by the sample thickness. The path-length resolution is estimated to be 6 nm, which corresponds to a refractive-index change of less than 10^-3 for a sample 10 μm thick. The measurements showed clearly that the refractive index of the photopolymer is not simply proportional to the exposure energy, contrary to conventional photosensitive materials such as silver halide emulsion and dichromated gelatine. They also revealed refractive-index fluctuations in uniformly exposed photopolymer samples, which explains the milky appearance sometimes observed in thick samples.
2010-01-01
Background The objectives of this research were (a) to describe the current status of grant review for biomedical projects and programmes from the perspectives of international funding organisations and grant reviewers, and (b) to explore funders' interest in developing uniform requirements for grant review aimed at making the processes and practices of grant review more consistent, transparent, and user friendly. Methods A survey of a convenience sample of 57 international public and private organisations that give grants for biomedical research was conducted. Nine participating organisations then emailed a random sample of their external reviewers an invitation to participate in a second electronic survey. Results A total of 28 of 57 (49%) organisations in 19 countries responded. Organisations reported these problems as frequent or very frequent: declined review requests (16), late reports (10), administrative burden (7), difficulty finding new reviewers (4), and reviewers not following guidelines (4). The administrative burden of the process was reported to have increased over the past 5 years. In all, 17 organisations supported the idea of uniform requirements for conducting grant review and for formatting grant proposals. A total of 258/418 (62%) reviewers responded from 22 countries. Of those, 48% (123/258) said their institutions encouraged grant review, yet only 7% (17/258) were given protected time and 74% (192/258) received no academic recognition for this. Reviewers rated these factors as extremely or very important in deciding to review proposals: 51% (131/258) desire to support external fairness, 47% (120/258) professional duty, 46% (118/258) relevance of the proposal's topic, 43% (110/258) wanting to keep up to date, 40% (104/258) desire to avoid suppression of innovation. Only 16% (42/258) reported that guidance from funders was very clear. In all, 85% (220/258) had not been trained in grant review and 64% (166/258) wanted this. Conclusions Funders reported a growing workload of biomedical proposals that is getting harder to peer review. Just under half of grant reviewers take part for the good of science and professional development, but many report lack of academic and practical support and clear guidance. Around two-thirds of funders supported the development of uniform requirements for the format and peer review of proposals to help ease the current situation. PMID:20961441
Ahn, WonSool; Lee, Joon-Man
2015-11-01
The effects of MWCNT on the cell size, cell uniformity, thermal conductivity, bulk density, foaming kinetics, and compressive mechanical properties of rigid PUFs were investigated. To obtain a more uniformly dispersed state of MWCNT, a grease-type master batch of MWCNT/surfactant was prepared on a three-roll mill. The average cell size of the PUF samples decreased from 185.1 μm for the neat PUF to 162.9 μm for the sample with 0.01 phr of MWCNT, and cell uniformity was also enhanced, with standard cell-size deviations of 61.7 and 35.2, respectively. While the thermal conductivity of the neat PUF was 0.0222 W/m·K, that of the sample with 0.01 phr of MWCNT was 0.0204 W/m·K, an 8.2% reduction. The bulk density of the PUF samples was nearly constant at 30.0 ± 1.0 g/cm3 regardless of MWCNT. Temperature profiles during the foaming process gave an indirect indication of the nucleation effect of MWCNT in the PUF foaming system, showing a faster and higher temperature rise with time. The compressive yield stress was nearly constant at 0.030 × 10^5 Pa regardless of MWCNT.
Screenometer: a device for sampling vegetative screening in forested areas
Victor A. Rudis
1985-01-01
A device for estimating the degree to which vegetation and other obstructions screen forested areas has been adapted to an extensive sampling design for forest surveys. Procedures are recommended to assure that uniform measurements can be made. Examination of sources of sampling variation (observers, points within sampled locations, series of observations within points...
An approach for addressing hard-to-detect hot spots.
Abelquist, Eric W; King, David A; Miller, Laurence F; Viars, James A
2013-05-01
The Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) survey approach comprises systematic random sampling coupled with radiation scanning to assess the acceptability of potential hot spots. Hot spot identification for some radionuclides may not be possible due to the very weak gamma or x-ray radiation they emit; these hard-to-detect nuclides are unlikely to be identified by field scans. Similarly, scanning technology is not yet available for chemical contamination. For both hard-to-detect nuclides and chemical contamination, hot spots are only identified via volumetric sampling. The remedial investigation and cleanup of sites under the Comprehensive Environmental Response, Compensation, and Liability Act typically includes the collection of samples over relatively large exposure units, and concentration limits are applied assuming the contamination is more or less uniformly distributed. However, data collected from contaminated sites demonstrate contamination is often highly localized. These highly localized areas, or hot spots, will only be identified if sample densities are high or if the environmental characterization program happens to sample directly from the hot spot footprint. This paper describes a Bayesian approach for addressing hard-to-detect nuclide and chemical hot spots. The approach begins by using available data (e.g., as collected using the standard approach) to predict the probability that an unacceptable hot spot is present somewhere in the exposure unit. This Bayesian approach may even be coupled with the graded sampling approach to optimize hot spot characterization. Once the investigator concludes that the presence of hot spots is likely, the surveyor should use the data quality objectives process to generate an appropriate sampling campaign that optimizes the identification of risk-relevant hot spots.
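A hedged sketch of the kind of Bayesian update described above: given a prior probability that an exposure unit contains a hot spot and n random samples that all came back clean, compute the posterior probability that a hot spot is still present. It assumes each sample independently lands in the hot spot footprint with probability equal to its areal fraction, a simplification of the paper's method.

def hotspot_posterior(prior, area_fraction, n_clean_samples):
    """P(hot spot present | n random samples, none landed in it)."""
    miss_all = (1.0 - area_fraction) ** n_clean_samples  # P(all miss | present)
    return prior * miss_all / (prior * miss_all + (1.0 - prior))

# A hot spot covering 1% of the exposure unit, 50% prior, 30 clean samples:
print(hotspot_posterior(0.5, 0.01, 30))   # ~0.43: still quite likely present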
Analysis of serum and cerebrospinal fluid in clinically normal adult miniature donkeys.
Mozaffari, A A; Samadieh, H
2013-09-01
To establish reference intervals for serum and cerebrospinal fluid (CSF) parameters in clinically healthy adult miniature donkeys. Experiments were conducted on 10 female and 10 male clinically normal adult miniature donkeys, randomly selected from five herds. Lumbosacral CSF collection was performed with the sedated donkey in the standing position. Cell analysis was performed immediately after the samples were collected. Blood samples were obtained from the jugular vein immediately after CSF sample collection. Sodium, potassium, glucose, urea nitrogen, total protein, calcium, chloride, phosphorous and magnesium concentrations were measured in CSF and serum samples. A paired t-test was used to compare mean values between female and male donkeys. The CSF was uniformly clear, colourless and free from flocculent material, with a specific gravity of 1.002. The range of total nucleated cell counts was 2-4 cells/μL. The differential white cell count comprised only small lymphocytes. No erythrocytes or polymorphonuclear cells were observed on cytological examination. Reference values were obtained for biochemical analysis of serum and CSF. Gender had no effect on any variables measured in serum or CSF (p>0.05). CSF analysis can provide important information in addition to that gained by clinical examination. CSF analysis has not previously been performed in miniature donkeys; this is the first report on the subject. In the present study, reference intervals for total nucleated cell count, total protein, glucose, urea nitrogen, sodium, potassium, chloride, calcium, phosphorous and magnesium concentrations of serum and CSF were determined for male and female miniature donkeys.
Dual-wavelength OR-PAM with compressed sensing for cell tracking in a 3D cell culture system
NASA Astrophysics Data System (ADS)
Huang, Rou-Xuan; Fu, Ying; Liu, Wang; Ma, Yu-Ting; Hsieh, Bao-Yu; Chen, Shu-Ching; Sun, Mingjian; Li, Pai-Chi
2018-02-01
Monitoring the dynamic interactions of T cells migrating toward a tumor helps in understanding how cancer immunotherapy works. Optical-resolution photoacoustic microscopy (OR-PAM) can provide not only high spatial resolution but also deeper penetration than conventional optical microscopy. With the aid of exogenous contrast agents, dual-wavelength OR-PAM can map the distribution of CD8+ cytotoxic T lymphocytes (CTLs) labeled with gold nanospheres (AuNS) under 523 nm laser irradiation and Hepta1-6 tumor spheres labeled with indocyanine green (ICG) under 800 nm irradiation. However, at a 1 kHz laser pulse repetition frequency, it takes approximately 20 minutes to acquire a full sample volume of 160 × 160 × 150 μm3. To increase the imaging rate, we propose a random non-uniform sparse sampling mechanism for fast photoacoustic data acquisition. The image recovery process is formulated as low-rank matrix recovery (LRMR) based on compressed sensing (CS) theory, and we show that the image can be stably recovered from significantly fewer measurements via a nuclear-norm minimization problem while maintaining image quality. In this study, we use dual-wavelength OR-PAM with CS to visualize T cell trafficking in a 3D culture system at higher temporal resolution. Data acquisition time is reduced by 40% for such a sample volume at a sampling density of 0.5. The imaging system shows potential for understanding dynamic cellular processes in the preclinical screening of anti-cancer drugs.
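The LRMR step can be sketched with a soft-impute style nuclear-norm algorithm (Mazumder, Hastie & Tibshirani 2010) as a stand-in for the minimization mentioned above; the matrix size, rank, sampling density, and threshold are illustrative, not the study's settings.

import numpy as np

def soft_impute(m_obs, mask, lam=1.0, n_iter=300):
    """Low-rank recovery from the entries where mask == 1 (soft-impute style)."""
    x = np.zeros_like(m_obs)
    for _ in range(n_iter):
        z = mask * m_obs + (1 - mask) * x           # impute unobserved entries
        u, s, vt = np.linalg.svd(z, full_matrices=False)
        x = (u * np.maximum(s - lam, 0.0)) @ vt     # soft-threshold singular values
    return x

rng = np.random.default_rng(3)
truth = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))  # rank-2 "image"
mask = (rng.random(truth.shape) < 0.5).astype(float)  # 0.5 sampling density
rec = soft_impute(truth * mask, mask)
print(np.linalg.norm(rec - truth) / np.linalg.norm(truth))  # small relative error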
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tsung-Jui; Wu, Yuh-Renn, E-mail: yrwu@ntu.edu.tw; Shivaraman, Ravi
2014-09-21
In this paper, we describe the influence of the intrinsic indium fluctuation in InGaN quantum wells on the carrier transport, efficiency droop, and emission spectrum in GaN-based light emitting diodes (LEDs). Both real and randomly generated indium fluctuations were used in 3D simulations and compared to quantum wells with a uniform indium distribution. We found that, without further hypotheses, the simulations of electrical and optical properties in LEDs such as carrier transport, radiative and Auger recombination, and efficiency droop are greatly improved by considering natural nanoscale indium fluctuations.
Small violations of Bell inequalities for multipartite pure random states
NASA Astrophysics Data System (ADS)
Drumond, Raphael C.; Duarte, Cristhiano; Oliveira, Roberto I.
2018-05-01
For any finite number of parts, measurements, and outcomes in a Bell scenario, we estimate the probability of random N-qudit pure states to substantially violate any Bell inequality with uniformly bounded coefficients. We prove that under some conditions on the local dimension, the probability to find any significant amount of violation goes to zero exponentially fast as the number of parts goes to infinity. In addition, we also prove that if the number of parts is at least 3, this probability also goes to zero as the local Hilbert space dimension goes to infinity.
Standard random number generation for MBASIC
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1976-01-01
A machine-independent algorithm is presented and analyzed for generating pseudorandom numbers suitable for the standard MBASIC system. The algorithm used is the polynomial congruential or linear recurrence modulo 2 method. Numbers, formed as nonoverlapping adjacent 28-bit words taken from the bit stream produced by the recurrence a_{m+532} = a_{m+37} + a_m (mod 2), do not repeat within the projected age of the solar system, show no ensemble correlation, exhibit uniform distribution of adjacent numbers up to 19 dimensions, and do not deviate from random runs-up and runs-down behavior.
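A runnable sketch of the quoted recurrence: the bit stream obeys a_{m+532} = a_{m+37} + a_m (mod 2), and nonoverlapping adjacent 28-bit words become deviates in [0, 1). The seeding below is arbitrary, since the abstract does not specify the initialisation procedure.

from collections import deque
import random

def bit_stream(seed_bits):
    """Yield bits of the recurrence a[m+532] = a[m+37] XOR a[m]."""
    assert len(seed_bits) == 532 and any(seed_bits)  # must not be all zeros
    buf = deque(seed_bits, maxlen=532)   # holds a[m] .. a[m+531]
    while True:
        yield buf[0]                     # emit a[m]
        buf.append(buf[37] ^ buf[0])     # a[m+532]; the deque drops a[m]

def deviates(seed_bits, word_len=28):
    """Pack nonoverlapping adjacent 28-bit words into floats in [0, 1)."""
    bits = bit_stream(seed_bits)
    while True:
        word = 0
        for _ in range(word_len):
            word = (word << 1) | next(bits)
        yield word / (1 << word_len)

random.seed(7)
gen = deviates([random.randint(0, 1) for _ in range(532)])
print([round(next(gen), 6) for _ in range(5)])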
ERIC Educational Resources Information Center
Rojas-Trigos, J. B.; Bermejo-Arenas, J. A.; Marin, E.
2012-01-01
In this paper, some heat transfer characteristics of a sample that is uniformly heated on one of its surfaces by a power density modulated by a periodic square wave are discussed. The solution of this problem has two contributions: a transient term and an oscillatory term superposed on it. The analytical solution is compared to…
Sarang, S; Sastry, S K; Gaines, J; Yang, T C S; Dunne, P
2007-06-01
The electrical conductivity of food components is critical to ohmic heating. Food components of different electrical conductivities heat at different rates. While equal electrical conductivities of all phases are desirable, real food products may behave differently. In the present study involving chicken chow mein consisting of a sauce and different solid components, celery, water chestnuts, mushrooms, bean sprouts, and chicken, it was observed that the sauce was more conductive than all solid components over the measured temperature range. To improve heating uniformity, a blanching method was developed to increase the ionic content of the solid components. By blanching different solid components in a highly conductive sauce at 100 degrees C for different lengths of time, it was possible to adjust their conductivity to that of the sauce. Chicken chow mein samples containing blanched particulates were compared with untreated samples with respect to ohmic heating uniformity at 60 Hz up to 140 degrees C. All components of the treated product containing blanched solids heated more uniformly than untreated product. In sensory tests, 3 different formulations of the blanched product showed good quality attributes and overall acceptability, demonstrating the practical feasibility of the blanching protocol.
Ultra-low power, highly uniform polymer memory by inserted multilayer graphene electrode
NASA Astrophysics Data System (ADS)
Jang, Byung Chul; Seong, Hyejeong; Kim, Jong Yun; Koo, Beom Jun; Kim, Sung Kyu; Yang, Sang Yoon; Gap Im, Sung; Choi, Sung-Yool
2015-12-01
Filament type resistive random access memory (RRAM) based on polymer thin films is a promising device for next generation, flexible nonvolatile memory. However, the resistive switching nonuniformity and the high power consumption found in the general filament type RRAM devices present critical issues for practical memory applications. Here, we introduce a novel approach not only to reduce the power consumption but also to improve the resistive switching uniformity in RRAM devices based on poly(1,3,5-trimethyl-3,4,5-trivinyl cyclotrisiloxane) by inserting multilayer graphene (MLG) at the electrode/polymer interface. The resistive switching uniformity was thereby significantly improved, and the power consumption was markedly reduced by 250 times. Furthermore, the inserted MLG film enabled a transition of the resistive switching operation from unipolar resistive switching to bipolar resistive switching and induced self-compliance behavior. The findings of this study can pave the way toward a new area of application for graphene in electronic devices.
Wilder-Smith, A; Lover, A; Kittayapong, P; Burnham, G
2011-06-01
Dengue infection causes a significant economic, social and medical burden in affected populations in over 100 countries in the tropics and sub-tropics. Current dengue control efforts have generally focused on vector control but have not shown major impact. School-aged children are especially vulnerable to infection, due to sustained human-vector-human transmission in the close proximity environments of schools. Infection in children has a higher rate of complications, including dengue hemorrhagic fever and shock syndromes, than infections in adults. There is an urgent need for integrated and complementary population-based strategies to protect vulnerable children. We hypothesize that insecticide-treated school uniforms will reduce the incidence of dengue in school-aged children. The hypothesis would need to be tested in a community based randomized trial. If proven to be true, insecticide-treated school uniforms would be a cost-effective and scalable community based strategy to reduce the burden of dengue in children. Copyright © 2011 Elsevier Ltd. All rights reserved.
Population pharmacokinetics of valnemulin in swine.
Zhao, D H; Zhang, Z; Zhang, C Y; Liu, Z C; Deng, H; Yu, J J; Guo, J P; Liu, Y H
2014-02-01
This study was carried out in 121 pigs to develop a population pharmacokinetic (PPK) model by oral (p.o.) administration of valnemulin at a single dose of 10 mg/kg. Serum biochemistry parameters of each pig were determined prior to drug administration. Three to five blood samples were collected at random time points, but uniformly distributed in the absorption, distribution, and elimination phases of drug disposition. Plasma concentrations of valnemulin were determined by high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). The concentration-time data were fitted to PPK models using nonlinear mixed effect modeling (NONMEM) with the G77 FORTRAN compiler. NONMEM runs were executed using Wings for NONMEM. Fixed effects of weight, age, and sex as well as biochemistry parameters, which may influence the PK of valnemulin, were investigated. The drug concentration-time data were adequately described by a one-compartmental model with first-order absorption. A random effect model of valnemulin revealed a pattern of log-normal distribution, and it satisfactorily characterized the observed interindividual variability. The distribution of random residual errors, however, suggested an additive model for the initial phase (<12 h) followed by a combined model that consists of both proportional and additive features (≥ 12 h), so that the intra-individual variability could be sufficiently characterized. Covariate analysis indicated that body weight had a conspicuous effect on valnemulin clearance (CL/F). The estimated population PK values of Ka, V/F and CL/F were 0.292/h, 63.0 L and 41.3 L/h, respectively. © 2013 John Wiley & Sons Ltd.
Turbulent, Extreme Multi-zone Model for Simulating Flux and Polarization Variability in Blazars
NASA Astrophysics Data System (ADS)
Marscher, Alan P.
2014-01-01
The author presents a model for variability of the flux and polarization of blazars in which turbulent plasma flowing at a relativistic speed down a jet crosses a standing conical shock. The shock compresses the plasma and accelerates electrons to energies up to γ_max ≳ 10^4 times their rest-mass energy, with the value of γ_max determined by the direction of the magnetic field relative to the shock front. The turbulence is approximated in a computer code as many cells, each with a uniform magnetic field whose direction is selected randomly. The density of high-energy electrons in the plasma changes randomly with time in a manner consistent with the power spectral density of flux variations derived from observations of blazars. The variations in flux and polarization are therefore caused by continuous noise processes rather than by singular events such as explosive injection of energy at the base of the jet. Sample simulations illustrate the behavior of flux and linear polarization versus time that such a model produces. The variations in γ-ray flux generated by the code are often, but not always, correlated with those at lower frequencies, and many of the flares are sharply peaked. The mean degree of polarization of synchrotron radiation is higher and its timescale of variability shorter toward higher frequencies, while the polarization electric vector sometimes randomly executes apparent rotations. The slope of the spectral energy distribution exhibits sharper breaks than can arise solely from energy losses. All of these results correspond to properties observed in blazars.
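A toy Monte Carlo of the many-cell picture: each cell has a uniform field with a random orientation, so its polarization angle is uniform, and the net degree of polarization of N equal-flux cells falls off roughly as p_max/sqrt(N). This is our own illustration of the scaling, not the author's radiative-transfer code.

import numpy as np

rng = np.random.default_rng(4)
p_max = 0.75   # maximum synchrotron polarization for a perfectly uniform field

def net_polarization(n_cells):
    chi = rng.uniform(0.0, np.pi, n_cells)   # random E-vector angle per cell
    q = p_max * np.cos(2 * chi)              # Stokes q of each equal-flux cell
    u = p_max * np.sin(2 * chi)              # Stokes u of each equal-flux cell
    return np.hypot(q.mean(), u.mean())      # polarization degree of the sum

for n in (10, 100, 1000):
    mean_p = np.mean([net_polarization(n) for _ in range(200)])
    print(n, round(float(mean_p), 3), "vs p_max/sqrt(n) =", round(p_max / np.sqrt(n), 3))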
Distributing Radiant Heat in Insulation Tests
NASA Technical Reports Server (NTRS)
Freitag, H. J.; Reyes, A. R.; Ammerman, M. C.
1986-01-01
Thermally radiating blanket of stepped thickness distributes heat over insulation sample during thermal vacuum testing. Woven of silicon carbide fibers, blanket spreads heat from quartz lamps evenly over insulation sample. Because of fewer blanket layers toward periphery of sample, more heat initially penetrates there for more uniform heat distribution.
ERIC Educational Resources Information Center
Yi, Qing; Zhang, Jinming; Chang, Hua-Hua
2006-01-01
Chang and Zhang (2002, 2003) proposed several baseline criteria for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria were obtained from theoretical derivations that assumed uniformly randomized item selection. The current study investigated potential damage caused…
Magnetic noise as the cause of the spontaneous magnetization reversal of RE–TM–B permanent magnets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dmitriev, A. I., E-mail: aid@icp.ac.ru; Talantsev, A. D., E-mail: artgtx32@mail.ru; Kunitsyna, E. I.
2016-08-15
The relation between the macroscopic spontaneous magnetization reversal (magnetic viscosity) of (NdDySm)(FeCo)B alloys and the spectral characteristics of magnetic noise, which is caused by the random microscopic processes of thermally activated domain wall motion in a potential landscape with uniformly distributed potential barrier heights, is found.
Experimental Evaluation of Field Trips on Instruction in Vocational Agriculture.
ERIC Educational Resources Information Center
McCaslin, Norval L.
To determine the effect of field trips on student achievement in each of four subject matter areas in vocational agriculture, 12 schools offering approved programs were randomly selected and divided into a treatment group and a control group. Uniform teaching outlines and reference materials were provided to each group. While no field trips were…
Electrolytic plating apparatus for discrete microsized particles
Mayer, Anton
1976-11-30
Method and apparatus are disclosed for electrolytically producing very uniform coatings of a desired material on discrete microsized particles. Agglomeration or bridging of the particles during the deposition process is prevented by imparting sufficient random motion to the particles that they are never in contact with a powered cathode long enough for agglomeration or bridging to occur.
Electroless plating apparatus for discrete microsized particles
Mayer, Anton
1978-01-01
Method and apparatus are disclosed for producing very uniform coatings of a desired material on discrete microsized particles by electroless techniques. Agglomeration or bridging of the particles during the deposition process is prevented by imparting sufficient random motion to the particles that they are never in contact with each other long enough for agglomeration or bridging to occur.
Robin A. J. Taylor; Daniel A. Herms; Louis R. Iverson
2008-01-01
The dispersal of organisms is rarely random, although diffusion processes can be useful models for movement in approximately homogeneous environments. However, the environments through which all organisms disperse are far from uniform at all scales. The emerald ash borer (EAB), Agrilus planipennis, is obligate on ash (Fraxinus spp...
Fermilab | Science | Historic Results
[Page fragment; only partial content is recoverable] … quark since the discovery of the bottom quark at Fermilab through fixed-target experiments in 1977. … Researchers previously had assumed that cosmic rays approach the Earth uniformly from random [directions] … [cosmic rays that] impact the Earth generally come from the direction of active galactic nuclei.
Hsieh, Anne M-Y; Polyakova, Olena; Fu, Guodong; Chazen, Ronald S; MacMillan, Christina; Witterick, Ian J; Ralhan, Ranju; Walfish, Paul G
2018-04-13
Recognition of noninvasive follicular thyroid neoplasms with papillary-like nuclear features (NIFTP), distinguishing them from invasive malignant encapsulated follicular variant of papillary thyroid carcinoma (EFVPTC), can prevent overtreatment of NIFTP patients. We and others have previously reported that programmed death-ligand 1 (PD-L1) is a useful biomarker in thyroid tumors; however, all reports to date have relied on manual scoring, which is time consuming as well as subject to individual bias. Consequently, we developed a digital image analysis (DIA) protocol for cytoplasmic and membranous stain quantitation (ThyApp) and evaluated three tumor sampling methods [systematic uniform random sampling, hotspot nucleus, and hotspot nucleus/3,3'-Diaminobenzidine (DAB)]. A patient cohort of 153 cases consisting of 48 NIFTP, 44 EFVPTC, 26 benign nodules and 35 encapsulated follicular lesions/neoplasms with lymphocytic thyroiditis (LT) was studied. ThyApp quantitation of PD-L1 expression revealed a significant difference between invasive EFVPTC and NIFTP, but none between NIFTP and benign nodules. ThyApp integrated with the hotspot nucleus tumor sampling method proved to be the most clinically relevant, consumed the least processing time, and eliminated interobserver variance. In conclusion, the fully automatic DIA algorithm developed using a histomorphological approach objectively quantitated PD-L1 expression in encapsulated thyroid neoplasms and outperformed manual scoring in reproducibility and efficiency.
Random walks of colloidal probes in viscoelastic materials
NASA Astrophysics Data System (ADS)
Khan, Manas; Mason, Thomas G.
2014-04-01
To overcome limitations of using a single fixed time step in random walk simulations, such as those that rely on the classic Wiener approach, we have developed an algorithm for exploring random walks based on random temporal steps that are uniformly distributed in logarithmic time. This improvement enables us to generate random-walk trajectories of probe particles that span a highly extended dynamic range in time, thereby facilitating the exploration of probe motion in soft viscoelastic materials. By combining this faster approach with a Maxwell-Voigt model (MVM) of linear viscoelasticity, based on a slowly diffusing harmonically bound Brownian particle, we rapidly create trajectories of spherical probes in soft viscoelastic materials over more than 12 orders of magnitude in time. Appropriate windowing of these trajectories over different time intervals demonstrates that the random walk for the MVM is neither self-similar nor self-affine, even if the viscoelastic material is isotropic. We extend this approach to spatially anisotropic viscoelastic materials, using binning to calculate the anisotropic mean square displacements and creep compliances along different orthogonal directions. The elimination of a fixed time step in simulations of random processes, including random walks, opens up interesting possibilities for modeling dynamics and response over a highly extended temporal dynamic range.
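A minimal sketch of the core idea may help: sampling times are drawn uniformly in logarithmic time rather than on a fixed grid, so one trajectory spans many decades cheaply. Plain Brownian increments are assumed here for simplicity; the paper combines this with a Maxwell-Voigt model:

```python
import numpy as np

# Sketch, assuming free diffusion rather than the paper's MVM dynamics.
rng = np.random.default_rng(0)

def log_time_walk(t_min=1e-6, t_max=1e6, n_points=2000, D=1.0):
    # Times uniform in log10(t) over 12 decades, then sorted.
    t = np.sort(10.0 ** rng.uniform(np.log10(t_min), np.log10(t_max), n_points))
    dt = np.diff(t, prepend=0.0)
    # Free diffusion: each increment is Gaussian with variance 2*D*dt.
    x = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D * dt)))
    return t, x

t, x = log_time_walk()   # one 1-D probe trajectory over 12 decades in time
```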
Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej
2015-01-01
Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357
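A minimal sketch of the least-significant-bit concatenation idea described above follows; read_sensor() is a hypothetical stand-in for the on-board sensor reads (temperature, humidity, light) analyzed in the paper:

```python
import math
from collections import Counter

# Sketch: harvest the noisiest bit of each reading and pack bits into bytes.
def harvest_bytes(read_sensor, n_bytes=1024):
    out, bits = bytearray(), []
    while len(out) < n_bytes:
        bits.append(read_sensor() & 1)      # keep only the least-significant bit
        if len(bits) == 8:
            out.append(int("".join(map(str, bits)), 2))
            bits.clear()
    return bytes(out)

def shannon_entropy_bits_per_byte(data):
    # The paper reports ~7.9 bits/byte; 8.0 is the maximum for byte data.
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())
```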
Methodology Series Module 5: Sampling Strategies.
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice and the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as simple random sampling or stratified random sampling) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
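As a small illustration of two of the probability sampling methods named in the module, the sketch below contrasts simple random sampling with stratified random sampling; the 'site' field is a hypothetical stratum label:

```python
import random

random.seed(0)
population = [{"id": i, "site": random.choice("ABC")} for i in range(1000)]

def simple_random_sample(pop, n):
    # Every unit has the same chance of selection.
    return random.sample(pop, n)

def stratified_random_sample(pop, n_per_stratum, key="site"):
    # Split the population into strata, then sample within each stratum.
    strata = {}
    for rec in pop:
        strata.setdefault(rec[key], []).append(rec)
    return [rec for group in strata.values()
            for rec in random.sample(group, n_per_stratum)]

srs = simple_random_sample(population, 30)
sts = stratified_random_sample(population, 10)   # 10 per site, 30 total
```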
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu
2017-04-01
In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro-macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.
Ryan Clarke, P; Frey, Rebecca K; Rhyan, Jack C; McCollum, Matt P; Nol, Pauline; Aune, Keith
2014-03-01
OBJECTIVE--To determine the feasibility of qualifying individuals or groups of Yellowstone National Park bison as free from brucellosis. DESIGN--Cohort study. SAMPLE--Serum, blood, and various samples from live bison and tissues taken at necropsy from 214 bison over 7 years. PROCEDURES--Blood was collected from bison every 30 to 45 days for serologic tests and microbiological culture of blood for Brucella abortus. Seropositive bison were euthanized until all remaining bison had 2 consecutive negative test results. Half of the seronegative bison were randomly selected and euthanized, and tissues were collected for bacteriologic culture. The remaining seronegative bison were bred, and blood was tested at least twice per year. Cow-calf pairs were sampled immediately after calving and 6 months after calving for evidence of B abortus. RESULTS--Post-enrollment serial testing for B abortus antibodies revealed no bison that seroconverted after 205 days (first cohort) and 180 days (second cohort). During initial serial testing, 85% of bison seroconverted within 120 days after removal from the infected population. Brucella abortus was not cultured from any euthanized seronegative bison (0/88). After parturition, no cows or calves had a positive test result for B abortus antibodies, nor was B abortus cultured from any samples. CONCLUSIONS AND CLINICAL RELEVANCE--Results suggested it is feasible to qualify brucellosis-free bison from an infected herd following quarantine procedures as published in the USDA APHIS brucellosis eradication uniform methods and rules. Latent infection was not detected in this sample of bison when applying the USDA APHIS quarantine protocol.
Melvin, Neal R; Poda, Daniel; Sutherland, Robert J
2007-10-01
When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites that are placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
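A minimal sketch of the sampling rule described above: one random start, then sites at pre-determined, equidistant intervals (shown in 1-D; stereology applies the same rule in 2-D and 3-D):

```python
import random

def systematic_random_sites(extent, interval, seed=None):
    # Random start inside the first interval, then fixed steps thereafter.
    rng = random.Random(seed)
    start = rng.uniform(0.0, interval)
    sites, pos = [], start
    while pos < extent:
        sites.append(pos)
        pos += interval
    return sites

sites = systematic_random_sites(extent=1000.0, interval=50.0, seed=1)
```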
Distinguishability of generic quantum states
NASA Astrophysics Data System (ADS)
Puchała, Zbigniew; Pawela, Łukasz; Życzkowski, Karol
2016-06-01
Properties of random mixed states of dimension N distributed uniformly with respect to the Hilbert-Schmidt measure are investigated. We show that for large N, due to the concentration of measure, the trace distance between two random states tends to a fixed number D̃ = 1/4 + 1/π, which yields the Helstrom bound on their distinguishability. To arrive at this result, we apply free random calculus and derive the symmetrized Marchenko-Pastur distribution, which is shown to describe numerical data for the model of coupled quantum kicked tops. The asymptotic value for the root fidelity between two random states, √F = 3/4, can serve as a universal reference value for further theoretical and experimental studies. Analogous results for the quantum relative entropy and the Chernoff quantity provide other bounds on the distinguishability of both states in a multiple measurement setup due to the quantum Sanov theorem. We also study the mean entropy of coherence of random pure and mixed states and the entanglement of a generic mixed state of a bipartite system.
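The quoted asymptotics are easy to probe numerically; the sketch below draws two Hilbert-Schmidt-random states at N = 200 and compares their trace distance to 1/4 + 1/π ≈ 0.568 (one draw only, not a careful statistical test):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_hs_state(N):
    # Ginibre matrix G -> rho = G G^dagger / tr(G G^dagger) samples the
    # Hilbert-Schmidt measure on mixed states.
    G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

N = 200
rho1, rho2 = random_hs_state(N), random_hs_state(N)
D = 0.5 * np.abs(np.linalg.eigvalsh(rho1 - rho2)).sum()   # trace distance
print(D, 0.25 + 1.0 / np.pi)   # both should be close to ~0.568
```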
Single-mode SOA-based 1kHz-linewidth dual-wavelength random fiber laser.
Xu, Yanping; Zhang, Liang; Chen, Liang; Bao, Xiaoyi
2017-07-10
Narrow-linewidth multi-wavelength fiber lasers are of significant interest for fiber-optic sensors, spectroscopy, optical communications, and microwave generation. A novel narrow-linewidth dual-wavelength random fiber laser with single-mode operation, based on semiconductor optical amplifier (SOA) gain, is achieved in this work for the first time, to the best of our knowledge. A simplified theoretical model is established to characterize this kind of random fiber laser. The inhomogeneous gain in the SOA mitigates mode competition significantly and alleviates the laser instability that is frequently encountered in multi-wavelength fiber lasers with Erbium-doped fiber gain. The enhanced random distributed feedback from a 5 km non-uniform fiber provides coherent feedback, acting as a mode selection element to ensure single-mode operation with a narrow linewidth of ~1 kHz. The laser noises are also comprehensively investigated, showing that the proposed random fiber laser has suppressed intensity and frequency noises.
An invariance property of generalized Pearson random walks in bounded geometries
NASA Astrophysics Data System (ADS)
Mazzolo, Alain
2009-03-01
Invariance properties of random walks in bounded domains are a topic of growing interest since they contribute to improving our understanding of diffusion in confined geometries. Recently, for Pearson random walks with exponentially distributed straight paths, it has been shown that under isotropic uniform incidence, the average length of the trajectories through the domain is independent of the random walk characteristics and depends only on the ratio of the domain's volume to its surface. In this paper, thanks to arguments of integral geometry, we generalize this property to any isotropic bounded stochastic process and give the conditions of its validity for isotropic unbounded stochastic processes. The analytical form of the traveled distance from the boundary to the first scattering event that ensures the validity of the Cauchy formula is also derived. The generalization of the Cauchy formula is an analytical constraint that thus concerns a very wide range of stochastic processes, from the original Pearson random walk to a Rayleigh distribution of the displacements, covering many situations of physical importance.
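The invariance property is easy to check by Monte Carlo in the simplest setting: ballistic chords through a disk of radius R, where the Cauchy mean-chord formula gives πA/P = πR/2 under isotropic uniform incidence, independent of the transport details:

```python
import numpy as np

rng = np.random.default_rng(1)
R, n = 1.0, 1_000_000

# Isotropic uniform incidence corresponds to a cosine-weighted angle theta
# to the inward normal: p(theta) ~ cos(theta), sampled via arcsin.
theta = np.arcsin(rng.uniform(-1.0, 1.0, n))
chords = 2.0 * R * np.cos(theta)            # chord length through the disk
print(chords.mean(), np.pi * R / 2.0)       # both ~ 1.5708
```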
Frequency-dependent scaling from mesoscale to macroscale in viscoelastic random composites
Zhang, Jun
2016-01-01
This paper investigates the scaling from a statistical volume element (SVE; i.e. mesoscale level) to representative volume element (RVE; i.e. macroscale level) of spatially random linear viscoelastic materials, focusing on the quasi-static properties in the frequency domain. Requiring the material statistics to be spatially homogeneous and ergodic, the mesoscale bounds on the RVE response are developed from the Hill–Mandel homogenization condition adapted to viscoelastic materials. The bounds are obtained from two stochastic initial-boundary value problems set up, respectively, under uniform kinematic and traction boundary conditions. The frequency and scale dependencies of mesoscale bounds are obtained through computational mechanics for composites with planar random chessboard microstructures. In general, the frequency-dependent scaling to RVE can be described through a complex-valued scaling function, which generalizes the concept originally developed for linear elastic random composites. This scaling function is shown to apply for all different phase combinations on random chessboards and, essentially, is only a function of the microstructure and mesoscale. PMID:27274689
Rekully, Cameron M; Faulkner, Stefan T; Lachenmyer, Eric M; Cunningham, Brady R; Shaw, Timothy J; Richardson, Tammi L; Myrick, Michael L
2018-03-01
An all-pairs method is used to analyze phytoplankton fluorescence excitation spectra. An initial set of nine phytoplankton species is analyzed in pairwise fashion to select two optical filter sets, and then the two filter sets are used to explore variations among a total of 31 species in a single-cell fluorescence imaging photometer. Results are presented in terms of pair analyses; we report that 411 of the 465 possible pairings of the larger group of 31 species can be distinguished using the initial nine-species-based selection of optical filters. A bootstrap analysis based on the larger data set shows that the distribution of possible pair-separation results based on a randomly selected nine-species initial calibration set is strongly peaked in the 410-415 pair-separation range, consistent with our experimental result. Further, the result for filter selection using all 31 species is also 411 pair separations. The set of phytoplankton fluorescence excitation spectra is intuitively high in rank due to the number and variety of pigments that contribute to the spectrum. However, the results in this report are consistent with an effective rank, as determined by a variety of heuristic and statistical methods, in the range of 2-3. These results are reviewed in consideration of how consistent the filter selections are from model to model for the data presented here. We discuss the common observation that rank is generally found to be relatively low even in many seemingly complex circumstances, so that it may be productive to assume a low rank from the beginning. If a low-rank hypothesis is valid, then relatively few samples are needed to explore an experimental space. Under very restricted circumstances for uniformly distributed samples, the minimum number for an initial analysis might be as low as 8-11 random samples for 1-3 factors.
Desta, Etaferahu Alamaw; Gebrie, Mignote Hailu; Dachew, Berihun Assefa
2015-01-01
Wearing uniforms helps in the formation of professional identity in healthcare. It fosters a strong self-image and professional identity, which can lead to good confidence and better performance in nursing practice. However, most nurses in Ethiopia do not wear nursing uniforms, and the reasons remain unclear. Therefore, the aim of this research was to assess nurse uniform wearing practices among nurses, and factors associated with such practice, in hospitals in Northwest Ethiopia. A hospital-based cross-sectional study was conducted from March to April 2014 in five hospitals located in Northwest Ethiopia. A total of 459 nurses participated in the study. Data were collected using a pre-tested self-administered questionnaire. Descriptive statistics were computed in order to characterize the study population. Bivariate and multiple logistic regression models were fitted. Odds ratios with 95% confidence intervals were computed to identify factors associated with nursing uniform practice. Nurse uniform wearing was practiced by 49.2% of the sample. Around 35% of the respondents who did not wear nurse uniforms stated that there was no specific uniform for nurses recommended by hospital management. In addition, nurse uniform wearing practices were positively associated with being female [AOR = 1.58, 95% CI (1.02, 2.44)], studying nursing by choice [AOR = 3.16, 95% CI (2.03, 4.92)], and the appeal of nursing uniforms to nurses [AOR = 3.43, 95% CI (1.96, 5.98)]. Nurse uniform wearing practices were not exceptionally prevalent in Northwest Ethiopian hospitals. However, encouraging students to pursue interest-based careers and implementing a nurse uniform wearing policy may have the potential to improve such practices.
Efficient sampling of complex network with modified random walk strategies
NASA Astrophysics Data System (ADS)
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies: choosing seed node (CSN) random walk and no-retracing (NR) random walk. Unlike classical random walk sampling, the CSN and NR strategies focus on the influences of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks, respectively. The major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are then studied. Similar conclusions can be reached with all three random walk strategies. Firstly, networks with small scales and simple structures are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Thirdly, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, some notable characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
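A minimal sketch of the no-retracing (NR) rule described above: the walker never immediately steps back along the edge it just used. The toy adjacency dictionary is for illustration, not one of the paper's networks:

```python
import random

def nr_random_walk(graph, start, n_steps, seed=0):
    rng = random.Random(seed)
    walk, prev, node = [start], None, start
    for _ in range(n_steps):
        choices = [v for v in graph[node] if v != prev]
        if not choices:          # dead end: retracing is the only option
            choices = graph[node]
        prev, node = node, rng.choice(choices)
        walk.append(node)
    return walk

g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(nr_random_walk(g, 0, 10))
```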
2009-01-01
Many studies of RNA folding and catalysis have revealed conformational heterogeneity, metastable folding intermediates, and long-lived states with distinct catalytic activities. We have developed a single-molecule imaging approach for investigating the functional heterogeneity of in vitro-evolved RNA aptamers. Monitoring the association of fluorescently labeled ligands with individual RNA aptamer molecules has allowed us to record binding events over the course of multiple days, thus providing sufficient statistics to quantitatively define the kinetic properties at the single-molecule level. The ligand binding kinetics of the highly optimized RNA aptamer studied here displays a remarkable degree of uniformity and lack of memory. Such homogeneous behavior is quite different from the heterogeneity seen in previous single-molecule studies of naturally derived RNA and protein enzymes. The single-molecule methods we describe may be of use in analyzing the distribution of functional molecules in heterogeneous evolving populations or even in unselected samples of random sequences. PMID:19572753
Cramer-Rao bound analysis of wideband source localization and DOA estimation
NASA Astrophysics Data System (ADS)
Yip, Lean; Chen, Joe C.; Hudson, Ralph E.; Yao, Kung
2002-12-01
In this paper, we derive the Cramér-Rao Bound (CRB) for wideband source localization and DOA estimation. The resulting CRB formula can be decomposed into two terms: one that depends on the signal characteristic and one that depends on the array geometry. For a uniformly spaced circular array (UCA), a concise analytical form of the CRB can be given by using some algebraic approximation. We further define a DOA beamwidth based on the resulting CRB formula. The DOA beamwidth can be used to design the sampling angular spacing for the Maximum-likelihood (ML) algorithm. For a randomly distributed array, we use an elliptical model to determine the largest and smallest effective beamwidth. The effective beamwidth and the CRB analysis of source localization allow us to design an efficient algorithm for the ML estimator. Finally, our simulation results of the Approximated Maximum Likelihood (AML) algorithm are demonstrated to match well to the CRB analysis at high SNR.
Improving the Crossing-SIBTEST Statistic for Detecting Non-uniform DIF.
Chalmers, R Philip
2018-06-01
This paper demonstrates that, after applying a simple modification to Li and Stout's (Psychometrika 61(4):647-677, 1996) CSIBTEST statistic, an improved variant of the statistic could be realized. It is shown that this modified version of CSIBTEST has a more direct association with the SIBTEST statistic presented by Shealy and Stout (Psychometrika 58(2):159-194, 1993). In particular, the asymptotic sampling distributions and general interpretation of the effect size estimates are the same for SIBTEST and the new CSIBTEST. Given the more natural connection to SIBTEST, it is shown that Li and Stout's hypothesis testing approach is insufficient for CSIBTEST; thus, an improved hypothesis testing procedure is required. Based on the presented arguments, a new chi-squared-based hypothesis testing approach is proposed for the modified CSIBTEST statistic. Positive results from a modest Monte Carlo simulation study strongly suggest the original CSIBTEST procedure and randomization hypothesis testing approach should be replaced by the modified statistic and hypothesis testing method.
Laser-treated electrospun fibers loaded with nano-hydroxyapatite for bone tissue engineering.
Aragon, Javier; Navascues, Nuria; Mendoza, Gracia; Irusta, Silvia
2017-06-15
Core-shell polycaprolactone/polycaprolactone (PCL/PCL) and polycaprolactone/polyvinyl acetate (PCL/PVAc) electrospun fibers loaded with synthesized nanohydroxyapatite (HAn) were laser treated to create microporosity. The prepared materials were characterized by XRD, FTIR, TEM and SEM. Uniform and randomly oriented beadless fibrous structures were obtained in all cases. Fiber diameters were in the 150-300 nm range. Needle-like HAn nanoparticles with mean diameters of 20 nm and lengths of approximately 150 nm were mostly encased inside the fibers. Laser-treated materials present micropores with diameters in the 70-120 μm range for PCL-HAn/PCL fibers and in the 50-90 μm range for the PCL-HAn/PVAc material. Only samples containing HAn presented bioactivity after incubation for 30 days in simulated body fluid. All scaffolds presented high viability, very low mortality, and human osteoblast proliferation. Biocompatibility was increased by laser treatment due to the surface and porosity modification. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Banks, Bruce A. (Inventor)
2008-01-01
Disclosed is a method of producing cones and pillars on polymethylmethacrylate (PMMA) optical fibers for glucose monitoring. The method, in one embodiment, consists of using electron beam evaporation to deposit a non-contiguous thin film of aluminum on the distal ends of the PMMA fibers. The partial coverage of aluminum on the fibers is random, but rather uniformly distributed across the end of the optical fibers. After the aluminum deposition, the ends of the fibers are exposed to hyperthermal atomic oxygen, which oxidizes the areas that are not protected by aluminum. The resulting PMMA fibers have a greatly increased surface area, and the cones or pillars are sufficiently close together that the cellular components in blood are excluded from passing into the valleys between the cones and pillars. The optical fibers are then coated with appropriate surface chemistry so that they can optically sense the glucose level in the blood sample more effectively than conventional glucose monitoring.
Tondare, Vipin N; Villarrubia, John S; Vladár, András E
2017-10-01
Three-dimensional (3D) reconstruction of a sample surface from scanning electron microscope (SEM) images taken at two perspectives has been known for decades. Nowadays, there exist several commercially available stereophotogrammetry software packages. For testing these software packages, in this study we used Monte Carlo simulated SEM images of virtual samples. A virtual sample is a model in a computer, and its true dimensions are known exactly, which is impossible for real SEM samples due to measurement uncertainty. The simulated SEM images can be used for algorithm testing, development, and validation. We tested two stereophotogrammetry software packages and compared their reconstructed 3D models with the known geometry of the virtual samples used to create the simulated SEM images. Both packages performed relatively well with simulated SEM images of a sample with a rough surface. However, in a sample containing nearly uniform and therefore low-contrast zones, the height reconstruction error was ≈46%. The present stereophotogrammetry software packages need further improvement before they can be used reliably with SEM images with uniform zones.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-23
.... SUMMARY: This rule modifies the aflatoxin sampling and testing regulations currently prescribed under the... Administrative Committee for Pistachios (Committee). This rule streamlines the aflatoxin sampling and testing... by providing a uniform and consistent aflatoxin sampling and testing procedure for pistachios shipped...
NASA Technical Reports Server (NTRS)
Sadr, R.; Shahshahani, M.
1989-01-01
Four techniques for uniform sampling of band-pass signals are examined. The in-phase and quadrature components of the band-pass signal are computed in terms of the samples of the original band-pass signal. The relative implementation merits of these techniques are discussed with reference to the Deep Space Network (DSN).
NASA Astrophysics Data System (ADS)
Covaci, D.; Costea, C.; Dumitras, D.; Duliu, O. G.
2012-04-01
Ornamental limestone and marble samples were collected and analysed by means of Electron Paramagnetic Resonance (EPR), Scanning Electron Microscopy (SEM) and X-Ray Diffraction (XRD), in order to evidence any systematic peculiarities usable in further provenance studies, as well as to obtain more detailed information regarding the geochemistry and mineralogy of three of the most important deposits in Romania. In this respect, 20 samples of limestone (Arnota quarry, Capatani Mountains, and Mateias South quarry, Iezer Mountains) and 9 of calci-dolomitic marble (Porumbacu de Sus quarry, Fagaras Mountains) were collected over a significant sampling area. EPR spectroscopy, primarily used to assess the degree of homogeneity of the considered samples, evidenced, for both Arnota and Mateias South limestone, the presence of a typical six-line hyperfine spectrum of Mn2+ ions in calcite but no traces of ferromagnetic Fe clusters. A more careful investigation showed that, although within the same quarry there were no significant differences between EPR spectra, the resonance lines were systematically narrower in the case of the Mateias South samples, which suggests a lower content of divalent manganese ions. The Porumbacu calci-dolomitic marble presented a more intricate Mn2+ spectrum, consisting of a superposition of typical dolomitic and calcitic spectra. Again, the EPR spectra were almost identical, attesting, as in the previous cases, to a relatively uniform distribution of paramagnetic Mn2+ ions within the quarry. In the case of SEM, scattered, back-scattered and absorbed electron modes were used to visualise the mineral formations on the sample surfaces, while EDAX quantitative analysis was used to determine the content of the most abundant elements. Although, at a first inspection, both groups of limestone looked almost similar, displaying a great variety of randomly orientated micro-crystalline agglomerations, only in the case of the Arnota samples did we notice the presence of some micron-size graphite inclusions, potential proxies for further provenance studies. The Porumbacu South marble showed a different pattern, characterized by a more uniform crystallite distribution, all crystallites presenting almost perfect cleaving surfaces. EDAX results evidenced, besides the dominant Ca and Mg (the latter in the case of the Porumbacu de Sus marble), the presence in small quantities of some other elements such as Fe, Ni, Cu and Zn, whose content also represents a good provenance proxy. XRD investigation evidenced not only the dominant calcite and dolomite mineral phases, but also other minor mineral fractions, whose presence could be well related to the content of the mentioned trace elements. Principal Component and Cluster Analysis, finally used to classify all investigated samples, allowed us to group them into three clusters in accordance with their provenance.
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
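A minimal sketch of the "random sampling + ratio reweighting" idea above: a simple random sample is reweighted by an auxiliary variable x (here a stand-in for the gene-flow model output) whose population mean is known. All numbers below are synthetic, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

def ratio_estimate(y_sample, x_sample, x_bar_pop):
    # Classical ratio estimator: (mean(y)/mean(x)) * population mean of x.
    return y_sample.mean() / x_sample.mean() * x_bar_pop

x_pop = rng.gamma(2.0, 0.002, 10_000)                  # model-predicted rates
y_pop = np.clip(x_pop * rng.normal(1.0, 0.3, x_pop.size), 0.0, 1.0)
idx = rng.choice(x_pop.size, size=100, replace=False)  # simple random sample
print(ratio_estimate(y_pop[idx], x_pop[idx], x_pop.mean()), y_pop.mean())
```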
Design Techniques for Uniform-DFT, Linear Phase Filter Banks
NASA Technical Reports Server (NTRS)
Sun, Honglin; DeLeon, Phillip
1999-01-01
Uniform-DFT filter banks are an important class of filter banks and their theory is well known. One notable characteristic is their very efficient implementation when using polyphase filters and the FFT. Separately, linear phase filter banks, i.e., filter banks in which the analysis filters have linear phase, are also an important class of filter banks and are desired in many applications. Unfortunately, it has been proved that one cannot design critically-sampled, uniform-DFT, linear phase filter banks and achieve perfect reconstruction. In this paper, we present a least-squares solution to this problem and in addition prove that oversampled, uniform-DFT, linear phase filter banks (which are also useful in many applications) can be constructed for perfect reconstruction. Design examples are included to illustrate the methods.
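A minimal sketch of the defining uniform-DFT property may help: every analysis filter is the prototype h modulated by a DFT twiddle, h_k[n] = h[n]·exp(j·2·pi·k·n/M), which is what enables the efficient polyphase-plus-FFT realization mentioned above. The prototype below is a toy lowpass, not a design from the paper:

```python
import numpy as np

def uniform_dft_bank(h, M):
    # Modulate one prototype filter to M uniformly spaced subbands.
    n = np.arange(len(h))
    return [h * np.exp(2j * np.pi * k * n / M) for k in range(M)]

h = np.sinc(np.arange(-32, 32) / 8.0) / 8.0   # toy lowpass prototype
filters = uniform_dft_bank(h, M=8)
# Subband k (before decimation): np.convolve(x, filters[k]) for input x.
```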
New non-linear photovoltaic effect in uniform bipolar semiconductor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volovichev, I.
2014-11-21
A linear theory of the new non-linear photovoltaic effect in a closed circuit consisting of a non-uniformly illuminated uniform bipolar semiconductor with neutral impurities is developed. The non-uniform photo-excitation of impurities results in a position-dependent current carrier mobility that breaks the semiconductor homogeneity and induces a photo-electromotive force (emf). As both the electron (or hole) mobility gradient and the current carrier generation rate depend on the light intensity, the photo-emf and the short-circuit current prove to be non-linear functions of the incident light intensity at an arbitrarily low illumination. The influence of the sample size on the magnitude of the photovoltaic effect is studied. Physical relations and distinctions between the considered effect and the Dember and bulk photovoltaic effects are also discussed.
The purpose of this SOP is to provide a uniform procedure for the financial reimbursement of primary respondents for the collection of diet samples. Respondents were reimbursed for replicate food and beverage samples by type and amount collected over a 24-hour sampling period. ...
Sampling Large Graphs for Anticipatory Analytics
2015-05-15
Edwards, Lauren; Johnson, Luke; Milosavljevic, Maja; Gadepally, Vijay; Miller, Benjamin A.
[Fragment] Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas… We are investigating the use of sampling to mitigate these challenges.
Qin, Li-Xuan; Levine, Douglas A
2016-06-10
Accurate discovery of molecular biomarkers that are prognostic of a clinical outcome is an important yet challenging task, partly due to the combination of the typically weak genomic signal for a clinical outcome and the frequently strong noise due to microarray handling effects. Effective strategies to resolve this challenge are urgently needed. We set out to assess the use of careful study design and data normalization for the discovery of prognostic molecular biomarkers. Taking progression-free survival in advanced serous ovarian cancer as an example, we conducted empirical analysis on two sets of microRNA arrays for the same set of tumor samples: arrays in one set were collected using careful study design (that is, uniform handling and randomized array-to-sample assignment) and arrays in the other set were not. We found that (1) handling effects can confound the clinical outcome under study as a result of chance even with randomization, (2) the level of confounding handling effects can be reduced by data normalization, and (3) good study design cannot be replaced by post-hoc normalization. In addition, we provided a practical approach to define positive and negative control markers for detecting handling effects and assessing the performance of a normalization method. Our work showcases the difficulty of finding prognostic biomarkers for a clinical outcome with weak genomic signals, illustrates the benefits of careful study design and data normalization, and provides a practical approach to identify handling effects and select a beneficial normalization method.
Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.
2016-01-01
In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.
The one-dimensional asymmetric persistent random walk
NASA Astrophysics Data System (ADS)
Rossetto, Vincent
2018-04-01
Persistent random walks are transport processes intermediate between uniform rectilinear motion and Brownian motion. They are formed by successive steps of random finite lengths and directions travelled at a fixed speed. The isotropic and symmetric 1D persistent random walk is governed by the telegrapher's equation, also called the hyperbolic heat conduction equation. These equations were designed to resolve the paradox of infinite propagation speed in the heat and diffusion equations. The finiteness of both the speed and the correlation length leads to several classes of random walks: the persistent random walk in one dimension can display anomalies that cannot arise for Brownian motion, such as anisotropy and asymmetries. In this work we focus on the case where the mean free path is anisotropic, the only anomaly leading to a physics that is different from the telegrapher's case. We derive exact expressions for its Green's function, its scattering statistics and its distribution of first-passage times at the origin. The phenomenology of the latter shows a transition for quantities like the escape probability and the residence time.
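A minimal sketch of a 1-D persistent random walk with an anisotropic mean free path, matching the definition above (finite exponential flight lengths at fixed unit speed, with the mean flight length depending on the current direction); parameter values are illustrative only:

```python
import numpy as np

def persistent_walk(n_flights, l_plus=1.0, l_minus=0.5, seed=4):
    rng = np.random.default_rng(seed)
    x, xs = 0.0, [0.0]
    for _ in range(n_flights):
        d = rng.choice([1, -1])                   # isotropic: redraw direction
        mean_path = l_plus if d > 0 else l_minus  # anisotropic mean free path
        x += d * rng.exponential(mean_path)
        xs.append(x)
    return np.array(xs)

traj = persistent_walk(10_000)   # drifts because l_plus != l_minus
```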
NASA Astrophysics Data System (ADS)
Hassan, A. M.; Martys, N. S.; Garboczi, E. J.; McMichael, R. D.; Stiles, M. D.; Plusquellic, D. F.; Stutzman, P. E.; Wang, S.; Provenzano, V.; Surek, J. T.; Novotny, D. R.; Coder, J. B.; Janezic, M. D.; Kim, S.
2014-02-01
Some iron oxide corrosion products exhibit antiferromagnetic magnetic resonances (AFMR) at frequencies on the order of 100 GHz at ambient temperatures. AFMR can be detected in laboratory conditions, which serves as the basis for a new non-destructive spectroscopic method for detecting early corrosion. When attempting to measure the steel corrosion in reinforced concrete in the field, rebar geometry must be taken into account. Experiments and numerical simulations have been developed at frequencies near 100 GHz to sort out these effects. The experimental setup involves a vector network analyzer with converter heads to up-convert the output frequency, which is then connected to a horn antenna followed by a 7.5 cm diameter polymer lens to focus the waves on the sample. Two sets of samples were studied: uniform cylindrical rods and rebar corrosion samples broken out of concrete with different kinds of coatings. Electromagnetic scattering from uniform rods were calculated numerically using classical modal expansion. A finite-element electromagnetic solver was used to model more complex rebar geometry and non-uniform corrosion layers. Experimental and numerical data were compared to help quantify and understand the anticipated effect of local geometrical features on AFMR measurements.
John C. Weber; Frank C. Sorensen
1990-01-01
Effects of stratification period and incubation temperature on seed germination speed and uniformity were investigated in a bulked seed lot of 200 ponderosa pine trees (Pinus ponderosa Dougl. ex Laws.) sampled from 149 locations in central Oregon. Mean rate of embryo development towards germination (l/days to 50 percent germination) and standard...
On the Heat Transfer through a Solid Slab Heated Uniformly and Continuously on One of Its Surfaces
ERIC Educational Resources Information Center
Marin, E.; Lara-Bernal, A.; Calderon, A.; Delgado-Vasallo, O.
2011-01-01
Some peculiarities of the heat transfer through a sample that is heated by the superficial absorption of light energy under continuous uniform illumination are discussed. We explain, using a different approach to that presented in a recent article published in this journal (Salazar "et al" 2010 "Eur. J. Phys." 31 1053-9), that the front surface of…
NASA Astrophysics Data System (ADS)
Congreve, Jasmin V. J.; Shi, Yunhua; Dennis, Anthony R.; Durrell, John H.; Cardwell, David A.
2017-01-01
A major limitation to the widespread application of Y-Ba-Cu-O (YBCO) bulk superconductors is the relative complexity and low yield of the top seeded melt growth (TSMG) process, by which these materials are commonly fabricated. It has been demonstrated in previous work on the recycling of samples in which the primary growth had failed, that the provision of an additional liquid-rich phase to replenish liquid lost during the failed growth process leads to the reliable growth of relatively high quality recycled samples. In this paper we describe the adaptation of the liquid phase enrichment technique to the primary TSMG fabrication process. We further describe the observed differences between the microstructure and superconducting properties of samples grown with additional liquid-rich phase and control samples grown using a conventional TSMG process. We observe that the introduction of the additional liquid-rich phase leads to the formation of a higher concentration of Y species at the growth front, which leads, in turn, to a more uniform composition at the growth front. Importantly, the increased uniformity at the growth front leads directly to an increased homogeneity in the distribution of the Y-211 inclusions in the superconducting Y-123 phase matrix and to a more uniform Y-123 phase itself. Overall, the provision of an additional liquid-rich phase improves significantly both the reliability of grain growth through the sample thickness and the magnitude and homogeneity of the superconducting properties of these samples compared to those fabricated by a conventional TSMG process.
NASA Technical Reports Server (NTRS)
Fabiniak, R. C.; Fabiniak, T. J.
1971-01-01
The results of experiments 1, 2, and 10 of the Apollo 14 composite casting demonstration are discussed. The purpose of the demonstration, with regard to samples 1 and 2, was to obtain preliminary data on the liquid phase sintering process in a weightless environment. With regard to sample 10, the purpose was to obtain preliminary information on how to achieve uniform dispersion of dense particles in a metal matrix by employing shaking modes or forces in the system while the metal matrix is molten. Results of the demonstrations were interpreted in a quantitative and qualitative manner. For experiment 1 it was found that the tungsten particles were redistributed more uniformly in the flight sample than in the control sample. Experiment 2 results indicate that complete melting may not have occurred, and thus a high degree of significance cannot be associated with the qualitative results relating to particle redistribution data. The particle-matrix system of experiment 10 was found to be nonwetting.
Katz, Brian G.; Krulikas, Richard K.
1979-01-01
Water samples from wells in Nassau and Suffolk Counties were analyzed for chloride and nitrate. Two samples were collected at each well; one was analyzed by the U.S. Geological Survey, the other by a laboratory in the county from which the sample was taken. Results were compared statistically by paired-sample t-test to indicate the degree of uniformity among laboratory results. Chloride analyses from one of the three county laboratories differed significantly (0.95 confidence level) from that of a Geological Survey laboratory. For nitrate analyses, a significant difference (0.95 confidence level) was noted between results from two of the three county laboratories and the Geological Survey laboratory. The lack of uniformity among results reported by the participating laboratories indicates a need for continuing participation in a quality-assurance program and exercise of strong quality control from time of sample collection through analysis so that differences can be evaluated. (Kosco-USGS)
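A minimal sketch of the paired-sample t-test used above: the same wells are analyzed by two laboratories, so results are compared pairwise. The chloride values (mg/L) below are made up for illustration:

```python
from scipy import stats

survey_lab = [21.0, 35.5, 18.2, 40.1, 27.3, 33.8, 22.9, 30.4]
county_lab = [20.5, 36.2, 19.0, 41.5, 27.0, 35.1, 23.8, 31.2]
t_stat, p_value = stats.ttest_rel(survey_lab, county_lab)
# At the 0.95 confidence level, p_value < 0.05 indicates a significant
# difference between the laboratories for these paired samples.
print(t_stat, p_value)
```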
NASA Astrophysics Data System (ADS)
Keiser, Dennis D.; Jue, Jan-Fong; Woolstenhulme, Nicolas E.; Ewh, Ashley
2011-12-01
Low-enriched uranium-molybdenum (U-Mo) alloy particles dispersed in aluminum alloy (e.g., dispersion fuels) are being developed for application in research and test reactors. To achieve the best performance of these fuels during irradiation, optimization of the starting microstructure may be required by utilizing a heat treatment that results in the formation of uniform, Si-rich interaction layers between the U-Mo particles and Al-Si matrix. These layers behave in a stable manner under certain irradiation conditions. To identify the optimum heat treatment for producing these kinds of layers in a dispersion fuel plate, a systematic annealing study has been performed using actual dispersion fuel samples, which were fabricated at relatively low temperatures to limit the growth of any interaction layers in the samples prior to controlled heat treatment. These samples had different Al matrices with varying Si contents and were annealed between 450 and 525 °C for up to 4 h. The samples were then characterized using scanning electron microscopy (SEM) to examine the thickness, composition, and uniformity of the interaction layers. Image analysis was performed to quantify various attributes of the dispersion fuel microstructures that related to the development of the interaction layers. The most uniform layers were observed to form in fuel samples that had an Al matrix with at least 4 wt.% Si and a heat treatment temperature of at least 475 °C.
Output Beam Polarisation of X-ray Lasers with Transient Inversion
NASA Astrophysics Data System (ADS)
Janulewicz, K. A.; Kim, C. M.; Matouš, B.; Stiel, H.; Nishikino, M.; Hasegawa, N.; Kawachi, T.
It is commonly accepted that X-ray lasers, as the devices based on amplified spontaneous emission (ASE), did not show any specific polarization in the output beam. The theoretical analysis within the uniform (single-mode) approximation suggested that the output radiation should show some defined polarization feature, but randomly changing from shot-to-shot. This hypothesis has been verified by experiment using traditional double-pulse scheme of transient inversion. Membrane beam-splitter was used as a polarization selector. It was found that the output radiation has a significant component of p-polarisation in each shot. To explain the effect and place it in the line with available, but scarce data, propagation and kinetic effects in the non-uniform plasma have been analysed.
Miller, Michael A; Colby, Alison C C; Kanehl, Paul D; Blocksom, Karen
2009-03-01
The Wisconsin Department of Natural Resources (WDNR), with support from the U.S. EPA, conducted an assessment of wadeable streams in the Driftless Area ecoregion in western Wisconsin using a probabilistic sampling design. This ecoregion encompasses 20% of Wisconsin's land area and contains 8,800 miles of perennial streams. Randomly-selected stream sites (n = 60) equally distributed among stream orders 1-4 were sampled. Watershed land use, riparian and in-stream habitat, water chemistry, macroinvertebrate, and fish assemblage data were collected at each true random site and an associated "modified-random" site on each stream that was accessed via a road crossing nearest to the true random site. Targeted least-disturbed reference sites (n = 22) were also sampled to develop reference conditions for various physical, chemical, and biological measures. Cumulative distribution function plots of various measures collected at the true random sites evaluated with reference condition thresholds, indicate that high proportions of the random sites (and by inference the entire Driftless Area wadeable stream population) show some level of degradation. Study results show no statistically significant differences between the true random and modified-random sample sites for any of the nine physical habitat, 11 water chemistry, seven macroinvertebrate, or eight fish metrics analyzed. In Wisconsin's Driftless Area, 79% of wadeable stream lengths were accessible via road crossings. While further evaluation of the statistical rigor of using a modified-random sampling design is warranted, sampling randomly-selected stream sites accessed via the nearest road crossing may provide a more economical way to apply probabilistic sampling in stream monitoring programs.
Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs
NASA Astrophysics Data System (ADS)
Salimi, S.; Jafarizadeh, M. A.
2009-06-01
In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing a particle on the vertices in continuous-time classical and quantum random walks. In this recipe, the probability of observing the particle on the direct product graph is obtained as the product of the probabilities on the corresponding sub-graphs, which makes the method useful for determining the probability of a walk on complicated graphs. Using this method, we calculate the probabilities of continuous-time classical and quantum random walks on several finite direct products of Cayley graphs (complete cycle, complete Kn, charter and n-cube). We also show that for the classical walk the stationary uniform distribution is reached as t → ∞, whereas for the quantum walk this is not always the case.
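To make the product recipe concrete for the classical walk, here is a minimal numerical sketch (an illustration under my own assumptions, not the paper's calculation): for a Cartesian-type product, where the sub-graph Laplacians add as a Kronecker sum, the continuous-time classical heat kernel e^{-tL} factorizes into a Kronecker product of the sub-graph kernels. The graph sizes and time below are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

def cycle_laplacian(n):
    """Graph Laplacian L = D - A of the n-cycle."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

L1, L2 = cycle_laplacian(4), cycle_laplacian(5)
# Product graph with Kronecker-sum Laplacian: the sub-graph Laplacians add.
L = np.kron(L1, np.eye(5)) + np.kron(np.eye(4), L2)

t = 1.3
P = expm(-t * L)  # classical CTRW transition-probability matrix at time t
# Probability on the product graph = product of sub-graph probabilities:
assert np.allclose(P, np.kron(expm(-t * L1), expm(-t * L2)))
# Long-time limit: the classical walk approaches the uniform distribution.
print(expm(-1e3 * L)[0].round(6))  # ~ 1/20 on each of the 20 vertices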
Transport properties of random media: A new effective medium theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busch, K.; Soukoulis, C.M.
We present a new method for efficient, accurate calculations of transport properties of random media. It is based on the principle that the wave energy density should be uniform when averaged over length scales larger than the size of the scatterers. This scheme captures the effects of resonant scattering of the individual scatterer exactly, as well as the multiple scattering in a mean-field sense. It has been successfully applied to both "scalar" and "vector" classical wave calculations. Results for the energy transport velocity are in agreement with experiment. This approach is of general use and can be easily extended to treat different types of wave propagation in random media.
Principle of Parsimony, Fake Science, and Scales
NASA Astrophysics Data System (ADS)
Yeh, T. C. J.; Wan, L.; Wang, X. S.
2017-12-01
Considering the difficulty of predicting the exact motions of water molecules, and the scale of our interests (the bulk behavior of many molecules), Fick's law (the diffusion concept) was created to predict solute diffusion in space and time. G.I. Taylor (1921) demonstrated that the random motion of molecules reaches the Fickian regime in less than a second if the sampling scale is large enough to reach the ergodic condition. Fick's law is widely accepted for describing molecular diffusion as such. This fits the definition of the parsimony principle at the scale of our concern. Similarly, the advection-dispersion or convection-dispersion equation (ADE or CDE) has been found quite satisfactory for analyzing concentration breakthroughs of solute transport in uniformly packed soil columns. This is because the solute is often released over the entire cross-section of the column, which samples many pore-scale heterogeneities and meets the ergodicity assumption. Further, the uniformly packed column contains a large number of stationary pore-size heterogeneities, so the solute reaches the Fickian regime after traveling a short distance along the column. Moreover, breakthrough curves are concentrations integrated over the column cross-section (the scale of our interest), and they meet the ergodicity assumption embedded in the ADE and CDE. On the contrary, the scales of heterogeneity in most groundwater pollution problems evolve as contaminants travel; they are much larger than the scales of our observations and our interests, so the ergodic and Fickian conditions are difficult to satisfy. Upscaling Fick's law for solute dispersion, and deriving universal rules of dispersion for field- or basin-scale pollution migration, is merely a misuse of the parsimony principle and leads to fake science (i.e., the development of theories for predicting processes that cannot be observed). The appropriate principle of parsimony for these situations dictates mapping large-scale heterogeneities in as much detail as possible and adapting Fick's law for the effects of the small-scale heterogeneities that we are unable to characterize in detail.
Uniform Corrosion and General Dissolution of Aluminum Alloys 2024-T3, 6061-T6, and 7075-T6
NASA Astrophysics Data System (ADS)
Huang, I.-Wen
Uniform corrosion and general dissolution of aluminum alloys have not been as well studied in the past, although they are known to cause a significant amount of weight loss. This work comprises four chapters aimed at understanding the uniform corrosion of aluminum alloys 2024-T3, 6061-T6, and 7075-T6. A preliminary weight-loss experiment was performed to distinguish corrosion-induced weight loss attributable to uniform corrosion from that attributable to pitting corrosion; the result suggested that uniform corrosion generated a greater mass loss than pitting corrosion. First, to understand the mechanism and kinetics of uniform corrosion in different environments, a series of static immersion tests in NaCl solutions was performed to provide quantitative measurements of uniform corrosion. Uniform corrosion development as a function of temperature, pH, Cl-, and time was then investigated to understand the influence of environmental factors. A faster uniform corrosion rate was found at lower temperatures (20 and 40°C) than at higher temperatures (60 and 80°C), because accelerated corrosion product formation at high temperatures inhibits the corrosion reactions. Electrochemical tests, along with scanning electron microscopy (SEM), were utilized to study the temperature effect. Second, to further understand the influence of uniform corrosion on pit growth kinetics, long-term exposures of 180 days in both immersion and ASTM B117 tests were performed. Uniform-corrosion-induced surface recession was found to have limited impact on pit geometry regardless of exposure method. It was also found that competition for the limited cathodic current from uniform corrosion was the primary rate-limiting factor for pit growth. Very large pits were found after uniform corrosion growth reached a plateau due to corrosion product coverage. Optical microscopy and focused ion beam (FIB) imaging provided further insight into the distinctive pit geometries and subsurface damage found in the immersion and B117 samples. Although uniform corrosion was studied in various electrolytes, the pH impact was difficult to discern because ongoing cathodic reactions changed the electrolyte pH with time. Therefore, buffered electrolytes with pH values of 3, 5, 8, and 10 were prepared for static immersion tests, and electrochemical experiments were performed in each buffered condition to understand the corrosion mechanisms. Uniform corrosion was found to exhibit higher corrosion rates in buffered acidic and alkaline electrolytes due to pH- and temperature-dependent corrosion product precipitation; these observations were supported by electrochemical, SEM, and EDS measurements. Because of the complexity of the corrosion data, reliable corrosion prediction based on empirical observations alone is challenging. Artificial neural network (ANN) modeling, which mimics human neural network systems, was therefore used for pattern recognition in the corrosion data. Predictive models were developed from the corrosion data acquired in this study; the model adapted by iteratively updating its predictions through error minimization during the training phase, and the trained ANN model predicted uniform corrosion successfully. In addition to the ANN, fuzzy curve analysis was utilized to rank the influence of each input (temperature, pH, Cl-, and time); temperature and pH were found to be the most influential parameters for uniform corrosion. This information can provide feedback for ANN improvement, also known as "data pruning".
Dynamical properties of the S =1/2 random Heisenberg chain
NASA Astrophysics Data System (ADS)
Shu, Yu-Rong; Dupont, Maxime; Yao, Dao-Xin; Capponi, Sylvain; Sandvik, Anders W.
2018-03-01
We study dynamical properties at finite temperature (T) of Heisenberg spin chains with random antiferromagnetic exchange couplings, which realize the random-singlet phase in the low-energy limit, using three complementary numerical methods: exact diagonalization, matrix-product-state algorithms, and stochastic analytic continuation of quantum Monte Carlo results in imaginary time. Specifically, we investigate the dynamic spin structure factor S(q,ω) and its ω → 0 limit, which are closely related to inelastic neutron scattering and nuclear magnetic resonance (NMR) experiments (through the spin-lattice relaxation rate 1/T1). Our study reveals a continuous narrow band of low-energy excitations in S(q,ω), extending throughout q space, instead of being restricted to q ≈ 0 and q ≈ π as found in the uniform system. Close to q = π, the scaling properties of these excitations are well captured by the random-singlet theory, but disagreements also exist with some aspects of the predicted q dependence further away from q = π. Furthermore, we find spin diffusion effects close to q = 0 that are not contained within the random-singlet theory but give non-negligible contributions to the mean 1/T1. To compare with NMR experiments, we consider the distribution of the local relaxation rates 1/T1. We show that the local 1/T1 values are broadly distributed, approximately according to a stretched exponential. The mean 1/T1 first decreases with T, but below a crossover temperature it starts to increase and likely diverges in the limit of a small nuclear resonance frequency ω0. Although a similar divergent behavior has been predicted and experimentally observed for the static uniform susceptibility, this divergent behavior of the mean 1/T1 has never been observed experimentally. Indeed, we show that the divergence of the mean 1/T1 is due to rare events in the disordered chains and is concealed in experiments, where the typical 1/T1 value is accessed.
A random spatial sampling method in a rural developing nation
Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas
2014-01-01
Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...
Qin, Fei; Meng, Zi-Ming; Zhong, Xiao-Lan; Liu, Ye; Li, Zhi-Yuan
2012-06-04
We present a versatile technique based on nano-imprint lithography to fabricate high-quality semiconductor-polymer compound nonlinear photonic crystal (NPC) slabs. The approach allows one to uniformly infiltrate polystyrene, which possesses a large Kerr nonlinearity and an ultrafast nonlinear response, into cylindrical air holes about a hundred nanometers in diameter perforated in silicon membranes. Both structural characterization via cross-sectional scanning electron microscopy images and optical characterization via transmission spectrum measurements show unambiguously that the fabricated compound NPC samples have uniform and dense polymer infiltration and are of high optical quality. The compound NPC samples exhibit sharp transmission band edges and non-degraded, high quality factors of microcavities compared with those of the bare silicon PC. The versatile method can be extended to general semiconductor-polymer hybrid optical nanostructures, and thus may pave the way for reliable and efficient fabrication of ultrafast, ultralow-power all-optical tunable integrated photonic devices and circuits.
USDA-ARS?s Scientific Manuscript database
The Hard Red Spring Wheat Uniform Regional Nursery (HRSWURN) was planted for the 86th year in 2016. The nursery contained 26 entries submitted by 8 different scientific or industry breeding programs, and 5 checks (Table 1). Trials were conducted as randomized complete blocks with three replicates ...
Electromagnetic properties of material coated surfaces
NASA Technical Reports Server (NTRS)
Beard, L.; Berrie, J.; Burkholder, R.; Dominek, A.; Walton, E.; Wang, N.
1989-01-01
The electromagnetic properties of material coated conducting surfaces were investigated. The coating geometries consist of uniform layers over a planar surface, irregularly shaped formations near edges and randomly positioned, electrically small, irregularly shaped formations over a surface. Techniques to measure the scattered field and constitutive parameters from these geometries were studied. The significance of the scattered field from these geometries warrants further study.
Effects of Model Characteristics on Observational Learning of Inmates in a Pre-Release Center.
ERIC Educational Resources Information Center
Fliegel, Alan B.
Subjects were 138 inmates from the pre-release unit of a Southwestern prison system, randomly divided into three groups of 46 each. Each group viewed a video-taped model delivering a speech. The independent variable had three levels: (1) lecturer attired in a shirt and tie; (2) lecturer attired in a correctional officer's uniform; and (3) model…
USDA-ARS?s Scientific Manuscript database
The Hard Red Spring Wheat Uniform Regional Nursery (HRSWURN) was planted for the 84th year in 2014. The nursery contained 26 entries submitted by 6 different scientific or industry breeding programs, and 5 checks (Table 1). Trials were conducted as randomized complete blocks with three replicates ex...
NASA Astrophysics Data System (ADS)
Istiqomah, L.; Sakti, A. A.; Suryani, A. E.; Karimy, M. F.; Anggraeni, A. S.; Herdian, H.
2017-12-01
The objective of this study was to evaluate the effect of a feed supplement (FS) containing earthworm meal (EWM) on the production performance of laying quails. Three hundred and sixty 20-week-old Coturnix coturnix japonica quails were used in a completely randomized design (CRD) with three dietary treatments: A = CD (control without FS), B = CD + 0.250% FS, and C = CD + 0.375% FS, over a 6-week experimental period. Each treatment comprised four equal replicates of 30 quails, randomly allocated into 12 cage units in total. The variables measured were feed intake, feed conversion ratio, feed efficiency, mortality rate, hen-day production, egg weight, and egg uniformity. Data were statistically analyzed by one-way ANOVA, and differences among treatment means were analyzed using Duncan's Multiple Range Test (DMRT). The results showed that administration of 0.375% FS based on earthworm meal, fermented rice bran, and skim milk improved the feed conversion ratio and increased feed efficiency. The experimental treatments had no effect on feed intake, mortality, hen-day production, egg weight, or egg uniformity. It is concluded that administration of the feed supplement improved the growth performance of quail.
NASA Astrophysics Data System (ADS)
Munjal, Sandeep; Khare, Neeraj
2018-02-01
Controlled bipolar resistive switching (BRS) has been observed in nanostructured CoFe2O4 (CFO) films using an Al (aluminum)/CoFe2O4/FTO (fluorine-doped tin oxide) device. The fabricated device shows electroforming-free uniform BRS with two clearly distinguished and stable resistance states without any application of compliance current, with a resistance ratio of the high resistance state (HRS) and the low resistance state (LRS) of >10^2. A small switching voltage (<1 V) and low current in both resistance states confirm the fabrication of a low-power-consumption device. In the LRS, the conduction mechanism was found to be Ohmic in nature, while the high-resistance state (HRS/OFF state) was governed by the space-charge-limited conduction mechanism, which indicates the presence of an interfacial layer with an imperfect microstructure near the top Al/CFO interface. The device shows nonvolatile behavior with good endurance properties, an acceptable resistance ratio, uniform resistive switching due to stable, less random filament formation/rupture, and control over the resistive switching properties through the choice of stop voltage, which makes the device suitable for application in future nonvolatile resistive random access memory.
Esposito-Smythers, Christianne; Hadley, Wendy; Curby, Timothy W; Brown, Larry K
2017-02-01
Adolescents with mental health conditions represent a high-risk group for substance use, deliberate self-harm (DSH), and risky sexual behavior. Mental health treatment does not uniformly decrease these risks. Effective prevention efforts are needed to offset the developmental trajectory from mental health problems to these behaviors. This study tested an adjunctive cognitive-behavioral family-based alcohol, DSH, and HIV prevention program (ASH-P) for adolescents in mental healthcare. A two group randomized design was used to compare ASH-P to an assessment only control (AO-C). Participants included 81 adolescents and a parent. Assessments were completed at pre-intervention as well as 1, 6, and 12-months post-enrollment, and included measures of family-based mechanisms and high-risk behaviors. ASH-P relative to AO-C was associated with greater improvements in most family process variables (perceptions of communication and parental disapproval of alcohol use and sexual behavior) as well as less DSH and greater refusal of sex to avoid a sexually transmitted infection. It also had a moderate (but non-significant) effect on odds of binge drinking. No differences were found in suicidal ideation, alcohol use, or sexual intercourse. ASH-P showed initial promise in preventing multiple high-risk behaviors. Further testing of prevention protocols that target multiple high-risk behaviors in clinical samples is warranted.
Cho, Hyunsan; Hallfors, Denise D; Mbai, Isabella I; Itindi, Janet; Milimo, Benson W; Halpern, Carolyn T; Iritani, Bonita J
2011-05-01
We report the findings from a pilot study in western Kenya, using an experimental design to test whether comprehensive support used to keep adolescent orphans in school can reduce risk factors associated with infection with human immunodeficiency virus. Adolescent orphans aged 12-14 years (N = 105) in Nyanza Province were randomized to condition, after stratifying by household, gender, and baseline survey report of sexual behavior. The intervention comprised school fees, uniforms, and a "community visitor" who monitored school attendance and helped to resolve problems that would lead to absence or dropout. Data were analyzed using generalized estimating equations over two time points, controlling for gender and age. Compared with the control group, intervention students were less likely to drop out of school, commence sexual intercourse, or report attitudes supporting early sex. School support also increased prosocial bonding and gender equity attitudes. After 1 year of exposure to the intervention, we found evidence suggesting that comprehensive school support can prevent school dropout, delay sexual debut, and reduce risk factors associated with infection with human immunodeficiency virus. Further research, with much larger samples, is needed to better understand factors that mediate the association between educational support and delayed sexual debut, and how gender might moderate these relationships.
Andersen, Peter A; Buller, David B; Walkosz, Barbara J; Scott, Michael D; Beck, Larry; Liu, Xia; Abbott, Allison; Eye, Rachel; Cutter, Gary
2017-12-01
Taking vacations in sunny locations is associated with the development of skin cancer. This study tested a multi-component sun protection intervention based on diffusion of innovations theory and transportation theory designed to increase vacationers' comprehensive sun protection, i.e., use of clothing, hats, and shade, and use, pre-application, and reapplication of sunscreen. The trial enrolled 41 warm weather resorts in North America in a pair-matched group randomized pretest-posttest design and assessed samples of adult vacationers at resort outdoor recreation venues regarding sun protection at pretest (n = 3,531) and posttest (n = 3,226). While results showed no overall effect of the intervention on comprehensive sun protection across venues, the intervention produced statistically significant improvements in sun protection at waterside venues (pools and beaches). The intervention's overall effects may have been impeded by a lack of uniformly robust implementation, low interest in skin cancer prevention by guests, or shortcomings of the theories used to create prevention messages. The intervention may have worked best with guests in the highest-risk recreation venue, i.e., waterside recreation where they exposed the most skin. Alternative approaches that alter resort organizations, such as through changes in policy, environmental features, or occupational efforts might be more effective than targeting vacationers with behavior-change messages.
High density, uniformly distributed W/UO2 for use in Nuclear Thermal Propulsion
NASA Astrophysics Data System (ADS)
Tucker, Dennis S.; Barnes, Marvin W.; Hone, Lance; Cook, Steven
2017-04-01
An inexpensive, quick method has been developed to obtain uniform distributions of UO2 particles in a tungsten matrix utilizing 0.5 wt percent low density polyethylene. Powders were sintered in a Spark Plasma Sintering (SPS) furnace at 1600 °C, 1700 °C, 1750 °C, 1800 °C and 1850 °C using a modified sintering profile. This resulted in a uniform distribution of UO2 particles in a tungsten matrix with high densities, reaching 99.46% of theoretical for the sample sintered at 1850 °C. The powder process is described and the results of this study are given below.
The prior statistics of object colors.
Koenderink, Jan J
2010-02-01
The prior statistics of object colors is of much interest because extensive statistical investigations of reflectance spectra reveal highly non-uniform structure in color space common to several very different databases. This common structure is due to the visual system rather than to the statistics of environmental structure. Analysis involves an investigation of the proper sample space of spectral reflectance factors and of the statistical consequences of the projection of spectral reflectances on the color solid. Even in the case of reflectance statistics that are translationally invariant with respect to the wavelength dimension, the statistics of object colors is highly non-uniform. The qualitative nature of this non-uniformity is due to trichromacy.
The purpose of this SOP is to establish a uniform procedure for the collection of indoor floor dust samples in the field. This procedure was followed to ensure consistent data retrieval of dust samples during the Arizona NHEXAS project and the "Border" study. Keywords: field; va...
Reduction of display artifacts by random sampling
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.
1983-01-01
The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.
Templated assembly of BiFeO3 nanocrystals into 3D mesoporous networks for catalytic applications
NASA Astrophysics Data System (ADS)
Papadas, I. T.; Subrahmanyam, K. S.; Kanatzidis, M. G.; Armatas, G. S.
2015-03-01
The self-assembly of uniform nanocrystals into large porous architectures is currently of immense interest for nanochemistry and nanotechnology. These materials combine the respective advantages of discrete nanoparticles and mesoporous structures. In this article, we demonstrate a facile nanoparticle templating process to synthesize a three-dimensional mesoporous BiFeO3 material. This approach involves the polymer-assisted aggregating assembly of 3-aminopropanoic acid-stabilized bismuth ferrite (BiFeO3) nanocrystals followed by thermal decomposition of the surfactant. The resulting material consists of a network of tightly connected BiFeO3 nanoparticles (~6-7 nm in diameter) and has a moderately high surface area (62 m2 g-1) and uniform pores (ca. 6.3 nm). As a result of the unique mesostructure, the porous assemblies of BiFeO3 nanoparticles show an excellent catalytic activity and chemical stability for the reduction of p-nitrophenol to p-aminophenol with NaBH4.
Temporary traffic control handbook for local agencies : tech transfer summary.
DOT National Transportation Integrated Search
2016-03-01
The updated handbook provides local agencies with uniform standards for temporary traffic control. The handbook includes sample layouts that can be used on various projects. Having sample layouts will provide a cost savings to agencies because the de...
The current impact flux on Mars and its seasonal variation
NASA Astrophysics Data System (ADS)
JeongAhn, Youngmin; Malhotra, Renu
2015-12-01
We calculate the present-day impact flux on Mars and its variation over the martian year, using the current data on the orbital distribution of known Mars-crossing minor planets. We adapt the Öpik-Wetherill formulation for calculating collision probabilities, paying careful attention to the non-uniform distributions of the perihelion longitude and the argument of perihelion owing to secular planetary perturbations. We find that, at the current epoch, the Mars crossers have an axial distribution of the argument of perihelion, and the mean direction of their eccentricity vectors is nearly aligned with Mars' eccentricity vector. These previously neglected angular non-uniformities have the effect of depressing the mean annual impact flux by a factor of about 2 compared to the estimate based on a uniform random distribution of the angular elements of Mars-crossers; the amplitude of the seasonal variation of the impact flux is likewise depressed by a factor of about 4-5. We estimate that the flux of large impactors (of absolute magnitude H < 16) within ±30° of Mars' aphelion is about three times larger than when the planet is near perihelion. Extrapolation of our results to a model population of meter-size Mars-crossers shows that if these small impactors have a uniform distribution of their angular elements, then their aphelion-to-perihelion impact flux ratio would be 11-15, but if they track the orbital distribution of the large impactors, including their non-uniform angular elements, then this ratio would be about 3. Comparison of our results with the current dataset of fresh impact craters on Mars (detected with Mars-orbiting spacecraft) appears to rule out the uniform distribution of angular elements.
ERIC Educational Resources Information Center
Gülle, Mahmut; Beyleroglu, Malik; Hazar, Muhsin
2016-01-01
The purpose of the present study is to elucidate the relationship between the performance impact of red and blue colors on the uniforms of young boxers and competition results. The study universe consisted of 650 competitions organized during the 2005-2006 Sakarya City Young Men Boxing Championship by the Turkey Boxing Federation. Sampling of…
A New Method for Determining Permethrin Level on Military Uniform Fabrics
2017-06-01
A new desorption gas chromatography-mass spectrometry (D-GC-MS) based screening tool for permethrin content in military fabrics was developed. The permethrin contained in the specimens is extracted with solvent, with a recovery rate of at least 95%, and the samples are analyzed by gas chromatography-mass spectrometry. Keywords: permethrin; Army Combat Uniform (ACU); camouflage; desorption gas chromatography-mass spectrometry (D-GC-MS).
NASA Astrophysics Data System (ADS)
Verma, Gaurav; Chawla, Sanjeev; Nagarajan, Rajakumar; Iqbal, Zohaib; Albert Thomas, M.; Poptani, Harish
2017-04-01
Two-dimensional localized correlated spectroscopy (2D L-COSY) offers greater spectral dispersion than conventional one-dimensional (1D) MRS techniques, yet long acquisition times and limited post-processing support have slowed its clinical adoption. Improving acquisition efficiency and developing versatile post-processing techniques can bolster the clinical viability of 2D MRS. The purpose of this study was to implement a non-uniformly weighted sampling (NUWS) scheme for faster acquisition of 2D MRS. A NUWS 2D L-COSY sequence was developed for 7T whole-body MRI. A phantom containing metabolites commonly observed in the brain at physiological concentrations was scanned ten times with both the NUWS scheme of 12:48 duration and a 17:04 constant eight-average sequence, using a 32-channel head coil. 2D L-COSY spectra were also acquired from the occipital lobe of four healthy volunteers using both the proposed NUWS and the conventional uniformly-averaged L-COSY sequence. The NUWS 2D L-COSY sequence facilitated a 25% shorter acquisition time while maintaining comparable SNR in human (+0.3%) and phantom (+6.0%) studies compared to uniform averaging. NUWS schemes successfully demonstrated improved efficiency of L-COSY by facilitating a reduction in scan time without affecting signal quality.
A distributed scheduling algorithm for heterogeneous real-time systems
NASA Technical Reports Server (NTRS)
Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi
1991-01-01
Much of the previous work on load balancing and scheduling in distributed environments has been concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other, more complex load allocation policies. Here, the effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While a random task allocation is very sensitive to heterogeneities, the proposed algorithm is shown to be robust to such non-uniformities in system components and load.
Phase transition in nonuniform Josephson arrays: Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Lozovik, Yu. E.; Pomirchy, L. M.
1994-01-01
A disordered 2D system with Josephson interactions is considered. The disordered XY model describes granular films, Josephson arrays, etc. Two types of disorder are analyzed: (1) a randomly diluted system, in which the Josephson coupling constants J_ij are equal to J with probability p or zero otherwise (the bond percolation problem); and (2) coupling constants J_ij that are positive and distributed randomly and uniformly in some interval, either including the vicinity of zero or apart from it. These systems are simulated by the Monte Carlo method. The behaviour of the potential energy, specific heat, phase correlation function and helicity modulus is analyzed. The phase diagram of the diluted system in the T_c-p plane is obtained.
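A minimal Metropolis Monte Carlo sketch of disorder type (1), the bond-diluted 2D XY model, is given below. The lattice size, temperature, dilution probability and single-site update scheme are illustrative assumptions, not the authors' exact simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
Lsize, T, p = 16, 0.8, 0.7   # lattice size, temperature, bond occupation probability

theta = rng.uniform(0.0, 2.0 * np.pi, (Lsize, Lsize))  # phase on each site
# Diluted couplings: J_ij = 1 with probability p, else 0 (bond percolation problem)
Jx = (rng.random((Lsize, Lsize)) < p).astype(float)  # bond to right neighbour
Jy = (rng.random((Lsize, Lsize)) < p).astype(float)  # bond to lower neighbour

def site_energy(th, i, j):
    """Josephson energy of site (i, j) with its four neighbours (periodic BCs)."""
    e = -Jx[i, j] * np.cos(th[i, j] - th[i, (j + 1) % Lsize])
    e -= Jx[i, (j - 1) % Lsize] * np.cos(th[i, j] - th[i, (j - 1) % Lsize])
    e -= Jy[i, j] * np.cos(th[i, j] - th[(i + 1) % Lsize, j])
    e -= Jy[(i - 1) % Lsize, j] * np.cos(th[i, j] - th[(i - 1) % Lsize, j])
    return e

for sweep in range(1000):                     # Metropolis sweeps
    for _ in range(Lsize * Lsize):
        i, j = rng.integers(Lsize, size=2)
        old, e_old = theta[i, j], site_energy(theta, i, j)
        theta[i, j] = rng.uniform(0.0, 2.0 * np.pi)   # propose a new angle
        if rng.random() > np.exp(-(site_energy(theta, i, j) - e_old) / T):
            theta[i, j] = old                          # reject the move
```

Observables such as the specific heat or helicity modulus would then be accumulated over sweeps after an equilibration period.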
Yang, Liang; Wang, Simin; Lv, Zhicheng; Liu, Sheng
2013-04-01
An advanced phosphor conformal coating technology is proposed, and samples with good correlated color temperature (CCT) and chromaticity uniformity are fabricated through phosphor spray-painting technology. Spray painting is also suitable for phosphor conformal coating of whole LED wafers. Samples of different CCTs are obtained by controlling the phosphor film thickness in the range 6-80 μm; the CCT variation of the samples can be controlled within ±200 K. The experimental Δuv reveals that the spray-painting method can obtain a much smaller CCT variation (Δuv of 1.36 × 10^-3) than the conventional dispensing method (Δuv of 11.86 × 10^-3) when light is emitted at angles from -90° to +90°, and the chromaticity area uniformity is also improved significantly.
Revisiting sample size: are big trials the answer?
Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J
2012-07-18
The superiority of the evidence generated in randomized controlled trials over observational data is not conditional on randomization alone. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability that the trial will detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.
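As a worked illustration of the power-sample size link, the sketch below computes the approximate per-group sample size for comparing two proportions with a two-sided normal-approximation test; the event rates are made-up numbers, not from the paper.

```python
import numpy as np
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided test of two proportions."""
    z_a, z_b = norm.ppf(1.0 - alpha / 2.0), norm.ppf(power)
    pooled_var = p1 * (1.0 - p1) + p2 * (1.0 - p2)
    return int(np.ceil((z_a + z_b) ** 2 * pooled_var / (p1 - p2) ** 2))

# Detecting a drop in event rate from 15% to 10% requires roughly 683 patients
# per arm at 80% power; halving the effect size roughly quadruples the n.
print(n_per_group(0.15, 0.10))
```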
Follow-up of a report of a potential linkage for schizophrenia on chromosome 22q12-q13.1: Part 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pulver, A.E.; Lasseter, V.K.; Wolyniec, P.
A collaboration involving four groups of investigators (Johns Hopkins University/Massachusetts Institute of Technology; Medical College of Virginia/The Health Research Board, Dublin; Institute of Psychiatry, London/University of Wales, Cardiff; Centre National de la Recherche Scientifique, Paris) was organized to confirm results suggestive of a schizophrenia susceptibility locus on chromosome 22 identified by the JHU/MIT group after a random search of the genome. Diagnostic, laboratory, and analytical reliability exercises were conducted among the groups to ensure uniformity of procedures. Data from genotyping of 3 dinucleotide repeat polymorphisms (at the loci D22S268, IL2RB, D22S307) for a combined replication sample of 256 families, each having 2 or more affected individuals with DNA, were analysed using a complex autosomal dominant model. This study provided no evidence for linkage or heterogeneity for the region 22q12-q13 under this model. We conclude that if this region confers susceptibility to schizophrenia, it must be in only a small proportion of families. Collaborative efforts to obtain large samples must continue to play an important role in the genetic search for clues to complex psychiatric disorders such as schizophrenia. 32 refs., 3 tabs.
NASA Astrophysics Data System (ADS)
Zhang, R. F.; Chang, W. H.; Jiang, L. F.; Qu, B.; Zhang, S. F.; Qiao, L. P.; Xiang, J. H.
2016-04-01
Micro-arc oxidation (MAO) is an effective method of producing ceramic coatings on magnesium alloys and can considerably improve their corrosion resistance. The coating properties are closely related to microcracks, which inevitably develop on the coating surface. In order to determine the formation and development of these microcracks, anodic coatings were grown on two-phase AZ91HP for different anodizing times in a solution containing the environmentally friendly organic electrolyte phytic acid. The results show that the anodic film initially develops on the α phase. At 50 s, anodic coatings begin to develop on the β phase, evidenced by the formation of a rough area. Owing to the successive development of the coating, microcracks initially appear at the boundary between the coating formed first on the α phase and that developed subsequently on the β phase. With prolonged treatment time, the microcracks near the β phase become evident. After treating for 3 min, the originally rough area on the β phase disappears and the coating becomes almost uniform, with microcracks randomly distributed over the sample surface. Inorganic phosphates are found in the MAO coatings, suggesting that phytate salts decompose due to the high instantaneous temperature produced on the sample surface by spark discharge.
Bayes classification of interferometric TOPSAR data
NASA Technical Reports Server (NTRS)
Michel, T. R.; Rodriguez, E.; Houshmand, B.; Carande, R.
1995-01-01
We report the Bayes classification of terrain types at different sites using airborne interferometric synthetic aperture radar (INSAR) data. A Gaussian maximum likelihood classifier was applied on multidimensional observations derived from the SAR intensity, the terrain elevation model, and the magnitude of the interferometric correlation. Training sets for forested, urban, agricultural, or bare areas were obtained either by selecting samples with known ground truth, or by k-means clustering of random sets of samples uniformly distributed across all sites, and subsequent assignments of these clusters using ground truth. The accuracy of the classifier was used to optimize the discriminating efficiency of the set of features that was chosen. The most important features include the SAR intensity, a canopy penetration depth model, and the terrain slope. We demonstrate the classifier's performance across sites using a unique set of training classes for the four main terrain categories. The scenes examined include San Francisco (CA) (predominantly urban and water), Mount Adams (WA) (forested with clear cuts), Pasadena (CA) (urban with mountains), and Antioch Hills (CA) (water, swamps, fields). Issues related to the effects of image calibration and the robustness of the classification to calibration errors are explored. The relative performance of single polarization Interferometric data classification is contrasted against classification schemes based on polarimetric SAR data.
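A minimal version of such a Gaussian maximum-likelihood classifier is sketched below. This is a generic implementation under my own assumptions, not the authors' processing chain; in their setting the feature vectors would combine per-pixel SAR intensity, elevation-derived, and correlation-derived values.

```python
import numpy as np

class GaussianMLClassifier:
    """Gaussian maximum-likelihood classifier: one multivariate normal per class."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = []
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
            self.params_.append((mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]))
        return self

    def predict(self, X):
        scores = []
        for mu, inv_cov, log_det in self.params_:
            d = X - mu
            # Gaussian log-likelihood up to an additive constant
            scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv_cov, d) + log_det))
        return self.classes_[np.argmax(scores, axis=0)]
```

Training sets built from ground-truth polygons (or from labeled k-means clusters, as in the study) would supply the X and y passed to fit.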
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
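To make the continuization step concrete, the sketch below smooths a discrete score distribution with either a Gaussian or an Epanechnikov kernel. It is a bare-bones illustration: real kernel equating also rescales the kernel so that the continuized distribution preserves the mean and variance of the discrete one.

```python
import numpy as np

def continuize(scores, probs, x, h, kernel="gaussian"):
    """Kernel-smooth a discrete score distribution into a continuous density.

    scores: discrete score points; probs: their probabilities;
    x: evaluation grid; h: bandwidth (controls the bias-variance trade-off).
    """
    u = (x[:, None] - scores[None, :]) / h
    if kernel == "gaussian":
        k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    else:  # Epanechnikov: finite support, which damps bias near the boundaries
        k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return (k * probs[None, :]).sum(axis=1) / h

# Toy skewed test-score distribution on 0..20 with a spike at the low end:
s = np.arange(21)
p = np.exp(-0.3 * s); p[0] += 0.1; p /= p.sum()
grid = np.linspace(-2, 22, 200)
density = continuize(s, p, grid, h=1.0, kernel="epanechnikov")
```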
FIELD SAMPLING OF RESIDUAL AVIATION GASOLINE IN SANDY SOIL
Two complementary field sampling methods for the determination of residual aviation gasoline content in the contaminated capillary fringe of a fine, uniform, sandy soil were investigated. The first method featured field extrusion of core barrels into pint-size Mason jars, while ...
The weak acid nature of precipitation
John O. Frohliger; Robert L. Kane
1976-01-01
Recent measurements of the pH of precipitation leave no doubt that rainfall is acidic. Evidence will be presented that precipitation is a weak acid system. The results of this research indicate the need to establish standard sampling procedures to provide uniform sampling of precipitation.
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey, a 50 m × 50 m quadrat experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a full-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, of lower cost and higher precision, for the snail survey.
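The sketch below reproduces the flavor of this comparison on a synthetic field (made-up counts, not the Chayegang data): it draws simple random, systematic, and stratified random samples from a fully enumerated grid and compares each estimate against the whole-field mean.

```python
import numpy as np

rng = np.random.default_rng(1)
field = rng.gamma(2.0, 1.0, (50, 50))     # synthetic snail density per survey frame
flat, true_mean, n = field.ravel(), field.mean(), 300

srs = rng.choice(flat, size=n, replace=False)          # simple random sampling
step = flat.size // n
sysm = flat[rng.integers(step)::step][:n]              # systematic sampling
# Stratified: split frames into 10 bands (the stratum variable), sample each equally
strat = np.concatenate([rng.choice(band, size=n // 10, replace=False)
                        for band in np.array_split(flat, 10)])

for name, s in [("simple random", srs), ("systematic", sysm), ("stratified", strat)]:
    print(f"{name:>14}: mean={s.mean():.3f}  abs. error={abs(s.mean() - true_mean):.3f}")
```

With a stratum variable that actually correlates with density (altitude in the study), stratification buys its precision by removing between-stratum variance from the estimator.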
NASA Astrophysics Data System (ADS)
Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen
2017-12-01
We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillations (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in the Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily introduces fiducial signals of fluctuations into the random samples and weakens the BAO signals if the cosmic variance cannot be ignored. We propose a smooth function of the redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals improves by 0.33σ for the low-redshift sample and by 0.03σ for the constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current cosmological parameter measurements, such improvements will be valuable for future measurements of galaxy clustering.
Formability analysis of sheet metals by cruciform testing
NASA Astrophysics Data System (ADS)
Güler, B.; Alkan, K.; Efe, M.
2017-09-01
Cruciform biaxial tests are becoming increasingly popular for testing the formability of sheet metals, as they achieve frictionless, in-plane, multi-axial stress states with a single sample geometry. However, premature fracture of the samples during testing prevents the large-strain deformation necessary for formability analysis. In this work, we introduce a miniature cruciform sample design (test region of a few mm) and a test setup that achieve centre fracture and large uniform strains. With its excellent surface finish and optimized geometry, the sample deforms with diagonal strain bands intersecting at the test region. These bands prevent local necking and concentrate the strains at the sample centre. Imaging and strain analysis during testing confirm that uniform strain distributions and centre fracture are achievable for various strain paths ranging from plane-strain to equibiaxial tension. Moreover, the sample deforms without deviating from the predetermined strain ratio under all test conditions, allowing formability analysis at large strains. We demonstrate these features of the cruciform test for three sample materials: aluminium 6061-T6 alloy, DC-04 steel and magnesium AZ31 alloy, and investigate their formability at both the millimetre scale and the microstructure scale.
2013-01-01
Background The use of substandard and degraded medicines is a major public health problem in developing countries such as Cambodia. A collaborative study was conducted to evaluate the quality of amoxicillin–clavulanic acid preparations under tropical conditions in a developing country. Methods Amoxicillin-clavulanic acid tablets were obtained from outlets in Cambodia. Packaging condition, printed information, and other sources of information were examined. The samples were tested for quantity, content uniformity, and dissolution. Authenticity was verified with manufacturers and regulatory authorities. Results A total of 59 samples were collected from 48 medicine outlets. Most (93.2%) of the samples were of foreign origin. Using predetermined acceptance criteria, 12 samples (20.3%) were non-compliant. Eight (13.6%), 10 (16.9%), and 20 (33.9%) samples failed quantity, content uniformity, and dissolution tests, respectively. Samples that violated our observational acceptance criteria were significantly more likely to fail the quality tests (Fisher’s exact test, p < 0.05). Conclusions Improper packaging and storage conditions may reduce the quality of amoxicillin–clavulanic acid preparations at community pharmacies. Strict quality control measures are urgently needed to maintain the quality of amoxicillin–clavulanic acid in tropical countries. PMID:23773420
Standardized Sample Preparation Using a Drop-on-Demand Printing Platform
2013-05-07
Drop-on-demand inkjet printing technology has recently emerged as an effective approach to produce test materials on a similar length scale. In contrast to samples in which most of the material is concentrated along the edges, samples prepared using drop-on-demand inkjet technology demonstrate excellent uniformity, making this a successful and robust methodology for energetic sample preparation. Keywords: drop-on-demand; inkjet printing; sample preparation.
The purpose of this SOP is to establish a uniform procedure for the collection of indoor floor dust samples in the field. This procedure was followed to ensure consistent data retrieval of dust samples during the Arizona NHEXAS project and the Border study. Keywords: field; vacu...
Fast self contained exponential random deviate algorithm
NASA Astrophysics Data System (ADS)
Fernández, Julio F.
1997-03-01
An algorithm that generates random numbers with an exponential distribution, and is about ten times faster than other well-known algorithms, has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup often used in statistical physics to obtain the Gibbs distribution: N numbers, stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.
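For contrast, the textbook baseline such algorithms compete with is the inverse-transform method, sketched below. This is not the Fernández-Rivero algorithm itself, just the standard approach that consumes one uniform deviate (and one logarithm) per output.

```python
import numpy as np

rng = np.random.default_rng(42)

def exponential_deviates(lam, size):
    """Inverse transform: if U ~ Uniform(0,1), then -ln(1-U)/lam ~ Exponential(lam).
    Using 1-U keeps the logarithm's argument strictly positive."""
    return -np.log(1.0 - rng.random(size)) / lam

x = exponential_deviates(2.0, 100_000)
print(x.mean())  # should be close to 1/lam = 0.5
```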
Directed Random Markets: Connectivity Determines Money
NASA Astrophysics Data System (ADS)
Martínez-Martínez, Ismael; López-Ruiz, Ricardo
2013-12-01
The Boltzmann-Gibbs (BG) distribution arises as the statistical equilibrium probability distribution of money among the agents of a closed economic system in which random and undirected exchanges are allowed. When considering a model with uniform savings in the exchanges, the final distribution is close to the gamma family. In this paper, we implement these exchange rules on networks and find that the stationary probability distributions are robust and are not affected by the topology of the underlying network. We then introduce a new family of interactions: random but directed exchanges. In this case, the topology is found to be determinant, and the mean money per economic agent is related to the degree of the node representing the agent in the network. The relation between the mean money per economic agent and its degree is shown to be linear.
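A minimal simulation of the undirected, zero-savings case is sketched below (agent count and step count are illustrative choices): after many exchanges, the histogram of money approaches the Boltzmann-Gibbs exponential p(m) ∝ exp(-m/⟨m⟩).

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 1000, 500_000
money = np.full(N, 100.0)          # closed system: total money is conserved

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    pot = money[i] + money[j]      # pool the two agents' money
    r = rng.random()               # random, undirected repartition of the pool
    money[i], money[j] = r * pot, (1.0 - r) * pot

hist, edges = np.histogram(money, bins=50, density=True)
# hist should decay roughly as exp(-m/100), the BG equilibrium with <m> = 100
```

The directed variant of the paper would replace the symmetric repartition with a rule that favours one endpoint of each (directed) network edge, which is what couples the stationary wealth to node degree.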
Visible digital watermarking system using perceptual models
NASA Astrophysics Data System (ADS)
Cheng, Qiang; Huang, Thomas S.
2001-03-01
This paper presents a visible watermarking system using perceptual models. A watermark image is overlaid translucently onto a primary image, for the purposes of immediate claim of copyright, instantaneous recognition of owner or creator, or deterrence to piracy of digital images or video. The watermark is modulated by exploiting combined DCT-domain and DWT-domain perceptual models, so that the watermark is visually uniform. The resulting watermarked image is visually pleasing and unobtrusive. The location, size and strength of the watermark vary randomly with the underlying image. The randomization makes automatic removal of the watermark difficult even though the algorithm is publicly known, as long as the key to the random sequence generator is kept secret. Experiments demonstrate that the watermarked images have a pleasant visual effect and strong robustness. The watermarking system can be used in copyright notification and protection.
The purpose of this SOP is to establish a uniform procedure for the collection of yard composite soil samples in the field. This procedure was followed to ensure consistent and reliable collection of outdoor soil samples during the Arizona NHEXAS project and the "Border" study. ...
The purpose of this SOP is to establish a uniform procedure for the collection of residential foundation soil samples in the field. This procedure was followed to ensure consistent and reliable collection of outdoor soil samples during the Arizona NHEXAS project and the "Border"...
Exploring diversity in ensemble classification: Applications in large area land cover mapping
NASA Astrophysics Data System (ADS)
Mellor, Andrew; Boukir, Samia
2017-07-01
Ensemble classifiers, such as random forests, are now commonly applied in the field of remote sensing, and have been shown to perform better than single classifier systems, resulting in reduced generalisation error. Diversity across the members of ensemble classifiers is known to have a strong influence on classification performance - whereby classifier errors are uncorrelated and more uniformly distributed across ensemble members. The relationship between ensemble diversity and classification performance has not yet been fully explored in the fields of information science and machine learning and has never been examined in the field of remote sensing. This study is a novel exploration of ensemble diversity and its link to classification performance, applied to a multi-class canopy cover classification problem using random forests and multisource remote sensing and ancillary GIS data, across seven million hectares of diverse dry-sclerophyll dominated public forests in Victoria Australia. A particular emphasis is placed on analysing the relationship between ensemble diversity and ensemble margin - two key concepts in ensemble learning. The main novelty of our work is on boosting diversity by emphasizing the contribution of lower margin instances used in the learning process. Exploring the influence of tree pruning on diversity is also a new empirical analysis that contributes to a better understanding of ensemble performance. Results reveal insights into the trade-off between ensemble classification accuracy and diversity, and through the ensemble margin, demonstrate how inducing diversity by targeting lower margin training samples is a means of achieving better classifier performance for more difficult or rarer classes and reducing information redundancy in classification problems. Our findings inform strategies for collecting training data and designing and parameterising ensemble classifiers, such as random forests. This is particularly important in large area remote sensing applications, for which training data is costly and resource intensive to collect.
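As a concrete handle on the ensemble margin used above, the sketch below computes per-sample margins from the individual tree votes of a random forest. This is generic scikit-learn code on synthetic data, not the authors' multisource pipeline; low-margin samples correspond to the "more difficult" instances whose emphasis is said to boost diversity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           n_classes=3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Collect each tree's vote, then score margin = (correct - best wrong) / n_trees
votes = np.stack([tree.predict(X) for tree in rf.estimators_]).astype(int)
margins = np.empty(len(y))
for k in range(len(y)):
    counts = np.bincount(votes[:, k], minlength=3)
    correct = counts[y[k]]
    best_wrong = np.delete(counts, y[k]).max()
    margins[k] = (correct - best_wrong) / rf.n_estimators  # in [-1, 1]

print("low-margin (difficult) fraction:", (margins < 0.2).mean())
```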
Inconsistencies in emergency instructions on common household product labels.
Cantrell, F Lee; Nordt, Sean Patrick; Krauss, Jamey R
2013-10-01
Human exposures to non-pharmaceutical products often result in serious injury and death annually in the United States. Studies performed more than 25 years ago described inadequate first aid advice on the majority of household products. The current study evaluates contemporary non-pharmaceutical products with respect to the location, uniformity, and type of their first aid and emergency contact instructions. A random convenience sample of commercial product label information was obtained from local retail stores over an 8-month period. Twelve common non-pharmaceutical product categories with large numbers of annual human exposures were identified from National Poison Data System data, with a minimum of 10 unique products per category. The following information was identified: product name and manufacturer; location of instructions on the container; presence and type of route-specific treatment; and medical assistance referral information. A total of 259 product labels were examined. First aid/contact information was located on the rear of the container for 162 products (63%), the side for 28 (11%), the front for 3 (1%), the bottom for 2 (0.77%), behind the label for 14 (5%), and was missing entirely for 50 (19%). Fifty-five products (21%) lacked any first aid instructions. Suggested contacts for accidental poisoning were: none listed, 75 (29%); physician, 144 (56%); poison control centers, 102 (39%); manufacturer, 44 (17%); "Call 911", 10 (4%). Suggested contacts for unintentional exposure and the content of first aid instructions on household products were inconsistent, frequently incomplete, and at times absent. Instruction locations similarly lacked uniformity. Household product labels need to provide concise, accurate first aid and emergency contact instructions in easy-to-understand language in a universal format.
Connolly, Brian; Matykiewicz, Pawel; Bretonnel Cohen, K; Standridge, Shannon M; Glauser, Tracy A; Dlugos, Dennis J; Koh, Susan; Tham, Eric; Pestian, John
2014-01-01
The constant progress in computational linguistic methods provides amazing opportunities for discovering information in clinical text and enables the clinical scientist to explore novel approaches to care. However, these new approaches need evaluation. We describe an automated system to compare descriptions of epilepsy patients at three different organizations: Cincinnati Children's Hospital, the Children's Hospital Colorado, and the Children's Hospital of Philadelphia. To our knowledge, there have been no similar previous studies. In this work, a support vector machine (SVM)-based natural language processing (NLP) algorithm is trained to classify epilepsy progress notes as belonging to a patient with a specific type of epilepsy from a particular hospital. The same SVM is then used to classify notes from another hospital. Our null hypothesis is that an NLP algorithm cannot be trained using epilepsy-specific notes from one hospital and subsequently used to classify notes from another hospital better than a random baseline classifier. The hypothesis is tested using epilepsy progress notes from the three hospitals. We are able to reject the null hypothesis at the 95% level. It is also found that classification was improved by including notes from a second hospital in the SVM training sample. With a reasonably uniform epilepsy vocabulary and an NLP-based algorithm able to use this uniformity to classify epilepsy progress notes across different hospitals, we can pursue automated comparisons of patient conditions, treatments, and diagnoses across different healthcare settings.
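A minimal version of this cross-hospital experiment might look like the sketch below. The variable names (notes_a, labels_a for the training hospital; notes_b, labels_b for the test hospital) are hypothetical placeholders, and the TF-IDF representation is a generic assumption rather than the authors' exact feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# notes_a/labels_a: epilepsy progress notes and epilepsy-type labels, hospital A
# notes_b/labels_b: the same for hospital B (all four are hypothetical placeholders)
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2), LinearSVC())
clf.fit(notes_a, labels_a)                    # train on one hospital's notes

acc = accuracy_score(labels_b, clf.predict(notes_b))  # classify the other hospital
print(f"cross-hospital accuracy: {acc:.3f}  (compare against a random baseline)")
```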
Lobo-Lapidus, Rodrigo J; Gates, Bruce C
2010-11-02
Supported rhenium complexes were prepared from CH(3)Re(CO)(5) and dealuminated HY zeolite or NaY zeolite, each with a Si/Al atomic ratio of 30. The samples were characterized with infrared (IR) and extended X-ray absorption fine structure (EXAFS) spectroscopies. EXAFS data characterizing the sample formed by the reaction of CH(3)Re(CO)(5) with dealuminated HY zeolite show that the rhenium complexes were bonded to the zeolite framework, incorporating, on average, three carbonyl ligands per Re atom (as shown by Re-C and multiple-scattering Re-O EXAFS contributions). The IR spectra, consistent with this result, show that the supported rhenium carbonyls were bonded near aluminum sites of the zeolite, as shown by the decrease in intensity of the IR bands characterizing the acidic silanol groups resulting from the reaction of the rhenium carbonyl with the zeolite. This supported metal complex was characterized by narrow peaks in the ν(CO) region of the IR spectrum, indicating highly uniform species. In contrast, the species formed from CH(3)Re(CO)(5) on NaY zeolite lost fewer carbonyl ligands than those formed on HY zeolite and were significantly less uniform, as indicated by the greater breadth of the ν(CO) bands in the IR spectra. The results show the importance of zeolite H(+) sites for the formation of uniform supported rhenium carbonyls from CH(3)Re(CO)(5); the formation of such uniform complexes did not occur on the NaY zeolite.
Forced magnetohydrodynamic turbulence in a uniform external magnetic field
NASA Technical Reports Server (NTRS)
Hossain, M.; Vahala, G.; Montgomery, D.
1985-01-01
Two-dimensional dissipative MHD turbulence is randomly driven at small spatial scales and is studied by numerical simulation in the presence of a strong uniform external magnetic field. A behavior is observed which is apparently distinct from the inverse cascade which prevails in the absence of an external magnetic field. The magnetic spectrum becomes dominated by the three longest wavelength Alfven waves in the system allowed by the boundary conditions: those which, in a box of edge 2 pi, have wave numbers (kx, ky) = (1, 0), (1, 1), and (1, -1), where the external magnetic field is in the x direction. At any given instant, one of these three modes dominates the vector potential spectrum, but they do not constitute a resonantly coupled triad. Rather, they are apparently coupled by the smaller-scale turbulence.
Generalized Effective Medium Theory for Particulate Nanocomposite Materials
Siddiqui, Muhammad Usama; Arif, Abul Fazal M.
2016-01-01
The thermal conductivity of particulate nanocomposites is strongly dependent on the size, shape, orientation and dispersion uniformity of the inclusions. To correctly estimate the effective thermal conductivity of the nanocomposite, all these factors should be included in the prediction model. In this paper, the formulation of a generalized effective medium theory for the determination of the effective thermal conductivity of particulate nanocomposites with multiple inclusions is presented. The formulated methodology takes into account all the factors mentioned above and can be used to model nanocomposites with multiple inclusions that are randomly oriented or aligned in a particular direction. The effect of inclusion dispersion non-uniformity is modeled using a two-scale approach. The applications of the formulated effective medium theory are demonstrated using previously published experimental and numerical results for several particulate nanocomposites. PMID:28773817
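For orientation, the classical Maxwell-Garnett mixing rule below is the spherical-inclusion baseline that generalized effective medium theories of this kind extend with shape, orientation, and dispersion-uniformity factors; it is not the paper's formulation, and the numbers are illustrative:

```python
def maxwell_garnett_k(k_m, k_p, phi):
    """Effective thermal conductivity of a dilute dispersion of spherical
    particles (conductivity k_p, volume fraction phi) in a matrix of
    conductivity k_m, per the classical Maxwell-Garnett mixing rule."""
    num = k_p + 2.0 * k_m + 2.0 * phi * (k_p - k_m)
    den = k_p + 2.0 * k_m - phi * (k_p - k_m)
    return k_m * num / den

# Example: 5 vol% of 30 W/(m.K) particles in a 0.3 W/(m.K) polymer matrix.
print(f"k_eff = {maxwell_garnett_k(0.3, 30.0, 0.05):.3f} W/(m.K)")
```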
Accretion rates of protoplanets 2: Gaussian distribution of planetesimal velocities
NASA Technical Reports Server (NTRS)
Greenzweig, Yuval; Lissauer, Jack J.
1991-01-01
The growth rate of a protoplanet embedded in a uniform surface density disk of planetesimals having a triaxial Gaussian velocity distribution was calculated. The longitudes of the apses and nodes of the planetesimals are uniformly distributed, and the protoplanet is on a circular orbit. The accretion rate in the two body approximation is enhanced by a factor of approximately 3, compared to the case where all planetesimals have eccentricity and inclination equal to the root mean square (RMS) values of those variables in the Gaussian distribution disk. Numerical three body integrations show comparable enhancements, except when the RMS initial planetesimal eccentricities are extremely small. This enhancement in accretion rate should be incorporated by all models, analytical or numerical, which assume a single random velocity for all planetesimals in lieu of a Gaussian distribution.
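A toy Monte Carlo illustrating the direction of the effect: averaging the gravitationally focused two-body rate over a Gaussian speed distribution exceeds evaluating it at a single RMS speed (a Jensen-inequality effect in the 1/v^2 focusing term). The units and escape speed below are arbitrary, and this sketch does not reproduce the paper's factor of ~3, which comes from the full orbital treatment:

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_factor(v, v_esc):
    # Two-body rate factor: flux times focused cross-section,
    # Gamma proportional to v * (1 + (v_esc / v)**2).
    return v * (1.0 + (v_esc / v) ** 2)

sigma = 1.0                    # per-axis velocity dispersion (arbitrary units)
v_esc = 10.0 * sigma           # escape speed from the protoplanet (assumed)
v = np.linalg.norm(rng.normal(0.0, sigma, size=(200_000, 3)), axis=1)
v_rms = np.sqrt(np.mean(v ** 2))

ratio = rate_factor(v, v_esc).mean() / rate_factor(v_rms, v_esc)
print(f"Gaussian-averaged / single-RMS rate ratio: {ratio:.2f}")
```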
NASA Astrophysics Data System (ADS)
Mukherjee, Santanu; Schuppert, Nicholas; Bates, Alex; Jasinski, Jacek; Hong, Jong-Eun; Choi, Moon Jong; Park, Sam
2017-04-01
A novel solvoplasma-based technique was used to fabricate highly uniform SnO2 nanowires (NWs) for application as an anode in sodium-ion batteries (SIBs). This technique is scalable and rapid, and utilizes a rigorous cleaning process to produce very pure SnO2 NWs with enhanced porosity, which improves sodium-ion hosting and reaction kinetics. The batch of NWs obtained from the plasma process was named the "as-made" sample and, after cleaning, the "pure" sample. Structural characterization showed that the as-made sample has a K+ ion impurity which is absent in the pure samples. The pure samples have a higher maximum specific capacity, 400.71 mAh g-1, and Coulombic efficiency, 85%, compared to the as-made samples, which have a maximum specific capacity of 174.69 mAh g-1 and a Coulombic efficiency of 74% upon cycling. A study of the electrochemical impedance spectra showed that the as-made samples have a higher interfacial and diffusion resistance than the pure samples, and resistances increased after 50 cycles of cell operation for both samples due to progressive electrode degradation. Specific energy vs. specific power plots were employed to analyze the performance of the system with respect to the working conditions.
Stakeholder Perceptions of Cyberbullying Cases: Application of the Uniform Definition of Bullying.
Moreno, Megan A; Suthamjariya, Nina; Selkie, Ellen
2018-04-01
The Uniform Definition of Bullying was developed to address bullying and cyberbullying, and to promote consistency in measurement and policy. The purpose of this study was to understand community stakeholder perceptions of typical cyberbullying cases, and to evaluate how these case descriptions align with the Uniform Definition. In this qualitative case analysis we recruited stakeholders commonly involved in cyberbullying. We used purposeful sampling to identify and recruit adolescents and young adults, parents, and professionals representing education and health care. Participants were asked to write a typical case of cyberbullying and descriptors in the context of a group discussion. We applied content analysis to case excerpts using inductive and deductive approaches, and chi-squared tests for mixed methods analyses. A total of 68 participants contributed; participants included 73% adults and 27% adolescents and young adults. A total of 650 excerpts were coded from participants' example cases and 362 (55.6%) were consistent with components of the Uniform Definition. The most frequently mentioned component of the Uniform Definition was Aggressive Behavior (n = 218 excerpts), whereas Repeated was mentioned infrequently (n = 19). Most participants included two to three components of the Uniform Definition within an example case; none of the example cases included all components of the Uniform Definition. We found that most participants described cyberbullying cases using few components of the Uniform Definition. Findings can be applied toward considering refinement of the Uniform Definition to ensure stakeholders find it applicable to cyberbullying. Copyright © 2017 The Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
The Effect of Laziness in Group Chase and Escape
NASA Astrophysics Data System (ADS)
Masuko, Makoto; Hiraoka, Takayuki; Ito, Nobuyasu; Shimada, Takashi
2017-08-01
The effect of laziness in the group chase and escape problem is studied using a simple model. Laziness is introduced as random walks in two ways: uniformly and in a "division of labor" way. It is shown that while the former is always ineffective, the latter can improve the efficiency of catching through the formation of a pincer-attack configuration by diligent and lazy chasers.
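A toy reimplementation of this setting (not the authors' exact model): chasers and targets hop on a periodic lattice, targets flee their nearest chaser, diligent chasers step toward their nearest target, and lazy chasers perform pure random walks, giving the "division of labor" variant when n_lazy > 0:

```python
import random

L = 50                                     # periodic lattice edge
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def dist(a, b):
    # Manhattan distance with periodic boundaries.
    dx = min(abs(a[0] - b[0]), L - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), L - abs(a[1] - b[1]))
    return dx + dy

def step(p, s):
    return ((p[0] + s[0]) % L, (p[1] + s[1]) % L)

def catch_time(n_chasers=20, n_lazy=0, n_targets=10, seed=1, max_t=200_000):
    rng = random.Random(seed)
    chasers = [(rng.randrange(L), rng.randrange(L)) for _ in range(n_chasers)]
    targets = {(rng.randrange(L), rng.randrange(L)) for _ in range(n_targets)}
    for t in range(1, max_t):
        for i, c in enumerate(chasers):
            if i < n_lazy or not targets:      # lazy chaser: random walk
                chasers[i] = step(c, rng.choice(MOVES))
            else:                              # diligent: approach nearest target
                tgt = min(targets, key=lambda x: dist(c, x))
                chasers[i] = min((step(c, s) for s in MOVES),
                                 key=lambda q: dist(q, tgt))
            targets.discard(chasers[i])        # catch on contact
        if not targets:
            return t
        # Surviving targets hop away from their nearest chaser
        # (coinciding targets merge in this toy).
        targets = {max((step(p, s) for s in MOVES),
                       key=lambda q: dist(q, min(chasers,
                                                 key=lambda c: dist(p, c))))
                   for p in targets}
    return None

print("all diligent:", catch_time(n_lazy=0))
print("half lazy   :", catch_time(n_lazy=10))
```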
Concurrent infection with sibling Trichinella species in a natural host.
Pozio, E; Bandi, C; La Rosa, G; Järvis, T; Miller, I; Kapel, C M
1995-10-01
Random amplified polymorphic DNA (RAPD) analysis of individual Trichinella muscle larvae, collected from several sylvatic and domestic animals in Estonia, revealed concurrent infection of a raccoon dog with Trichinella nativa and Trichinella britovi. This finding provides strong support for their taxonomic ranking as sibling species. These 2 species appear uniformly distributed among sylvatic animals throughout Estonia, while Trichinella spiralis appears restricted to the domestic habitat.
Visualization of expanding warm dense gold and diamond heated rapidly by laser-generated ion beams.
Bang, W; Albright, B J; Bradley, P A; Gautier, D C; Palaniyappan, S; Vold, E L; Santiago Cordoba, M A; Hamilton, C E; Fernández, J C
2015-09-22
With the development of several novel heating sources, scientists can now heat a small sample isochorically above 10,000 K. Although matter at such an extreme state, known as warm dense matter, is commonly found in astrophysics (e.g., in planetary cores) as well as in high energy density physics experiments, its properties are not well understood and are difficult to predict theoretically. This is because the approximations made to describe condensed matter or high-temperature plasmas are invalid in this intermediate regime. A sufficiently large warm dense matter sample that is uniformly heated would be ideal for these studies, but has been unavailable to date. Here we have used a beam of quasi-monoenergetic aluminum ions to heat gold and diamond foils uniformly and isochorically. For the first time, we visualized directly the expanding warm dense gold and diamond with an optical streak camera. Furthermore, we present a new technique to determine the initial temperature of these heated samples from the measured expansion speeds of gold and diamond into vacuum. We anticipate the uniformly heated solid density target will allow for direct quantitative measurements of equation-of-state, conductivity, opacity, and stopping power of warm dense matter, benefiting plasma physics, astrophysics, and nuclear physics.
Suspensions as a Valuable Alternative to Extemporaneously Compounded Capsules.
Dijkers, Eli; Nanhekhan, Valerie; Thorissen, Astrid; Polonini, Hudson
2017-01-01
The objective of this study was to determine the variation in content of 74 different active pharmaceutical ingredients (APIs) and compare it with what is known in the literature for the content uniformity of extemporaneous prepared capsules. Active pharmaceutical ingredients quantification was performed by high-performance liquid chromatography, via a stability-indicating method. Samples for all active pharmaceutical ingredients were taken throughout a 90-day period and the content was determined. In total, 5,190 different samples were analyzed for 74 different active pharmaceutical ingredients at room (15°C to 25°C) or controlled refrigerated temperature (2°C to 8°C). Each of these datasets was analyzed according to the United States Pharmacopeia Content Uniformity monograph, corrected for the sample number. The mean acceptance values were well within specifications. In addition, all suspensions complied with the criteria defined by the British Pharmacopoeia monograph for Content Uniformity of Liquid Dispersions for both room and controlled refrigerated temperature. In previous studies, it was found that a routine weight variation check is often not sufficient for quality assurance of extemporaneous prepared capsules. Compounded oral liquids show little variation in content for 74 different active pharmaceutical ingredients; therefore, compounded oral liquids are a suitable alternative when compounding individualized medications for patients. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
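For reference, a sketch of the USP <905>-style acceptance value computation that underlies such content uniformity testing, simplified to the first analysis stage with n = 10 units and a 100% label-claim target (the sample-number correction mentioned above is not shown):

```python
import statistics

def acceptance_value(contents_pct, k=2.4):
    """Simplified USP <905> acceptance value AV = |M - xbar| + k*s for
    individual contents expressed as % of label claim; k = 2.4 at the
    first stage (n = 10), and AV <= 15.0 passes at level L1."""
    xbar = statistics.mean(contents_pct)
    s = statistics.stdev(contents_pct)
    m = min(max(xbar, 98.5), 101.5)        # reference value M
    return abs(m - xbar) + k * s

# Hypothetical assay results for 10 units (% of label claim).
units = [99.1, 101.4, 100.2, 98.7, 99.9, 100.8, 101.0, 99.5, 100.1, 99.8]
print(f"AV = {acceptance_value(units):.2f} (limit 15.0)")
```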
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, TK
Purpose: In proton beam configuration for spot scanning proton therapy (SSPT), one can define the spacing between spots and lines of scanning as a ratio of a given spot size. If the spacing increases, the number of spots decreases, which can potentially decrease scan time and thus whole treatment time, and vice versa. However, if the spacing is too large, the uniformity of the scanned field decreases. The field uniformity can also be affected by motion during SSPT beam delivery. In the present study, the interplay between spot/line spacing and motion is investigated. Methods: We used four Gaussian-shaped spot sizes with 0.5 cm, 1.0 cm, 1.5 cm, and 2.0 cm FWHM; three spot/line spacings that create a uniform field profile, namely 1/3*FWHM, σ/3*FWHM, and 2/3*FWHM; and three random motion amplitudes within +/-0.3 mm, +/-0.5 mm, and +/-1.0 mm. We planned 2 Gy uniform single-layer 10×10 cm2 and 20×20 cm2 fields. Then, the mean dose within 80% of the area of the given field size, the MU contributed per spot (assuming a 1 cGy/MU calibration for all spot sizes), the number of spots, and the uniformity were calculated. Results: The plans with spot/line spacing equal to or smaller than 2/3*FWHM without motion create ∼100% uniformity. However, it was found that the uniformity decreases with increased spacing, and this is more pronounced with smaller spot sizes, but is not affected by scanned field size. Conclusion: It was found that motion during proton beam delivery can alter the dose uniformity, and the amount of alteration changes with spot size (which changes with energy) and with spot/line spacing. Currently, robust evaluation in a TPS (e.g., the Eclipse system) performs range uncertainty evaluation using isocenter shifts and CT calibration errors. Based on the presented study, it is recommended to add interplay effect evaluation to the robust evaluation process. For a future study, the additional interplay between energy layers and motion is expected to present a volumetric effect.
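A 1D toy of the spacing/motion interplay described above (not the study's treatment-planning calculation): sum equally weighted Gaussian spots at a chosen fraction of the FWHM, jitter each spot position by a random offset to mimic motion, and report the relative dose spread over the central field:

```python
import numpy as np

def nonuniformity(fwhm_cm=1.0, spacing_frac=2/3, motion_mm=0.0, seed=0):
    """Relative dose spread (std/mean, 0 = perfectly flat) over the central
    8 cm of a 1D field built from Gaussian spots spaced spacing_frac*FWHM,
    each displaced by a uniform random offset of +/- motion_mm."""
    rng = np.random.default_rng(seed)
    sigma = fwhm_cm / 2.3548               # FWHM = 2*sqrt(2*ln 2)*sigma
    d = spacing_frac * fwhm_cm
    centers = np.arange(-7.0, 7.0 + d, d)  # spots padded beyond the field
    centers = centers + rng.uniform(-motion_mm / 10, motion_mm / 10,
                                    centers.size)  # mm -> cm
    x = np.linspace(-4.0, 4.0, 801)
    dose = np.exp(-(x[:, None] - centers[None, :]) ** 2
                  / (2 * sigma ** 2)).sum(axis=1)
    return dose.std() / dose.mean()

for amp in (0.0, 0.3, 0.5, 1.0):           # random motion amplitude in mm
    print(f"motion +/-{amp} mm: {nonuniformity(motion_mm=amp):.4f}")
```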
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.
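For intuition, the randomized gossip primitive underlying such schemes, in its simplest scalar-averaging form (the GIKF itself exchanges filter estimates and error covariances rather than scalars; the ring network below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 20
edges = [(i, (i + 1) % n) for i in range(n)]   # ring network (assumed topology)
x = rng.normal(size=n)                         # initial per-sensor states
target = x.mean()                              # gossip preserves the average

# Randomized gossip: at each tick a uniformly random link activates and
# its two endpoints average their states; on a connected graph all states
# converge to the network average.
for _ in range(5000):
    i, j = edges[rng.integers(len(edges))]
    x[i] = x[j] = 0.5 * (x[i] + x[j])

print("max deviation from network average:", np.abs(x - target).max())
```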
Laboratory Evaluation of Remediation Alternatives for U.S. Coast Guard Small Arms Firing Ranges
1999-11-01
Solidification/stabilization (S/S) is an immobilization process that involves the mixing of a contaminated soil with a binder material to enhance the physical and chemical...samples were shipped to WES for laboratory analysis. Phase III: Homogenization of the Bulk Samples. Each of the bulk samples was separately mixed to...produce uniform samples for testing. These mixed bulk soil samples were analyzed for metal content. Phase IV: Characterization of the Bulk Soils
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-22
...The Departments of Health and Human Services, Labor, and the Treasury (the Departments) are simultaneously publishing in the Federal Register this document and proposed regulations (2011 proposed regulations) under the Patient Protection and Affordable Care Act to implement the disclosure for group health plans and health insurance issuers of the summary of benefits and coverage (SBC) and the uniform glossary. This document proposes a template for an SBC; instructions, sample language, and a guide for coverage examples calculations to be used in completing the template; and a uniform glossary that would satisfy the disclosure requirements under section 2715 of the Public Health Service (PHS) Act. Comments are invited on these materials.
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1991-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
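A minimal sketch of the substructuring idea on a model problem far simpler than the backstep flow: alternating Schwarz iteration for -u'' = 1 on (0, 1), with two overlapping subdomains and a direct solve on each:

```python
import numpy as np

n = 101
h = 1.0 / (n - 1)
xg = np.linspace(0.0, 1.0, n)
f = np.ones(n)
u = np.zeros(n)                       # iterate, with u(0) = u(1) = 0
exact = 0.5 * xg * (1.0 - xg)         # -u'' = 1 has solution x(1-x)/2

def subdomain_solve(u, lo, hi):
    # Dirichlet solve of -u'' = f on interior nodes lo+1..hi-1, taking
    # boundary values at nodes lo and hi from the current iterate.
    m = hi - lo - 1
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h**2
    b[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, b)

for _ in range(20):                   # alternate over overlapping subdomains
    subdomain_solve(u, 0, 60)         # subdomain 1: nodes 0..60
    subdomain_solve(u, 40, n - 1)     # subdomain 2: nodes 40..100
print("max nodal error:", np.abs(u - exact).max())
```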
Generalized Entanglement Entropies of Quantum Designs.
Liu, Zi-Wen; Lloyd, Seth; Zhu, Elton Yechao; Zhu, Huangjun
2018-03-30
The entanglement properties of random quantum states or dynamics are important to the study of a broad spectrum of disciplines of physics, ranging from quantum information to high energy and many-body physics. This Letter investigates the interplay between the degrees of entanglement and randomness in pure states and unitary channels. We reveal strong connections between designs (distributions of states or unitaries that match certain moments of the uniform Haar measure) and generalized entropies (entropic functions that depend on certain powers of the density operator), by showing that Rényi entanglement entropies averaged over designs of the same order are almost maximal. This strengthens the celebrated Page's theorem. Moreover, we find that designs of an order that is logarithmic in the dimension maximize all Rényi entanglement entropies and so are completely random in terms of the entanglement spectrum. Our results relate the behaviors of Rényi entanglement entropies to the complexity of scrambling and quantum chaos in terms of the degree of randomness, and suggest a generalization of the fast scrambling conjecture.
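A small numerical illustration of the Haar-random (infinite-order design) case: the Rényi-2 entanglement entropy of random pure states concentrates within an additive constant of the maximum (dimensions below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

def renyi2_random_state(d_a, d_b):
    # A normalized complex Gaussian vector is Haar-distributed on the
    # bipartite state space C^d_a (x) C^d_b.
    psi = rng.normal(size=(d_a, d_b)) + 1j * rng.normal(size=(d_a, d_b))
    psi /= np.linalg.norm(psi)
    rho_a = psi @ psi.conj().T               # reduced state on subsystem A
    return -np.log2(np.real(np.trace(rho_a @ rho_a)))  # S_2 = -log Tr rho^2

d_a = d_b = 16
s2 = [renyi2_random_state(d_a, d_b) for _ in range(200)]
print(f"mean S_2 = {np.mean(s2):.3f} bits, maximum = {np.log2(d_a):.3f} bits")
```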
Shah, R; Worner, S P; Chapman, R B
2012-10-01
Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection requires at least one (≥1) resistant individual to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, attempts to estimate the entire population (≥90%) of resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans, while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, the sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed and sampled systematically, the sample sizes required for such detection were 6000 (1%), 600 (10%), and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed and sampled systematically, sample sizes of 3000 and 1500, respectively, were necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
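The randomly distributed detection sizes can be cross-checked against the standard binomial bound 1 - (1 - f)^n >= 0.95 (a back-of-envelope sketch, not the paper's simulation; the study's reported sizes also reflect its particular sampling design):

```python
import math

def detection_sample_size(freq, power=0.95):
    """Smallest n such that a simple random sample contains at least one
    resistant individual with probability >= power, assuming a binomial
    model with resistance frequency freq."""
    return math.ceil(math.log(1.0 - power) / math.log(1.0 - freq))

for freq in (0.01, 0.10, 0.20):
    print(f"frequency {freq:.0%}: n >= {detection_sample_size(freq)}")
```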
Investigating the Randomness of Numbers
ERIC Educational Resources Information Center
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
Identification of Homophily and Preferential Recruitment in Respondent-Driven Sampling.
Crawford, Forrest W; Aronow, Peter M; Zeng, Li; Li, Jianghong
2018-01-01
Respondent-driven sampling (RDS) is a link-tracing procedure used in epidemiologic research on hidden or hard-to-reach populations in which subjects recruit others via their social networks. Estimates from RDS studies may have poor statistical properties due to statistical dependence in sampled subjects' traits. Two distinct mechanisms account for dependence in an RDS study: homophily, the tendency for individuals to share social ties with others exhibiting similar characteristics, and preferential recruitment, in which recruiters do not recruit uniformly at random from their network alters. The different effects of network homophily and preferential recruitment in RDS studies have been a source of confusion and controversy in methodological and empirical research in epidemiology. In this work, we gave formal definitions of homophily and preferential recruitment and showed that neither is identified in typical RDS studies. We derived nonparametric identification regions for homophily and preferential recruitment and showed that these parameters were not identified unless the network took a degenerate form. The results indicated that claims of homophily or recruitment bias measured from empirical RDS studies may not be credible. We applied our identification results to a study involving both a network census and RDS on a population of injection drug users in Hartford, Connecticut (2012-2013). © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Agueros, M. A.; Fournier, A.; Street, R.; Ofek, E.; Levitan, D. B.; PTF Collaboration
2013-01-01
Many current photometric, time-domain surveys are driven by specific goals such as searches for supernovae or transiting exoplanets, or studies of stellar variability. These goals in turn set the cadence with which individual fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several such sub-surveys are being conducted in parallel, leading to extremely non-uniform sampling over the survey's nearly 20,000 sq. deg. footprint. While the typical 7.26 sq. deg. PTF field has been imaged 20 times in R-band, ~2300 sq. deg. have been observed more than 100 times. We use the existing PTF data (6.4 × 10^7 light curves) to study the trade-off that occurs when searching for microlensing events when one has access to a large survey footprint with irregular sampling. To examine the probability that microlensing events can be recovered in these data, we also test previous statistics used on uniformly sampled data to identify variables and transients. We find that one such statistic, the von Neumann ratio, performs best for identifying simulated microlensing events. We develop a selection method using this statistic and apply it to data from all PTF fields with >100 observations to uncover a number of interesting candidate events. This work can help constrain all-sky event rate predictions and test microlensing signal recovery in large datasets, both of which will be useful to future wide-field, time-domain surveys such as the LSST.
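A sketch of the von Neumann ratio statistic on a simulated light curve; the Gaussian "bump" here is a crude stand-in for a proper Paczynski microlensing curve, and the survey's actual selection thresholds are not reproduced:

```python
import numpy as np

def von_neumann_ratio(mag):
    """Mean squared successive difference over the variance. Uncorrelated
    noise gives values near 2; smooth correlated variability (e.g., a
    microlensing event) drives the ratio well below 2."""
    mag = np.asarray(mag, dtype=float)
    return np.mean(np.diff(mag) ** 2) / np.var(mag, ddof=1)

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 100.0, 150))    # irregular sampling epochs
noise = rng.normal(0.0, 0.05, t.size)        # photometric scatter (mag)
bump = -0.8 * np.exp(-0.5 * ((t - 50.0) / 5.0) ** 2)  # brightening proxy

print("noise only: ", round(von_neumann_ratio(noise), 2))
print("with event: ", round(von_neumann_ratio(noise + bump), 2))
```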
The x ray properties of a large, uniform QSO sample: Einstein observations of the LBQS
NASA Technical Reports Server (NTRS)
Margon, B.; Anderson, S. F.; Xu, X.; Green, P. J.; Foltz, C. B.
1992-01-01
Although there are large numbers of Quasi Stellar Objects (QSO's) now observed in X rays, extensive X-ray observations of uniformly selected, 'complete' QSO samples are more rare. The Large Bright QSO Survey (LBQS) consists of about 1000 objects with well understood properties, most brighter than B = 18.8 and thus amenable to X-ray detections in relatively brief exposures. The sample is thought to be highly complete in the range 0.2 less than z less than 3.3, a significantly broader interval than many other surveys. The Einstein IPC observed 150 of these objects, mostly serendipitously, during its lifetime. We report the results of an analysis of these IPC data, considering not only the 20 percent of the objects we find to have positive X-ray detections, but also the ensemble X-ray properties derived by 'image stacking'.
RECAL: A Computer Program for Selecting Sample Days for Recreation Use Estimation
D.L. Erickson; C.J. Liu; H. Ken Cordell; W.L. Chen
1980-01-01
Recreation Calendar (RECAL) is a computer program in PL/I for drawing a sample of days for estimating recreation use. With RECAL, a sampling period of any length may be chosen; simple random, stratified random, and factorial designs can be accommodated. The program randomly allocates days to strata and locations.
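RECAL itself is written in PL/I; below is a compact Python sketch of the stratified random allocation of sample days it performs (the stratum labels and day pools are hypothetical):

```python
import random

def sample_days(strata, seed=7):
    """Stratified random sampling of days: strata maps a stratum label to
    (pool_of_days, n_to_draw); days are drawn without replacement."""
    rng = random.Random(seed)
    return {label: sorted(rng.sample(days, n))
            for label, (days, n) in strata.items()}

strata = {
    "weekday": (list(range(1, 23)), 4),    # hypothetical day pools for a month
    "weekend": (list(range(23, 31)), 2),
}
print(sample_days(strata))
```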
Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling
ERIC Educational Resources Information Center
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah
2014-01-01
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
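One reading of the approach, as a hedged sketch rather than the authors' procedure: stratify the recruitment pool on an estimated propensity score and draw units at random within each stratum, so the experimental sample tracks the target population's score distribution:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical scalar propensity scores for a recruitment pool of 2000 units.
pool_scores = rng.beta(2, 5, size=2000)

# Five equal-count strata from the score quantiles.
edges = np.quantile(pool_scores, np.linspace(0.0, 1.0, 6))
strata = np.clip(np.digitize(pool_scores, edges[1:-1]), 0, 4)

# Simple random sample of 12 units within each stratum.
recruited = np.concatenate([
    rng.choice(np.flatnonzero(strata == s), size=12, replace=False)
    for s in range(5)
])
print("recruited per stratum:", np.bincount(strata[recruited]))
```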
30 CFR 7.47 - Deflection temperature test.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ±3.6 °F (23 ±2 °C) and 50 ±5% relative humidity for at least 40 hours. (2) Place a sample on supports... sample at the point of loading as the temperature of the medium is increased at a uniform rate of 3.6...
30 CFR 7.47 - Deflection temperature test.
Code of Federal Regulations, 2014 CFR
2014-07-01
... ±3.6 °F (23 ±2 °C) and 50 ±5% relative humidity for at least 40 hours. (2) Place a sample on supports... sample at the point of loading as the temperature of the medium is increased at a uniform rate of 3.6...
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
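A minimal sketch of classical search-curve FAST, main effects only (not the authors' extended interaction estimator), demonstrated on the standard Ishigami test function; the frequency set below is chosen ad hoc to avoid interference up to the fourth harmonic:

```python
import numpy as np

def fast_first_order(model, omegas, n=1025, harmonics=4):
    """Main-effect FAST indices: drive each input along the search curve
    u_i(s) = 1/2 + arcsin(sin(w_i s))/pi, s in [0, 2*pi), and read the
    partial variance of input i off the Fourier power at harmonics of w_i."""
    s = 2.0 * np.pi * np.arange(n) / n
    u = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi   # shape (d, n)
    y = model(u)
    y = y - y.mean()
    total_var = np.var(y)
    indices = []
    for w in omegas:
        power = 0.0
        for p in range(1, harmonics + 1):
            a = 2.0 / n * np.sum(y * np.cos(p * w * s))
            b = 2.0 / n * np.sum(y * np.sin(p * w * s))
            power += 0.5 * (a * a + b * b)
        indices.append(power / total_var)
    return indices

def ishigami(u, a=7.0, b=0.1):
    x = -np.pi + 2.0 * np.pi * u            # map (0,1) to (-pi, pi)
    return np.sin(x[0]) + a * np.sin(x[1]) ** 2 + b * x[2] ** 4 * np.sin(x[0])

# Known main effects for Ishigami are roughly 0.31, 0.44, 0.00.
print(fast_first_order(ishigami, np.array([11, 21, 29])))
```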
Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and on a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
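A sketch of the general idea, using a monotone (PCHIP) spline in place of the paper's B-spline and rational-spline fits: fit the empirical quantile function of a sample, then push uniform variates through it to generate new samples:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(5)

# Stand-in "experimental" sample; any skewed data would do.
data = np.sort(rng.gamma(2.0, 1.0, size=500))
p = (np.arange(1, data.size + 1) - 0.5) / data.size   # plotting positions

# Monotone spline approximation of the quantile function Q(p).
Q = PchipInterpolator(p, data)

# Generate variates: uniform u -> Q(u) (kept inside the fitted range).
u = rng.uniform(p[0], p[-1], size=10_000)
simulated = Q(u)
print("sample mean/var:   ", data.mean().round(3), data.var().round(3))
print("simulated mean/var:", simulated.mean().round(3), simulated.var().round(3))
```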
Huzzey, J M; Fregonesi, J A; von Keyserlingk, M A G; Weary, D M
2013-01-01
Factors affecting sampling behavior of cattle are poorly understood. The objectives of this study were to measure the effects of variation in feed quality on the feeding behavior of Holstein dairy heifers. Thirty-two heifers were housed in 4 groups of 8. Each group pen had 8 distinct feeding stations. The total mixed ration (TMR) provided was low energy (TMR-L), moderate energy (TMR-M), or high energy (TMR-H). During trial 1 (d 1 to 8), heifers were offered a uniform baseline diet (TMR-M in all 8 feeding stations) interspersed with 2 uniform test diets on d 3 and 6 (TMR-L or TMR-H in all 8 feeding stations). During trial 2 (d 9 to 17), heifers were offered a nonuniform baseline diet (7 feeding stations with TMR-L and 1 feeding station with TMR-H) interspersed with 3 uniform test diets on d 11, 14, and 17 (TMR-L, TMR-M, or TMR-H in all 8 feeding stations). Heifers were observed in pairs (n=16) for 15 min following delivery of fresh feed. Relative to the uniform baseline period of trial 1, 31% fewer switches occurred between feeding stations when heifers were offered TMR-H and 51% more switches when offered TMR-L. Relative to the nonuniform baseline of trial 2, 49% fewer, 27% fewer, and 25% more switches occurred during the TMR-H, TMR-M, and TMR-L treatments, respectively. In general, when heifers were offered a diet that was lower in energy density than that previously experienced, they spent less time at each feeding station; when offered a higher energy diet, they spent more time at each feeding station. The greater the contrast in energy density between the test and baseline diets, the greater the change in the behavioral response. Competitive interactions at the feed bunk were most frequent when TMR quality varied among the 8 feeding stations; during the nonuniform baseline period of trial 2, the number of competitive interactions was over 3.5 times higher than during all uniform dietary treatments. In summary, dairy heifers sample feed quality by changing feeding locations at the feed bunk, and this sampling behavior is affected by variation in diet quality along the feed bunk and across days. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.