Are shear force methods adequately reported?
Holman, Benjamin W B; Fowler, Stephanie M; Hopkins, David L
2016-09-01
This study aimed to determine the detail to which shear force (SF) protocols and methods have been reported in the scientific literature between 2009 and 2015. Articles (n=734) published in peer-reviewed animal and food science journals, limited to only those testing the SF of unprocessed and non-fabricated mammal meats, were evaluated. It was found that most of these SF articles originated in Europe (35.3%), investigated bovine species (49.0%), measured m. longissimus samples (55.2%), and used tenderometers manufactured by Instron (31.2%) equipped with Warner-Bratzler blades (68.8%). SF samples were also predominantly thawed prior to cooking (37.1%) and cooked sous vide, using a water bath (50.5%). Information pertaining to blade crosshead speed (47.5%), recorded SF resistance (56.7%), muscle fibre orientation when tested (49.2%), sub-section or core dimension (21.8%), end-point temperature (29.3%), and other factors contributing to SF variation was often omitted. This widespread failure to report basic methodological detail diminishes repeatability and accurate SF interpretation, and must therefore be rectified. PMID:27107727
AREA OVERLAP METHOD FOR DETERMINING ADEQUATE CHROMATOGRAPHIC RESOLUTION
The Area Overlap method for evaluating analytical chromatograms is evaluated and compared with the Depth-of-the-Valley, IUPAC and Purnell criteria. The method is a resolution criterion based on the fraction of area contributed by an adjacent, overlapping peak. It accounts for bot...
Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios
NASA Technical Reports Server (NTRS)
Juarez, Alfredo; Harper, Susana A.
2016-01-01
The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for the ignition susceptibility driven by exposure to external heat in an enriched oxygen environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes for inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during test, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper. A proposed improvement to the standard test protocol as a result of this evaluation is also presented. Execution of the final proposed improved test protocol outlines an incremental step method of determining optimal conditions using increased sample sizes while considering test system safety limits. The proposed improved test method increases confidence in results obtained by utilizing the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.
A method for determining adequate resistance form of complete cast crown preparations.
Weed, R M; Baez, R J
1984-09-01
A diagram with various degrees of occlusal convergence, which takes into consideration the length and diameter of complete crown preparations, was designed as a guide to assist the dentist to obtain adequate resistance form. To test the validity of the diagram, five groups of complete cast crown stainless steel dies were prepared (3.5 mm long, occlusal convergence 10, 13, 16, 19, and 22 degrees). Gold copings were cast for each of the 50 preparations. Displacement force was applied to the casting perpendicularly to a simulated 30-degree cuspal incline until the casting was displaced. Castings were deformed at margins except for the 22-degree group. Castings from this group were displaced without deformation, and it was concluded that there was a lack of adequate resistance form as predicted by the diagram. The hypothesis that the diagram could be used to predict adequate or inadequate resistance form was confirmed by this study. PMID:6384470
ERIC Educational Resources Information Center
Smith, Leigh K.; Gess-Newsome, Julie
2004-01-01
Despite the apparent lack of universally accepted goals or objectives for elementary science methods courses, teacher educators nationally are autonomously designing these classes to prepare prospective teachers to teach science. It is unclear, however, whether science methods courses are preparing teachers to teach science effectively or to…
Are adequate methods available to detect protist parasites on fresh produce?
Technology Transfer Automated Retrieval System (TEKTRAN)
Human parasitic protists such as Cryptosporidium, Giardia and microsporidia contaminate a variety of fresh produce worldwide. Existing detection methods lack sensitivity and specificity for most foodborne parasites. Furthermore, detection has been problematic because these parasites adhere tenacious...
NASA Astrophysics Data System (ADS)
Bieg, Bohdan; Chrzanowski, Janusz; Kravtsov, Yury A.; Orsitto, Francesco
Basic principles and recent findings of the quasi-isotropic approximation (QIA) of a geometrical optics method are presented in a compact manner. QIA was developed in 1969 to describe electromagnetic waves in weakly anisotropic media. QIA represents the wave field as a power series in two small parameters, one of which is the traditional geometrical optics parameter, equal to the ratio of the wavelength to the plasma characteristic scale, and the other is the largest component of the anisotropy tensor. As a result, QIA is ideally suited to tokamak polarimetry/interferometry systems in the submillimeter range, where plasma manifests the properties of a weakly anisotropic medium.
Random Walk Method for Potential Problems
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Raju, I. S.
2002-01-01
A local Random Walk Method (RWM) for potential problems governed by Laplace's and Poisson's equations is developed for two- and three-dimensional problems. The RWM is implemented and demonstrated in a multiprocessor parallel environment on a Beowulf cluster of computers. A speed gain of 16 is achieved as the number of processors is increased from 1 to 23.
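The probabilistic connection that random walk methods of this kind exploit can be illustrated with a minimal sketch: the solution of Laplace's equation at an interior point equals the expected boundary value reached by a symmetric random walk started there. The grid setup and function name below are illustrative assumptions, not the paper's implementation.

```python
import random

def laplace_random_walk(x0, y0, nx, ny, boundary, n_walks, rng):
    """Estimate the solution of Laplace's equation at interior grid point
    (x0, y0) of an nx-by-ny grid by averaging the boundary values reached
    by simple random walks (the classic probabilistic interpretation)."""
    total = 0.0
    for _ in range(n_walks):
        x, y = x0, y0
        while 0 < x < nx and 0 < y < ny:  # walk until the boundary is hit
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        total += boundary(x, y)
    return total / n_walks

# Sanity check: with constant boundary data the estimate is exact.
rng = random.Random(0)
print(laplace_random_walk(2, 2, 4, 4, lambda x, y: 1.0, 50, rng))  # → 1.0
```

Because each walk is independent, the method parallelizes trivially, which is consistent with the Beowulf-cluster speedup reported above.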
Individual Differences Methods for Randomized Experiments
Tucker-Drob, Elliot M.
2011-01-01
Experiments allow researchers to randomly vary the key manipulation, the instruments of measurement, and the sequences of the measurements and manipulations across participants. To date, however, the advantages of randomized experiments to manipulate both the aspects of interest and the aspects that threaten internal validity have been primarily used to make inferences about the average causal effect of the experimental manipulation. This paper introduces a general framework for analyzing experimental data in order to make inferences about individual differences in causal effects. Approaches to analyzing the data produced by a number of classical designs, and two more novel designs, are discussed. Simulations highlight the strengths and weaknesses of the data produced by each design with respect to internal validity. Results indicate that, although the data produced by standard designs can be used to produce accurate estimates of average causal effects of experimental manipulations, more elaborate designs are often necessary for accurate inferences with respect to individual differences in causal effects. The methods described here can be diversely applied by researchers interested in determining the extent to which individuals respond differentially to an experimental manipulation or treatment, and how differential responsiveness relates to individual participant characteristics. PMID:21744970
Valente, Marta Sofia; Pedro, Paulo; Alonso, M Carmen; Borrego, Juan J; Dionísio, Lídia
2010-03-01
Monitoring the microbiological quality of water used for recreational activities is very important to human public health. Although the sanitary quality of recreational marine waters can be evaluated by standard methods, these are time-consuming and need confirmation. For these reasons, faster and more sensitive methods, such as defined substrate-based technology, have been developed. In the present work, we compared the standard membrane filtration method, using Tergitol-TTC agar for total coliforms and Escherichia coli and Slanetz and Bartley agar for enterococci, with the IDEXX defined substrate technology for these faecal pollution indicators to determine the microbiological quality of natural recreational waters. The ISO 17994:2004 standard was used to compare these methods. The IDEXX test for total coliforms and E. coli, Colilert, showed higher values than those obtained by the standard method. The Enterolert test, for the enumeration of enterococci, showed lower values when compared with the standard method. It may be concluded that more studies to evaluate the precision and accuracy of these rapid tests are required before they can be applied for routine monitoring of marine and freshwater recreational bathing areas. The main advantages of these methods are that they are more specific, feasible and simpler than the standard methodology. PMID:20009243
Convergence of a random walk method for the Burgers equation
Roberts, S.
1985-10-01
In this paper we consider a random walk algorithm for the solution of Burgers' equation. The algorithm uses the method of fractional steps. The non-linear advection term of the equation is solved by advecting ''fluid'' particles in a velocity field induced by the particles. The diffusion term of the equation is approximated by adding an appropriate random perturbation to the positions of the particles. Though the algorithm is inefficient as a method for solving Burgers' equation, it does model a similar method, the random vortex method, which has been used extensively to solve the incompressible Navier-Stokes equations. The purpose of this paper is to demonstrate the strong convergence of our random walk method and so provide a model for the proof of convergence for more complex random walk algorithms; for instance, the random vortex method without boundaries.
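The fractional-step scheme described above (particle advection followed by a random diffusive perturbation) can be sketched as follows. The particle velocity rule used here, taking u at a particle as the cumulative strength of the particles to its left, is a simplified assumption for a monotone profile, not the paper's exact construction.

```python
import random

def burgers_fractional_step(x, s, nu, dt, rng):
    """One fractional step of a random walk method for Burgers' equation
    u_t + u*u_x = nu*u_xx (illustrative sketch).  Particles at positions x
    carry strengths s; u at particle i is taken as the sum of strengths of
    particles to its left.  Advection moves each particle with its local u;
    diffusion adds a Gaussian perturbation of variance 2*nu*dt."""
    u = [sum(sj for xj, sj in zip(x, s) if xj < xi) for xi in x]
    return [xi + ui * dt + rng.gauss(0.0, (2 * nu * dt) ** 0.5)
            for xi, ui in zip(x, u)]
```

Repeating this step advances the particle ensemble in time; the random vortex method mentioned above follows the same advect-then-perturb pattern with vortex elements instead of walkers.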
Effect of packing method on the randomness of disc packings
NASA Astrophysics Data System (ADS)
Zhang, Z. P.; Yu, A. B.; Oakeshott, R. B. S.
1996-06-01
The randomness of disc packings generated by random sequential adsorption (RSA), random packing under gravity (RPG) and Mason packing (MP), which gives a packing density close to that of the RSA packing, has been analysed based on the Delaunay tessellation and is evaluated at two levels: the randomness at the individual subunit level, which relates to the construction of a triangle from a given edge length distribution, and the randomness at the network level, which relates to the connection between triangles from a given triangle frequency distribution. The Delaunay tessellation itself is also analysed and its almost perfect randomness at the two levels is demonstrated, which verifies the proposed approach and provides a random reference system for the present analysis. It is found that (i) the construction of a triangle subunit is not random for the RSA, MP and RPG packings, with the degree of randomness decreasing from the RSA to the MP and then to the RPG packing; and (ii) the connection of triangular subunits in the network is almost perfectly random for the RSA packing, acceptable for the MP packing and poor for the RPG packing. The packing method is thus an important factor governing the randomness of disc packings.
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
Tabu search method with random moves for globally optimal design
NASA Astrophysics Data System (ADS)
Hu, Nanfang
1992-09-01
Optimum engineering design problems are usually formulated as non-convex optimization problems of continuous variables. Because of the absence of convexity structure, they can have multiple minima, and global optimization becomes difficult. Traditional methods of optimization, such as penalty methods, can often be trapped at a local optimum. The tabu search method with random moves is introduced to solve these problems approximately. Its reliability and efficiency are examined with the help of standard test functions. Analysis of the implementations shows that this method is easy to use and requires no derivative information. It outperforms the random search method and a composite genetic algorithm. In particular, it is applied to minimum weight design examples of a three-bar truss, coil springs, a Z-section and a channel section. For the channel section, the optimal design using the tabu search method with random moves saved 26.14 percent over the weight of the SUMT method.
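The core idea, random neighbourhood moves combined with a short-term memory that discourages revisiting recent points, can be sketched as follows. This is a minimal illustration of the general tabu search scheme, not the author's exact algorithm; all names and parameters are assumptions.

```python
import random

def tabu_search(f, x0, step, n_moves, tabu_len, iters, rng):
    """Tabu search with random moves (minimal sketch).  From the current
    point, n_moves random neighbours are generated; the best non-tabu
    neighbour is accepted even if it is worse, which lets the search escape
    local minima, while a short tabu list discourages revisiting recent
    points.  No derivative information is used."""
    x, best, best_f = x0, x0, f(x0)
    tabu = [x0]
    for _ in range(iters):
        moves = [tuple(xi + rng.uniform(-step, step) for xi in x)
                 for _ in range(n_moves)]
        candidates = [m for m in moves if m not in tabu] or moves
        x = min(candidates, key=f)   # accept best neighbour, even if worse
        tabu.append(x)
        if len(tabu) > tabu_len:
            tabu.pop(0)              # forget the oldest tabu entry
        if f(x) < best_f:
            best, best_f = x, f(x)
    return best, best_f

# Usage on a simple two-variable test function with minimum at (1, 2):
best, val = tabu_search(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
                        (0.0, 0.0), 0.5, 10, 20, 200, random.Random(0))
```

Accepting the best neighbour even when it worsens the objective is what distinguishes tabu search from plain descent and gives it a chance at the multiple-minima problems described above.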
Genetic algorithms as global random search methods
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.
1995-01-01
Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
Random errors in interferometry with the least-squares method
Wang, Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers that use the least-squares method when random noises are present. Two types of random noise are considered: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one for the case when only intensity noise is present, and the other for the case when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and the standard deviations of the simulated measurements have been compared with those derived theoretically. The relationships between random error and the wavelength of the light source, and between random error and the amplitude of the interference fringe, are also discussed.
NASA Technical Reports Server (NTRS)
Parrott, T. L.; Smith, C. D.
1977-01-01
The effect of random and systematic errors associated with the measurement of normal incidence acoustic impedance in a zero-mean-flow environment was investigated by the transmission line method. The influence of random measurement errors in the reflection coefficients and pressure minima positions was investigated by computing fractional standard deviations of the normalized impedance. Both the standard techniques of random process theory and a simplified technique were used. Over a wavelength range of 68 to 10 cm random measurement errors in the reflection coefficients and pressure minima positions could be described adequately by normal probability distributions with standard deviations of 0.001 and 0.0098 cm, respectively. An error propagation technique based on the observed concentration of the probability density functions was found to give essentially the same results but with a computation time of about 1 percent of that required for the standard technique. The results suggest that careful experimental design reduces the effect of random measurement errors to insignificant levels for moderate ranges of test specimen impedance component magnitudes. Most of the observed random scatter can be attributed to lack of control by the mounting arrangement over mechanical boundary conditions of the test sample.
Buiten, Maurits S; van der Heijden, Aafke C; Schalij, Martin J; van Erven, Lieselot
2015-05-01
Currently several extraction tools are available to allow safe and successful transvenous lead extraction (TLE) of pacemaker and ICD leads; however, no directives exist to guide physicians in their choice of extraction tools and approaches. The aim of the current review is to provide an overview of the success and complication rates of the different extraction methods and tools available. A comprehensive search of all published literature was conducted in the databases of PubMed, Embase, Web of Science, and Central. Included papers were original articles describing a specific method of TLE and the corresponding success rates in at least 50 patients. Fifty-three studies were included; the majority (56%) utilized 2 (1-4) different venous extraction approaches (subclavian and femoral), and the median number of extraction tools used was 3 (1-6). A stepwise approach was utilized in the majority of the studies, starting with simple traction, which resulted in successful TLE of 7-85% of the leads. When applicable, the procedure was continued with non-powered tools, resulting in successful extraction of 34-87% of the leads. Subsequently, powered tools were applied, whereby success rates further increased to 74-100%. The final step in TLE usually utilized a femoral snare, leading to an overall TLE success rate of 96-100%. The median procedure-related mortality and major complication rates described were, respectively, 0% (0-3%) and 1% (0-7%) per patient. In conclusion, a stepwise extraction approach can result in clinically successful TLE in up to 100% of the leads with a relatively low risk of procedure-related mortality and complications. PMID:25687745
Randomized methods in lossless compression of hyperspectral data
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Pauca, V. Paúl; Plemmons, Robert
2013-01-01
We evaluate recently developed randomized matrix decomposition methods for fast lossless compression and reconstruction of hyperspectral imaging (HSI) data. Simple random projection methods have been shown to be effective for lossy compression without severely affecting the performance of object identification and classification. We build upon these methods to develop a new double-random projection method that may enable security in data transmission of compressed data. For HSI data, the distribution of elements in the resulting residual matrix, i.e., the original data minus its low-rank representation, exhibits a low entropy relative to the original data, which favors a high compression ratio. We show both theoretically and empirically that randomized methods combined with residual-coding algorithms can lead to effective lossless compression of HSI data. Numerical tests on real large-scale HSI data show the promise of this approach. In addition, we show that randomized techniques are applicable for encoding on resource-constrained on-board sensor systems, where the core matrix-vector multiplications can be easily implemented on computing platforms such as graphics processing units or field-programmable gate arrays.
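The core step shared by these randomized decompositions is sketching the data matrix by multiplying it with a random Gaussian test matrix; the low-rank representation and residual coding are then built from the sketch. A toy pure-Python illustration of that step follows (names are assumptions; real implementations use optimized linear algebra).

```python
import random

def matmul(A, B):
    """Naive matrix product of two lists-of-lists (illustration only)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def random_projection(A, k, rng):
    """Sketch the m-by-n matrix A down to m-by-k by right-multiplying with a
    random Gaussian test matrix Omega, the core randomized-compression step."""
    n = len(A[0])
    Omega = [[rng.gauss(0.0, 1.0) for _ in range(k)] for _ in range(n)]
    return matmul(A, Omega)
```

Because the sketch Y = A·Omega captures the dominant range of A with high probability when k slightly exceeds the numerical rank, the residual A minus its low-rank reconstruction is small and low-entropy, which is what makes the residual-coding stage effective.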
A random spatial sampling method in a rural developing nation
2014-01-01
Background Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. Methods We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. Results This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, use and translation of satellite imagery between GIS and GPS, and household selection at each survey point in varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Conclusions Random spatial sampling methodology can be used to survey a random sample of population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available. PMID:24716473
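The stratified spatial selection described in the Methods can be sketched as follows: partition the study area into a grid of strata and draw uniform random survey points within each. This is a simplified rectangle-based assumption; the study itself delineated boundaries in GIS and navigated to the points with GPS.

```python
import random

def stratified_spatial_sample(lon_min, lon_max, lat_min, lat_max,
                              n_rows, n_cols, per_stratum, rng):
    """Stratified random spatial sample: split the study rectangle into an
    n_rows-by-n_cols grid of strata and draw per_stratum uniform random
    points in each stratum (a simplified sketch of the GIS/GPS workflow)."""
    dlon = (lon_max - lon_min) / n_cols
    dlat = (lat_max - lat_min) / n_rows
    points = []
    for r in range(n_rows):
        for c in range(n_cols):
            for _ in range(per_stratum):
                lon = rng.uniform(lon_min + c * dlon, lon_min + (c + 1) * dlon)
                lat = rng.uniform(lat_min + r * dlat, lat_min + (r + 1) * dlat)
                points.append((lon, lat))
    return points
```

Stratifying guarantees geographic spread across the region, which is the property the authors contrast with cluster sampling's concentration of households.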
Accelerated Mini-batch Randomized Block Coordinate Descent Method
Zhao, Tuo; Yu, Mo; Wang, Yiming; Arora, Raman; Liu, Han
2014-01-01
We consider regularized empirical risk minimization problems. In particular, we minimize the sum of a smooth empirical risk function and a nonsmooth regularization function. When the regularization function is block separable, we can solve the minimization problems in a randomized block coordinate descent (RBCD) manner. Existing RBCD methods usually decrease the objective value by exploiting the partial gradient of a randomly selected block of coordinates in each iteration. Thus they need all data to be accessible so that the partial gradient of the selected block can be exactly obtained. However, such a “batch” setting may be computationally expensive in practice. In this paper, we propose a mini-batch randomized block coordinate descent (MRBCD) method, which estimates the partial gradient of the selected block based on a mini-batch of randomly sampled data in each iteration. We further accelerate the MRBCD method by exploiting the semi-stochastic optimization scheme, which effectively reduces the variance of the partial gradient estimators. Theoretically, we show that for strongly convex functions, the MRBCD method attains lower overall iteration complexity than existing RBCD methods. As an application, we further trim the MRBCD method to solve regularized sparse learning problems. Our numerical experiments show that the MRBCD method naturally exploits the sparsity structure and achieves better computational performance than existing methods. PMID:25620860
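The basic MRBCD iteration, pick a random coordinate block, estimate its partial gradient from a random mini-batch, and update only that block, can be sketched for an unregularized least-squares risk as follows. This is an illustrative simplification (no regularizer, no variance reduction), not the authors' exact algorithm; it assumes the feature count is divisible by the number of blocks.

```python
import random

def mrbcd(X, y, n_blocks, batch_size, lr, iters, rng):
    """Mini-batch randomized block coordinate descent sketch for the least
    squares risk f(w) = (1/2n) * sum_i (x_i . w - y_i)^2.  Each iteration
    picks one random block of coordinates and estimates its partial gradient
    from a random mini-batch of data instead of the full batch."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    blk = d // n_blocks
    for _ in range(iters):
        b = rng.randrange(n_blocks)                            # random block
        batch = [rng.randrange(n) for _ in range(batch_size)]  # random mini-batch
        for j in range(b * blk, (b + 1) * blk):
            g = sum((sum(X[i][k] * w[k] for k in range(d)) - y[i]) * X[i][j]
                    for i in batch) / batch_size
            w[j] -= lr * g                                     # update block coords
    return w
```

The semi-stochastic acceleration in the paper replaces this plain mini-batch gradient estimate with a variance-reduced one, which is what yields the improved iteration complexity.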
Methods for analyzing cost effectiveness data from cluster randomized trials
Bachmann, Max O; Fairall, Lara; Clark, Allan; Mugford, Miranda
2007-01-01
Background Measurement of individuals' costs and outcomes in randomized trials allows uncertainty about cost effectiveness to be quantified. Uncertainty is expressed as probabilities that an intervention is cost effective, and confidence intervals of incremental cost effectiveness ratios. Randomizing clusters instead of individuals tends to increase uncertainty but such data are often analysed incorrectly in published studies. Methods We used data from a cluster randomized trial to demonstrate five appropriate analytic methods: 1) joint modeling of costs and effects with two-stage non-parametric bootstrap sampling of clusters then individuals, 2) joint modeling of costs and effects with Bayesian hierarchical models and 3) linear regression of net benefits at different willingness to pay levels using a) least squares regression with Huber-White robust adjustment of errors, b) a least squares hierarchical model and c) a Bayesian hierarchical model. Results All five methods produced similar results, with greater uncertainty than if cluster randomization was not accounted for. Conclusion Cost effectiveness analyses alongside cluster randomized trials need to account for study design. Several theoretically coherent methods can be implemented with common statistical software. PMID:17822546
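The two-stage non-parametric bootstrap listed as method 1 can be sketched in a few lines: resample clusters with replacement, then resample individuals with replacement within each sampled cluster (names are illustrative).

```python
import random

def two_stage_bootstrap(clusters, rng):
    """One bootstrap replicate for cluster randomized data: first resample
    clusters with replacement, then resample individuals with replacement
    within each sampled cluster.  Resampling at both stages preserves the
    between-cluster variability that drives the extra uncertainty."""
    sampled_clusters = [rng.choice(clusters) for _ in clusters]
    return [[rng.choice(cluster) for _ in cluster] for cluster in sampled_clusters]
```

Repeating this to generate many replicates of the incremental cost effectiveness ratio yields confidence intervals that, unlike an individual-level bootstrap, account for the clustered design.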
Multi-Agent Methods for the Configuration of Random Nanocomputers
NASA Technical Reports Server (NTRS)
Lawson, John W.
2004-01-01
As computational devices continue to shrink, the cost of manufacturing such devices is expected to grow exponentially. One alternative to the costly, detailed design and assembly of conventional computers is to place the nano-electronic components randomly on a chip. The price for such a trivial assembly process is that the resulting chip would not be programmable by conventional means. In this work, we show that such random nanocomputers can be adaptively programmed using multi-agent methods. This is accomplished through the optimization of an associated high dimensional error function. By representing each of the independent variables as a reinforcement learning agent, we are able to achieve convergence much faster than with other methods, including simulated annealing. Standard combinational logic circuits such as adders and multipliers are implemented in a straightforward manner. In addition, we show that the intrinsic flexibility of these adaptive methods allows the random computers to be reconfigured easily, making them reusable. Recovery from faults is also demonstrated.
Pseudo Random Classification of Circulation Patterns - Comparison to Deliberate Methods
NASA Astrophysics Data System (ADS)
Philipp, Andreas
2010-05-01
Classification of circulation patterns, e.g. of sea level pressure patterns, can be done by many different methods, e.g. by cluster analysis, methods based on eigenvalues, or those based on the leader algorithm like the Lund classification. However, none of these methods can give clear advice on the problem of the appropriate number of classes, and even when the number is decided, different methods lead to different results. Considerable effort is made to find methods leading to indisputable results. However, doubts about the classifiability of tropospheric circulation states have been raised recently, and the existence of natural groups of similar patterns within the circulation data, which might be caused by circulation regimes, is questionable. If those groups or clusters exist, methods which are designed to find them, in particular cluster analysis, should be superior to classification schemes based on a pseudo-random definition of classes. In order to test this assumption, a classification method called "random centroids" has been designed, choosing for each class one single circulation pattern using a random number generator and assigning all remaining patterns according to the minimum Euclidean distance. Evaluation metrics like the "explained cluster variance" for pressure, temperature and precipitation are calculated in order to compare these pseudo-random classifications to classifications provided by the cost733cat dataset, which includes many different classification catalogs for various methods (COST Action 733 "Harmonisation and Applications of Weather Type Classifications for European regions"). By running the randomcent method 1000 times, the empirical probability density function of the evaluation metrics can be established, providing information about the probability for the established deliberate methods to be better than random classifications. The results show that most of the classifications fail to exceed the 95th percentile of the empirical probability
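The "random centroids" scheme described above is simple enough to sketch directly: draw k patterns at random as class centroids and assign every pattern to its nearest centroid by Euclidean distance (function and parameter names are assumptions, not taken from the paper).

```python
import random

def random_centroid_classification(patterns, k, seed=None):
    """'Random centroids' classification: choose k patterns at random as
    class centroids, then assign every pattern to the nearest centroid by
    Euclidean distance.  Unlike cluster analysis, no attempt is made to
    find natural groups; the centroids are purely random."""
    rng = random.Random(seed)
    centroids = rng.sample(patterns, k)

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    return [min(range(k), key=lambda c: dist2(p, centroids[c]))
            for p in patterns]
```

Running this many times with different seeds and scoring each run with a metric such as explained cluster variance gives the empirical reference distribution against which the deliberate classification methods are compared.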
Efficient stochastic Galerkin methods for random diffusion equations
Xiu, Dongbin; Shen, Jie
2009-02-01
We discuss in this paper efficient solvers for stochastic diffusion equations in random media. We employ generalized polynomial chaos (gPC) expansion to express the solution in a convergent series and obtain a set of deterministic equations for the expansion coefficients by Galerkin projection. Although the resulting system of diffusion equations is coupled, we show that one can construct fast numerical methods to solve them in a decoupled fashion. The methods are based on separation of the diagonal terms and off-diagonal terms in the matrix of the Galerkin system. We examine properties of this matrix and show that the proposed method is unconditionally stable for unsteady problems and convergent for steady problems, with a convergence rate independent of the discretization parameters. Numerical examples are provided, for both steady and unsteady random diffusions, to support the analysis.
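The gPC construction summarized above can be written schematically as follows (the notation is a standard sketch assumed here, not reproduced from the paper):

```latex
% gPC expansion of the random solution in orthogonal polynomials \Phi_k(\xi):
u(x,t,\xi) \approx \sum_{k=0}^{P} u_k(x,t)\,\Phi_k(\xi),
\qquad \langle \Phi_j \Phi_k \rangle = \delta_{jk}\,\langle \Phi_j^2 \rangle .
% Galerkin projection of the random diffusion equation
% u_t = \nabla\cdot\big(a(x,\xi)\,\nabla u\big) onto each \Phi_j yields the
% coupled deterministic system for the coefficients u_k:
\langle \Phi_j^2 \rangle\,\partial_t u_j
  = \sum_{k=0}^{P} \nabla\cdot\big( \langle a\,\Phi_j\Phi_k \rangle\,\nabla u_k \big),
\qquad j = 0,\dots,P .
```

The matrix with entries ⟨a Φ_j Φ_k⟩ is the Galerkin system matrix whose diagonal/off-diagonal splitting underlies the decoupled solvers described in the abstract.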
Krusche, Adele; Rudolf von Rohr, Isabelle; Muse, Kate; Duggan, Danielle; Crane, Catherine; Williams, J. Mark G.
2014-01-01
Background Randomized controlled trials (RCTs) are widely accepted as being the most efficient way of investigating the efficacy of psychological therapies. However, researchers conducting RCTs commonly report difficulties recruiting an adequate sample within planned timescales. In an effort to overcome recruitment difficulties, researchers often are forced to expand their recruitment criteria or extend the recruitment phase, thus increasing costs and delaying publication of results. Research investigating the effectiveness of recruitment strategies is limited and trials often fail to report sufficient details about the recruitment sources and resources utilised. Purpose We examined the efficacy of strategies implemented during the Staying Well after Depression RCT in Oxford to recruit participants with a history of recurrent depression. Methods We describe eight recruitment methods utilised and two further sources not initiated by the research team and examine their efficacy in terms of (i) the return, including the number of potential participants who contacted the trial and the number who were randomized into the trial, (ii) cost-effectiveness, comprising direct financial cost and manpower for initial contacts and randomized participants, and (iii) comparison of sociodemographic characteristics of individuals recruited from different sources. Results Poster advertising, web-based advertising and mental health worker referrals were the cheapest methods per randomized participant; however, the ratio of randomized participants to initial contacts differed markedly per source. Advertising online, via posters and on a local radio station were the most cost-effective recruitment methods for soliciting participants who subsequently were randomized into the trial. Advertising across many sources (saturation) was found to be important. Limitations It may not be feasible to employ all the recruitment methods used in this trial to obtain participation from other
Elongation method for electronic structure calculations of random DNA sequences.
Orimoto, Yuuichi; Liu, Kai; Aoki, Yuriko
2015-10-30
We applied the ab initio order-N elongation (ELG) method to calculate electronic structures of various deoxyribonucleic acid (DNA) models, with the aim of testing the method's potential for building a database of DNA electronic structures. The ELG method mimics polymerization reactions on a computer and meets the requirements of linear-scaling computational efficiency and high accuracy, even for huge systems. As a benchmark test, we applied the method to calculations of various types of randomly sequenced A- and B-type DNA models with and without counterions. In each case, the ELG method maintained high accuracy, with small errors in energy on the order of 10^(-8) hartree/atom compared with conventional calculations. We demonstrate that the ELG method can provide valuable information such as stabilization energies and local densities of states for each DNA sequence. In addition, we discuss the "restarting" feature of the ELG method for constructing a database that exhaustively covers DNA species. PMID:26337429
NASA Astrophysics Data System (ADS)
Maziero, Jonas
2015-12-01
The numerical generation of random quantum states (RQS) is an important procedure for investigations in quantum information science. Here, we review some methods that may be used for performing that task. We start by presenting a simple procedure for generating random state vectors, for which the main tool is the random sampling of unbiased discrete probability distributions (DPD). Afterwards, the creation of random density matrices is addressed. In this context, we first present the standard method, which consists in using the spectral decomposition of a quantum state to obtain RQS from random DPDs and random unitary matrices. Next, the Bloch vector parametrization method is described. This approach, despite being useful in several instances, is not in general convenient for RQS generation. In the last part of the article, we consider the overparametrized method (OPM) and the related Ginibre and Bures techniques. The OPM can be used to create random positive semidefinite matrices with unit trace from randomly produced general complex matrices in a simple way that is friendly for numerical implementations. We consider a physically relevant issue related to the possible domains that may be used for the real and imaginary parts of the elements of such general complex matrices. Finally, we note an excessively fast concentration of measure in the quantum-state space that appears with this parametrization.
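Of the techniques surveyed above, the Ginibre construction is the simplest to implement: draw a matrix with i.i.d. standard complex Gaussian entries and normalize. A minimal pure-Python sketch follows (names are illustrative; a real implementation would use a linear-algebra library):

```python
import random

def ginibre_random_state(d, seed=None):
    """Draw a d x d Ginibre matrix G with i.i.d. standard complex
    Gaussian entries and return rho = G G^dagger / Tr(G G^dagger),
    a positive semidefinite matrix with unit trace (a density matrix)."""
    rng = random.Random(seed)
    G = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(d)]
         for _ in range(d)]
    # (G G^dagger)_{ij} = sum_k G_{ik} * conj(G_{jk})
    M = [[sum(G[i][k] * G[j][k].conjugate() for k in range(d))
          for j in range(d)] for i in range(d)]
    trace = sum(M[i][i].real for i in range(d))
    return [[M[i][j] / trace for j in range(d)] for i in range(d)]
```

By construction the result is Hermitian with nonnegative diagonal and trace one, which is easy to verify numerically.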
Random-breakage mapping method applied to human DNA sequences
NASA Technical Reports Server (NTRS)
Lobrich, M.; Rydberg, B.; Cooper, P. K.; Chatterjee, A. (Principal Investigator)
1996-01-01
The random-breakage mapping method [Game et al. (1990) Nucleic Acids Res., 18, 4453-4461] was applied to DNA sequences in human fibroblasts. The methodology involves NotI restriction endonuclease digestion of DNA from irradiated cells, followed by pulsed-field gel electrophoresis, Southern blotting and hybridization with DNA probes recognizing the single-copy sequences of interest. The Southern blots show a band for the unbroken restriction fragments and a smear below this band due to radiation-induced random breaks. This smear pattern contains two discontinuities in intensity at positions that correspond to the distance of the hybridization site to each end of the restriction fragment. By analyzing the positions of these discontinuities we confirmed the previously mapped position of the probe DXS1327 within a NotI fragment on the X chromosome, thus demonstrating the validity of the technique. We were also able to position the probes D21S1 and D21S15 with respect to the ends of their corresponding NotI fragments on chromosome 21. A third chromosome 21 probe, D21S11, had previously been reported to be close to D21S1, although an uncertainty about a second possible location existed. Since both probes D21S1 and D21S11 hybridized to a single NotI fragment and yielded a similar smear pattern, this uncertainty is removed by the random-breakage mapping method.
Theory of optimum radio reception methods in random noise
NASA Astrophysics Data System (ADS)
Gutkin, L. S.
1982-09-01
The theory of optimum methods for the reception of signals against a background of random noise, widely used in the development of radioelectronic systems and devices based on the reception and transmission of information (radar, radio control, radio communications, radio telemetry, radio astronomy, television, and other systems), as well as electroacoustical and wire communication systems, is presented. Optimum linear and nonlinear filtration, binary and complex signal detection and discrimination, estimation of signal parameters, receiver synthesis for incomplete a priori data, special features of synthesis with respect to certain quality indicators, and other problems are examined.
Finite amplitude method for the quasiparticle random-phase approximation
Avogadro, Paolo; Nakatsukasa, Takashi
2011-07-15
We present the finite amplitude method (FAM), originally proposed in Ref. [17], for superfluid systems. A Hartree-Fock-Bogoliubov code may be transformed into a code of the quasiparticle-random-phase approximation (QRPA) with simple modifications. This technique has advantages over the conventional QRPA calculations, such as coding feasibility and computational cost. We perform the fully self-consistent linear-response calculation for the spherical neutron-rich nucleus {sup 174}Sn, modifying the hfbrad code, to demonstrate the accuracy, feasibility, and usefulness of the FAM.
Searching method through biased random walks on complex networks.
Lee, Sungmin; Yook, Soon-Hyung; Kim, Yup
2009-07-01
Information search is closely related to the first-passage properties of a diffusing particle, and the physical properties of a diffusing particle are affected by the topological structure of the underlying network. Thus, the interplay between the dynamical process and the network topology is important for studying information search on complex networks. Designing an efficient method has been one of the main interests in information search, with reducing network traffic and decreasing search time the two essential factors. Here we propose an efficient method based on biased random walks. Numerical simulations show that the average search time of the suggested model is shorter than that of other well-known models. As a matter of practical interest, we demonstrate how the suggested model can be applied to a peer-to-peer system. PMID:19658839
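As an illustration of the idea, a degree-biased random walk can be sketched in a few lines of Python. The degree-power bias used here is an assumption chosen for illustration; the precise bias used in the paper may differ:

```python
import random

def biased_walk_search(adj, start, target, alpha=1.0,
                       max_steps=100000, seed=0):
    """At each node, move to a neighbor chosen with probability
    proportional to that neighbor's degree raised to the power alpha
    (alpha = 0 recovers the unbiased random walk).  Returns the number
    of steps until `target` is first reached, or None on timeout."""
    rng = random.Random(seed)
    node = start
    for step in range(1, max_steps + 1):
        nbrs = adj[node]
        weights = [len(adj[v]) ** alpha for v in nbrs]
        node = rng.choices(nbrs, weights=weights)[0]
        if node == target:
            return step
    return None
```

Running this with positive alpha funnels the walker toward hubs, which is the mechanism such search schemes exploit to shorten first-passage times on heterogeneous networks.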
Random projection and SVD methods in hyperspectral imaging
NASA Astrophysics Data System (ADS)
Zhang, Jiani
Hyperspectral imaging provides researchers with abundant information with which to study the characteristics of objects in a scene. Processing the massive hyperspectral imagery datasets in a way that efficiently provides useful information becomes an important issue. In this thesis, we consider methods which reduce the dimension of hyperspectral data while retaining as much useful information as possible. Traditional deterministic methods for low-rank approximation are not always adaptable to process huge datasets in an effective way, and therefore probabilistic methods are useful in dimension reduction of hyperspectral images. In this thesis, we begin by generally introducing the background and motivations of this work. Next, we summarize the preliminary knowledge and the applications of SVD and PCA. After these descriptions, we present a probabilistic method, randomized Singular Value Decomposition (rSVD), for the purposes of dimension reduction, compression, reconstruction, and classification of hyperspectral data. We discuss some variations of this method. These variations offer the opportunity to obtain a more accurate reconstruction of the matrix whose singular values decay gradually, to process matrices without target rank, and to obtain the rSVD with only one single pass over the original data. Moreover, we compare the method with Compressive-Projection Principle Component Analysis (CPPCA). From the numerical results, we can see that rSVD has better performance in compression and reconstruction than truncated SVD and CPPCA. We also apply rSVD to classification methods for the hyperspectral data provided by the National Geospatial-Intelligence Agency (NGA).
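The simplest probabilistic dimension-reduction step underlying the methods discussed above is a Gaussian random projection. A pure-Python sketch for illustration (rSVD proper additionally orthonormalizes the projected range and computes a small deterministic SVD on top of such a projection):

```python
import random

def random_projection(X, k, seed=0):
    """Johnson-Lindenstrauss-style sketch: map each d-dimensional row
    of X onto k random Gaussian directions, scaled by 1/sqrt(k) so
    that Euclidean distances are approximately preserved."""
    rng = random.Random(seed)
    d = len(X[0])
    R = [[rng.gauss(0, 1) / k ** 0.5 for _ in range(k)]
         for _ in range(d)]
    return [[sum(row[i] * R[i][j] for i in range(d)) for j in range(k)]
            for row in X]
```

For realistic hyperspectral cubes one would use a numerical library rather than nested lists, but the structure of the computation is the same: a single pass that multiplies the data by a random matrix.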
A random walk method for computing genetic location scores.
Lange, K; Sobel, E
1991-01-01
Calculation of location scores is one of the most computationally intensive tasks in modern genetics. Since these scores are crucial in placing disease loci on marker maps, there is ample incentive to pursue such calculations with large numbers of markers. However, in contrast to the simple, standardized pedigrees used in making marker maps, disease pedigrees are often graphically complex and sparsely phenotyped. These complications can present insuperable barriers to exact likelihood calculations with more than a few markers simultaneously. To overcome these barriers we introduce in the present paper a random walk method for computing approximate location scores with large numbers of biallelic markers. Sufficient mathematical theory is developed to explain the method. Feasibility is checked by small-scale simulations for two applications permitting exact calculation of location scores. PMID:1746559
PROSPECTIVE RANDOMIZED STUDY COMPARING TWO ANESTHETIC METHODS FOR SHOULDER SURGERY
Ikemoto, Roberto Yukio; Murachovsky, Joel; Prata Nascimento, Luis Gustavo; Bueno, Rogerio Serpone; Oliveira Almeida, Luiz Henrique; Strose, Eric; de Mello, Sérgio Cabral; Saletti, Deise
2015-01-01
Objective: To evaluate the efficacy of suprascapular nerve block in combination with infusion of anesthetic into the subacromial space, compared with interscalene block. Methods: Forty-five patients with small or medium-sized isolated supraspinatus tendon lesions who underwent arthroscopic repair were prospectively and comparatively evaluated through random assignation to three groups of 15, each with a different combination of anesthetic methods. The efficacy of postoperative analgesia was measured using the visual analogue scale for pain and the analgesic, anti-inflammatory and opioid drug consumption. Inhalation anesthetic consumption during surgery was also compared between the groups. Results: The statistical analysis did not find any statistically significant differences among the groups regarding anesthetic consumption during surgery or postoperative analgesic efficacy during the first 48 hours. Conclusion: Suprascapular nerve block with infusion of anesthetic into the subacromial space is an excellent alternative to interscalene block, particularly in hospitals in which an electrical nerve stimulating device is unavailable. PMID:27022569
Sequential methods for random-effects meta-analysis
Higgins, Julian P T; Whitehead, Anne; Simmonds, Mark
2011-01-01
Although meta-analyses are typically viewed as retrospective activities, they are increasingly being applied prospectively to provide up-to-date evidence on specific research questions. When meta-analyses are updated, account should be taken of the possibility of false-positive findings due to repeated significance tests. We discuss the use of sequential methods for meta-analyses that incorporate random effects to allow for heterogeneity across studies. We propose a method that uses an approximate semi-Bayes procedure to update evidence on the among-study variance, starting with an informative prior distribution that might be based on findings from previous meta-analyses. We compare our method with other approaches, including the traditional method of cumulative meta-analysis, in a simulation study and observe that it has Type I and Type II error rates close to the nominal level. We illustrate the method using an example in the treatment of bleeding peptic ulcers. Copyright © 2010 John Wiley & Sons, Ltd. PMID:21472757
Asbestos/NESHAP adequately wet guidance
Shafer, R.; Throwe, S.; Salgado, O.; Garlow, C.; Hoerath, E.
1990-12-01
The Asbestos NESHAP requires facility owners and/or operators involved in demolition and renovation activities to control emissions of particulate asbestos to the outside air because no safe concentration of airborne asbestos has ever been established. The primary method used to control asbestos emissions is to adequately wet the Asbestos Containing Material (ACM) with a wetting agent prior to, during and after demolition/renovation activities. The purpose of the document is to provide guidance to asbestos inspectors and the regulated community on how to determine if friable ACM is adequately wet as required by the Asbestos NESHAP.
A new method for direction finding based on Markov random field model
NASA Astrophysics Data System (ADS)
Ota, Mamoru; Kasahara, Yoshiya; Goto, Yoshitaka
2015-07-01
Investigating the characteristics of plasma waves observed by scientific satellites in the Earth's plasmasphere/magnetosphere is effective for understanding the mechanisms for generating waves and the plasma environment that influences wave generation and propagation. In particular, finding the propagation directions of waves is important for understanding mechanisms of VLF/ELF waves. To find these directions, the wave distribution function (WDF) method has been proposed. This method is based on the idea that observed signals consist of a number of elementary plane waves that define wave energy density distribution. However, the resulting equations constitute an ill-posed problem in which a solution is not determined uniquely; hence, an adequate model must be assumed for a solution. Although many models have been proposed, we have to select the most optimum model for the given situation because each model has its own advantages and disadvantages. In the present study, we propose a new method for direction finding of the plasma waves measured by plasma wave receivers. Our method is based on the assumption that the WDF can be represented by a Markov random field model with inference of model parameters performed using a variational Bayesian learning algorithm. Using computer-generated spectral matrices, we evaluated the performance of the model and compared the results with those obtained from two conventional methods.
Yoga for veterans with chronic low back pain: Design and methods of a randomized clinical trial.
Groessl, Erik J; Schmalzl, Laura; Maiya, Meghan; Liu, Lin; Goodman, Debora; Chang, Douglas G; Wetherell, Julie L; Bormann, Jill E; Atkinson, J Hamp; Baxi, Sunita
2016-05-01
Chronic low back pain (CLBP) afflicts millions of people worldwide, with particularly high prevalence in military veterans. Many treatment options exist for CLBP, but most have limited effectiveness and some have significant side effects. In general populations with CLBP, yoga has been shown to improve health outcomes with few side effects. However, yoga has not been adequately studied in military veteran populations. In the current paper we describe the design and methods of a randomized clinical trial aimed at examining whether yoga can effectively reduce disability and pain in US military veterans with CLBP. A total of 144 US military veterans with CLBP will be randomized to either yoga or a delayed treatment comparison group. The yoga intervention will consist of twice-weekly yoga classes for 12 weeks, complemented by regular home practice guided by a manual. The delayed treatment group will receive the same intervention after six months. The primary outcome is the change in back pain-related disability measured with the Roland-Morris Disability Questionnaire at baseline and 12 weeks. Secondary outcomes include pain intensity, pain interference, depression, anxiety, fatigue/energy, quality of life, self-efficacy, sleep quality, and medication usage. Additional process and/or mediational factors will be measured to examine dose response and effect mechanisms. Assessments will be conducted at baseline, 6 weeks, 12 weeks, and 6 months. All randomized participants will be included in intention-to-treat analyses. Study results will provide much needed evidence on the feasibility and effectiveness of yoga as a therapeutic modality for the treatment of CLBP in US military veterans. PMID:27103548
Multilevel Analysis Methods for Partially Nested Cluster Randomized Trials
ERIC Educational Resources Information Center
Sanders, Elizabeth A.
2011-01-01
This paper explores multilevel modeling approaches for 2-group randomized experiments in which a treatment condition involving clusters of individuals is compared to a control condition involving only ungrouped individuals, otherwise known as partially nested cluster randomized designs (PNCRTs). Strategies for comparing groups from a PNCRT in the…
Random particle methods applied to broadband fan interaction noise
NASA Astrophysics Data System (ADS)
Dieste, M.; Gabard, G.
2012-10-01
Predicting broadband fan noise is key to reducing noise emissions from aircraft and wind turbines. Complete CFD simulations of broadband fan noise generation remain too expensive to be used routinely for engineering design. A more efficient approach consists in synthesizing a turbulent velocity field that captures the main features of the exact solution. This synthetic turbulence is then used in a noise source model. This paper concentrates on predicting broadband fan interaction noise (also called leading-edge noise) and demonstrates that a random particle mesh (RPM) method is well suited for simulating this source mechanism. The linearized Euler equations are used to describe sound generation and propagation. In this work, the definition of the filter kernel is generalized to include non-Gaussian filters that can directly follow more realistic energy spectra, such as those developed by Liepmann and von Kármán. The velocity correlation and energy spectrum of the turbulence are found to be well captured by the RPM. The acoustic predictions are successfully validated against Amiet's analytical solution for a flat plate in a turbulent stream. A standard Langevin equation is used to model temporal decorrelation, but the presence of numerical issues leads to the introduction and validation of a second-order Langevin model.
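A minimal sketch of the standard (first-order) Langevin update used for temporal decorrelation, assuming a unit-variance signal and an exponential decorrelation time tau (parameter names are illustrative, not taken from the paper):

```python
import math
import random

def langevin_series(n, dt, tau, seed=0):
    """First-order Langevin (Ornstein-Uhlenbeck) update:
        x_{k+1} = a * x_k + sqrt(1 - a^2) * xi_k,  a = exp(-dt / tau),
    where xi_k are i.i.d. standard normals.  The recursion keeps unit
    variance while correlations decay as exp(-t / tau)."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)
    b = math.sqrt(1.0 - a * a)
    x = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(a * x[-1] + b * rng.gauss(0.0, 1.0))
    return x
```

The second-order model mentioned in the abstract replaces this single-pole recursion with a two-state filter to avoid the numerical issues noted by the authors.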
A comparison of methods for representing sparsely sampled random quantities.
Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua
2013-09-01
This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative, so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical Tolerance Intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
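The tolerance-interval idea can be illustrated with the classical distribution-free result for the min/max interval of a sample (a sketch only; the five representation techniques compared in the report are more elaborate):

```python
def minmax_tolerance_confidence(n, p):
    """Confidence that the interval [min(sample), max(sample)] of n
    i.i.d. draws covers at least a fraction p of the underlying
    distribution.  Distribution-free order-statistic result:
        confidence = 1 - n * p**(n-1) * (1-p) - p**n
    For example, n = 93 samples give roughly 95% confidence of
    covering 95% of the distribution."""
    return 1.0 - n * p ** (n - 1) * (1.0 - p) - p ** n
```

This captures the conservative-but-not-too-conservative tension described above: confidence grows with n, so the analyst trades sample cost against how reliably the percentile range is bounded.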
Investigation of stochastic radiation transport methods in random heterogeneous mixtures
NASA Astrophysics Data System (ADS)
Reinert, Dustin Ray
Among the most formidable challenges facing our world is the need for safe, clean, affordable energy sources. Growing concerns over global warming induced climate change and the rising costs of fossil fuels threaten conventional means of electricity production and are driving the current nuclear renaissance. One concept at the forefront of international development efforts is the High Temperature Gas-Cooled Reactor (HTGR). With numerous passive safety features and a meltdown-proof design capable of attaining high thermodynamic efficiencies for electricity generation as well as high temperatures useful for the burgeoning hydrogen economy, the HTGR is an extremely promising technology. Unfortunately, the fundamental understanding of neutron behavior within HTGR fuels lags far behind that of more conventional water-cooled reactors. HTGRs utilize a unique heterogeneous fuel element design consisting of thousands of tiny fissile fuel kernels randomly mixed with a non-fissile graphite matrix. Monte Carlo neutron transport simulations of the HTGR fuel element geometry in its full complexity are infeasible and this has motivated the development of more approximate computational techniques. A series of MATLAB codes was written to perform Monte Carlo simulations within HTGR fuel pebbles to establish a comprehensive understanding of the parameters under which the accuracy of the approximate techniques diminishes. This research identified the accuracy of the chord length sampling method to be a function of the matrix scattering optical thickness, the kernel optical thickness, and the kernel packing density. Two new Monte Carlo methods designed to focus the computational effort upon the parameter conditions shown to contribute most strongly to the overall computational error were implemented and evaluated. An extended memory chord length sampling routine that recalls a neutron's prior material traversals was demonstrated to be effective in fixed source calculations containing
A likelihood reformulation method in non-normal random effects models.
Liu, Lei; Yu, Zhangsheng
2008-07-20
In this paper, we propose a practical computational method to obtain the maximum likelihood estimates (MLE) for mixed models with non-normal random effects. By simply multiplying and dividing a standard normal density, we reformulate the likelihood conditional on the non-normal random effects to that conditional on the normal random effects. Gaussian quadrature technique, conveniently implemented in SAS Proc NLMIXED, can then be used to carry out the estimation process. Our method substantially reduces computational time, while yielding similar estimates to the probability integral transformation method (J. Comput. Graphical Stat. 2006; 15:39-57). Furthermore, our method can be applied to more general situations, e.g. finite mixture random effects or correlated random effects from Clayton copula. Simulations and applications are presented to illustrate our method. PMID:18038445
21 CFR 1404.900 - Adequate evidence.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Adequate evidence. 1404.900 Section 1404.900 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 1404.900 Adequate evidence. Adequate evidence means information sufficient to support the reasonable belief that a particular...
29 CFR 98.900 - Adequate evidence.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 1 2010-07-01 2010-07-01 true Adequate evidence. 98.900 Section 98.900 Labor Office of the Secretary of Labor GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 98.900 Adequate evidence. Adequate evidence means information sufficient to support the reasonable belief that a...
Verde, Pablo E; Ohmann, Christian
2015-03-01
Researchers may have multiple motivations for combining disparate pieces of evidence in a meta-analysis, such as generalizing experimental results or increasing the power to detect an effect that a single study is not able to detect. However, while in meta-analysis, the main question may be simple, the structure of evidence available to answer it may be complex. As a consequence, combining disparate pieces of evidence becomes a challenge. In this review, we cover statistical methods that have been used for the evidence-synthesis of different study types with the same outcome and similar interventions. For the methodological review, a literature retrieval in the area of generalized evidence-synthesis was performed, and publications were identified, assessed, grouped and classified. Furthermore real applications of these methods in medicine were identified and described. For these approaches, 39 real clinical applications could be identified. A new classification of methods is provided, which takes into account: the inferential approach, the bias modeling, the hierarchical structure, and the use of graphical modeling. We conclude with a discussion of pros and cons of our approach and give some practical advice. PMID:26035469
A Comparison of Single Sample and Bootstrap Methods to Assess Mediation in Cluster Randomized Trials
ERIC Educational Resources Information Center
Pituch, Keenan A.; Stapleton, Laura M.; Kang, Joo Youn
2006-01-01
A Monte Carlo study examined the statistical performance of single sample and bootstrap methods that can be used to test and form confidence interval estimates of indirect effects in two cluster randomized experimental designs. The designs were similar in that they featured random assignment of clusters to one of two treatment conditions and…
Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations
Zhou Tao; Tang Tao
2010-11-01
In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.
Yang, Ke; Jalan, Amrit; Green, William H.; Truhlar, Donald G.
2013-01-08
We examine the accuracy of single-reference and multireference correlated wave function methods for predicting accurate energies and potential energy curves of biradicals. The biradicals considered are intermediate species along the bond dissociation coordinates for breaking the F-F bond in F_{2}, the O-O bond in H_{2}O_{2}, and the C-C bond in CH_{3}CH_{3}. We apply a host of single-reference and multireference approximations in a consistent way to the same cases to provide a better assessment of their relative accuracies than was previously possible. The most accurate method studied is coupled cluster theory with all connected excitations through quadruples, CCSDTQ. Without explicit quadruple excitations, the most accurate potential energy curves are obtained by the single-reference RCCSDt method, followed, in order of decreasing accuracy, by UCCSDT, RCCSDT, UCCSDt, seven multireference methods, including perturbation theory, configuration interaction, and coupled-cluster methods (with MRCI+Q being the best and Mk-MR-CCSD the least accurate), four CCSD(T) methods, and then CCSD.
34 CFR 85.900 - Adequate evidence.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Definitions § 85.900 Adequate evidence. Adequate evidence means information sufficient to support the reasonable belief that a particular act or omission has occurred. Authority: E.O. 12549 (3 CFR, 1986 Comp., p. 189); E.O 12689 (3 CFR, 1989 Comp., p. 235); 20 U.S.C. 1082, 1094, 1221e-3 and 3474; and Sec....
29 CFR 452.110 - Adequate safeguards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 2 2010-07-01 2010-07-01 false Adequate safeguards. 452.110 Section 452.110 Labor... DISCLOSURE ACT OF 1959 Election Procedures; Rights of Members § 452.110 Adequate safeguards. (a) In addition to the election safeguards discussed in this part, the Act contains a general mandate in section...
29 CFR 452.110 - Adequate safeguards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 2 2011-07-01 2011-07-01 false Adequate safeguards. 452.110 Section 452.110 Labor... DISCLOSURE ACT OF 1959 Election Procedures; Rights of Members § 452.110 Adequate safeguards. (a) In addition to the election safeguards discussed in this part, the Act contains a general mandate in section...
Random dynamic load identification based on error analysis and weighted total least squares method
NASA Astrophysics Data System (ADS)
Jia, You; Yang, Zhichun; Guo, Ning; Wang, Le
2015-12-01
In most cases, random dynamic load identification problems in structural dynamics are ill-posed. A common approach is to reformulate them into well-posed problems by numerical regularization. In a previous paper by the authors, a random dynamic load identification model was built, and a weighted regularization approach based on the proper orthogonal decomposition (POD) was proposed to identify the random dynamic loads. In this paper, the upper bound of the relative load identification error in the frequency domain is derived. The selection condition and the specific form of the weighting matrix are also proposed and validated analytically and experimentally. In order to improve the accuracy of random dynamic load identification, a weighted total least squares method is proposed to reduce the impact of these errors. To further validate the feasibility and effectiveness of the proposed method, a comparative study of the proposed method and other methods is conducted experimentally. The experimental results demonstrate that the weighted total least squares method is more effective than the other methods for random dynamic load identification.
Reduction method for intrinsic random coincidence events from (176)Lu in low activity PET imaging.
Yoshida, Eiji; Tashima, Hideaki; Nishikido, Fumihiko; Murayama, Hideo; Yamaya, Taiga
2014-07-01
For clinical studies, the effects of the intrinsic radioactivity of lutetium-based scintillators such as LSO used in PET imaging can be ignored within a narrow energy window. However, the intrinsic radioactivity becomes problematic in low-count-rate situations such as gene expression imaging or in-beam PET imaging. Time-of-flight (TOF) measurement capability promises not only to improve PET image quality but also to reduce intrinsic random coincidences. In addition, we have developed a new reduction method for intrinsic random coincidence events based on multiple-coincidence information. Without the energy window, an intrinsic random coincidence is detected simultaneously with an intrinsic true coincidence as a multiple coincidence. The multiple-coincidence events can therefore serve as a guide to identifying the intrinsic coincidences. After rejection of multiple-coincidence events detected with a wide energy window, the data included few intrinsic random and many intrinsic true coincidence events. We analyzed the effect of intrinsic radioactivity and used Monte Carlo simulation to test both the TOF-based method and the developed multiple-coincidence-based (MC-based) method for a whole-body LSO-PET scanner. Using the TOF- and MC-based reduction methods separately, we could reduce the intrinsic random coincidence rates by 77% and 30%, respectively. The intrinsic random coincidence rate could be reduced by 84% when the TOF and MC reduction methods were combined. The developed MC-based method reduced the number of intrinsic random coincidence events, but its reduction performance was limited compared to that of the TOF-based method. PMID:24496884
Americans Getting Adequate Water Daily, CDC Finds
... medlineplus/news/fullstory_158510.html Americans Getting Adequate Water Daily, CDC Finds Men take in an average ... new government report finds most are getting enough water each day. The data, from the U.S. National ...
Safety assessment of a shallow foundation using the random finite element method
NASA Astrophysics Data System (ADS)
Zaskórski, Łukasz; Puła, Wojciech
2015-04-01
The complex structure of soil and its random character make soil modeling a cumbersome task. Heterogeneity of soil has to be considered even within a homogeneous layer. An estimation of the shear strength parameters of soil for the purposes of a geotechnical analysis therefore causes many problems. The applicable standard (Eurocode 7) presents no explicit method for evaluating characteristic values of soil parameters; only general guidelines on how these values should be estimated can be found. Hence many approaches to assessing characteristic values of soil parameters are presented in the literature and can be applied in practice. In this paper, the reliability assessment of a shallow strip footing was conducted using a reliability index β. Several approaches to estimating characteristic values of soil properties were compared by evaluating the values of the reliability index β that each of them achieves. The method of Orr and Breysse, Duncan's method, Schneider's method, Schneider's method accounting for the influence of fluctuation scales, and the method included in Eurocode 7 were examined. Design values of the bearing capacity based on these approaches were compared against the stochastic bearing capacity estimated by the random finite element method (RFEM). Design values of the bearing capacity were computed for various widths and depths of the foundation in conjunction with the design approaches (DA) defined in Eurocode. RFEM, introduced by Griffiths and Fenton (1993), combines the deterministic finite element method, random field theory, and Monte Carlo simulation. Random field theory allows the random character of soil parameters to be considered within a homogeneous layer: a soil property is treated as a separate random variable in every element of the finite element mesh, with a proper correlation structure between points of a given area. RFEM was applied to estimate which theoretical
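The RFEM pipeline itself (random fields plus FEM) is heavyweight, but its final step, turning a Monte Carlo failure count into a reliability index beta = -Phi^-1(pf), is compact. A minimal sketch with a hypothetical lognormal bearing-capacity sampler; the distribution and its parameters are illustrative assumptions, not values from the paper:

```python
import random
from statistics import NormalDist

def reliability_index(capacity_sampler, demand, n=100_000, seed=1):
    """Monte Carlo estimate of failure probability pf and reliability
    index beta = -Phi^-1(pf) for capacity-vs-demand failure."""
    rng = random.Random(seed)
    failures = sum(capacity_sampler(rng) < demand for _ in range(n))
    pf = failures / n
    beta = NormalDist().inv_cdf(1.0 - pf)  # equals -Phi^-1(pf)
    return pf, beta

# Hypothetical lognormal bearing capacity (median ~ 493 kPa) vs 300 kPa demand
sampler = lambda rng: rng.lognormvariate(6.2, 0.2)
pf, beta = reliability_index(sampler, 300.0)
```

In the paper the capacity samples come from RFEM realizations rather than a closed-form distribution, but the beta computation is the same.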
Alternative exact method for random walks on finite and periodic lattices with traps
NASA Astrophysics Data System (ADS)
Soler, Jose M.
1982-07-01
An alternative general method for random walks in finite or periodic lattices with traps is presented. The method gives, in a straightforward manner and in very little computing time, the exact probability that a random walker, starting from a given site, will undergo n steps before trapping. Another version gives the probability that the walker is at any other given position after n steps. The expected walk lengths calculated for simple lattices agree exactly with those given by a previous exact method by Walsh and Kozak.
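The exact enumeration the abstract describes can be reproduced for a toy case by propagating the full probability vector of an absorbing Markov chain; the mass entering trap sites at step n is exactly the probability of an n-step walk before trapping. A sketch for a periodic 1-D lattice (the lattice and boundary choice are illustrative, not the paper's):

```python
def trapping_distribution(n_sites, start, traps, max_steps):
    """Exact distribution of the number of steps before trapping for a
    nearest-neighbour random walk on a periodic 1-D lattice with traps."""
    p = [0.0] * n_sites
    p[start] = 1.0
    dist = []
    for _ in range(max_steps):
        q = [0.0] * n_sites
        for i, mass in enumerate(p):
            if mass == 0.0 or i in traps:
                continue
            q[(i - 1) % n_sites] += 0.5 * mass  # step left
            q[(i + 1) % n_sites] += 0.5 * mass  # step right
        absorbed = sum(q[t] for t in traps)     # trapped at exactly this step
        dist.append(absorbed)
        for t in traps:
            q[t] = 0.0                          # remove absorbed mass
        p = q
    return dist  # dist[n-1] = P(trapped at exactly step n)
```

Because the probability vector is propagated exactly, no sampling noise enters, which is the sense in which such methods are "exact" at modest computational cost.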
An overhang-based DNA block shuffling method for creating a customized random library
Fujishima, Kosuke; Venter, Chris; Wang, Kendrick; Ferreira, Raphael; Rothschild, Lynn J.
2015-01-01
We present an overhang-based DNA block shuffling method to create a customized random DNA library with flexible sequence design and length. Our method enables the efficient and seamless assembly of short DNA blocks with dinucleotide overhangs through a simple ligation process. Next generation sequencing analysis of the assembled DNA library revealed that ligation was accurate, directional and unbiased. This straightforward DNA assembly method should fulfill the versatile needs of both in vivo and in vitro functional screening of random peptides and RNA created with a desired amino acid and nucleotide composition, as well as making highly repetitive gene constructs that are difficult to synthesize de novo. PMID:26010273
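The directional role of the dinucleotide overhangs can be mimicked with a toy string model: each block carries explicit left and right overhangs, and ligation is only allowed when adjacent overhangs match. This abstraction ignores strand chemistry entirely and is only meant to show how the overhangs enforce seamless, ordered assembly:

```python
def assemble(blocks):
    """Toy model of directional overhang ligation. Each block is a tuple
    (left_overhang, body, right_overhang); two blocks ligate only when the
    right overhang of one equals the left overhang of the next, so the
    dinucleotide overhangs fix both order and orientation."""
    seq = blocks[0][0] + blocks[0][1]
    for prev, cur in zip(blocks, blocks[1:]):
        if prev[2] != cur[0]:
            raise ValueError(f"incompatible junction: {prev[2]} vs {cur[0]}")
        seq += prev[2] + cur[1]   # the shared overhang appears once in the product
    seq += blocks[-1][2]
    return seq
```

All sequences below are made-up examples, not the library's actual blocks.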
Applying a weighted random forests method to extract karst sinkholes from LiDAR data
NASA Astrophysics Data System (ADS)
Zhu, Junfeng; Pierskalla, William P.
2016-02-01
Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve the locating and delineating of sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success, with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and makes it more tractable to map sinkholes using LiDAR data for large areas. However, the random forests method cannot totally replace manual procedures, such as visual inspection and field verification.
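The class weighting the authors use corrects for the imbalance between the many non-sinkhole depressions and the few true sinkholes. One common mechanism, sketched here with stdlib tools, is a weighted bootstrap in which minority-class examples are drawn more often when growing each tree; in practice a library implementation such as scikit-learn's `RandomForestClassifier(class_weight="balanced")` would be used instead. The names and the 5% class ratio below are illustrative:

```python
import random
from collections import Counter

def weighted_bootstrap(samples, labels, rng):
    """Draw one bootstrap sample with per-class weights inversely
    proportional to class frequency, so rare classes (sinkholes) are
    over-represented in the sample used to grow each tree."""
    counts = Counter(labels)
    w = [1.0 / counts[lab] for lab in labels]  # rare classes drawn more often
    idx = rng.choices(range(len(samples)), weights=w, k=len(samples))
    return [samples[i] for i in idx], [labels[i] for i in idx]

rng = random.Random(0)
X = list(range(1000))
y = ["sinkhole"] * 50 + ["other"] * 950  # 5% minority class
_, yb = weighted_bootstrap(X, y, rng)
```

With inverse-frequency weights, each class contributes about half of every bootstrap sample despite the 5%/95% imbalance in the raw data.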
NASA Astrophysics Data System (ADS)
Urano, Ryo; Okamoto, Yuko
2015-12-01
We propose a replica-exchange method (REM) which does not use pseudo-random numbers. For this purpose, we first give a conditional probability for the Gibbs sampling replica-exchange method (GSREM) based on the heat bath method. In GSREM, replica exchange is performed according to a conditional probability based on the weight of states, using pseudo-random numbers. From this conditional probability, we propose a new method called the deterministic replica-exchange method (DETREM), which produces the thermal equilibrium distribution from a differential equation instead of pseudo-random numbers. The method satisfies the detailed balance condition through the conditional probability of the Gibbs heat bath method, and its results therefore reproduce the Boltzmann distribution within the condition of the probability. We confirmed that equivalent results were obtained by REM and DETREM for the two-dimensional Ising model. DETREM avoids the problem of choosing seeds for pseudo-random numbers in parallel REM computations and gives an analytic formulation of REM via a differential equation.
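For reference, the conventional stochastic replica-exchange step that DETREM replaces draws a pseudo-random number against the Metropolis acceptance p = min(1, exp((beta_k - beta_{k+1}) * (E_i - E_j))). A minimal sketch of that conventional step (not the paper's deterministic rule):

```python
import math
import random

def attempt_swaps(energies, betas, rng):
    """One replica-exchange sweep over neighbouring temperatures with the
    standard Metropolis acceptance. energies[i] is the energy of replica i;
    order[k] tracks which replica currently sits at temperature k."""
    order = list(range(len(betas)))
    for k in range(len(betas) - 1):
        i, j = order[k], order[k + 1]
        delta = (betas[k] - betas[k + 1]) * (energies[i] - energies[j])
        # accept with probability min(1, exp(delta)); this is where a
        # pseudo-random draw enters, and what DETREM removes
        if delta >= 0 or rng.random() < math.exp(delta):
            order[k], order[k + 1] = j, i
    return order

order = attempt_swaps([5.0, 1.0], [1.0, 0.5], random.Random(0))
```

The dependence on `rng` in the acceptance test is exactly the seed-choice issue for parallel runs that the deterministic variant is designed to avoid.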
Random projection-based dimensionality reduction method for hyperspectral target detection
NASA Astrophysics Data System (ADS)
Feng, Weiyi; Chen, Qian; He, Weiji; Arce, Gonzalo R.; Gu, Guohua; Zhuang, Jiayan
2015-09-01
Dimensionality reduction is a frequent preprocessing step in hyperspectral image analysis, since high-dimensional data cause the "curse of dimensionality" in applications of hyperspectral imagery. In this paper, a dimensionality reduction method for hyperspectral images based on random projection (RP) for target detection was investigated. In application areas of hyperspectral imagery such as target detection, the high dimensionality of the data leads to burdensome computations. Random projection is attractive in this area because it is data independent and computationally more efficient than other widely used hyperspectral dimensionality-reduction methods, such as Principal Component Analysis (PCA) or the maximum-noise-fraction (MNF) transform. In RP, the original high-dimensional data are projected onto a low-dimensional subspace using a random matrix, which is very simple. Theoretical and experimental results indicated that random projections preserve the structure of the original high-dimensional data quite well without introducing significant distortion. In the experiments, Constrained Energy Minimization (CEM) was adopted as the target detector, and an RP-based CEM method for hyperspectral target detection was implemented, revealing that random projection can be a good dimensionality reduction tool for hyperspectral images, yielding improved target detection with higher detection accuracy and lower computation time than other methods.
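The core operation, multiplying the data by a scaled Gaussian random matrix, is a few lines. A stdlib-only sketch (real pipelines would use a linear algebra library; the 1/sqrt(k) scaling shown is one common convention for approximately preserving pairwise distances):

```python
import math
import random

def random_projection(X, k, seed=0):
    """Project rows of X (n x d) onto k dimensions with a Gaussian random
    matrix scaled by 1/sqrt(k); pairwise distances are preserved in
    expectation (Johnson-Lindenstrauss)."""
    rng = random.Random(seed)
    d = len(X[0])
    R = [[rng.gauss(0.0, 1.0) / math.sqrt(k) for _ in range(k)]
         for _ in range(d)]
    # matrix product X @ R, written out with plain lists
    return [[sum(x[i] * R[i][j] for i in range(d)) for j in range(k)]
            for x in X]
```

Data independence is visible here: R is built from the seed alone, with no training pass over the hyperspectral cube, which is why RP is cheaper than PCA or MNF.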
Anderson, Deverick J; Juthani-Mehta, Manisha; Morgan, Daniel J
2016-06-01
Randomized controlled trials (RCT) produce the strongest level of clinical evidence when comparing interventions. RCTs are technically difficult, costly, and require specific considerations including the use of patient- and cluster-level randomization and outcome selection. In this methods paper, we focus on key considerations for RCT methods in healthcare epidemiology and antimicrobial stewardship (HE&AS) research, including the need for cluster randomization, conduct at multiple sites, behavior modification interventions, and difficulty with identifying appropriate outcomes. We review key RCTs in HE&AS with a focus on advantages and disadvantages of methods used. A checklist is provided to aid in the development of RCTs in HE&AS. Infect Control Hosp Epidemiol 2016;37:629-634. PMID:27108848
Chertkov, Michael; Gabitov, Ildar
2004-03-02
The present invention provides methods and optical fibers for periodically pinning the actual (random) accumulated chromatic dispersion of an optical fiber to a predicted accumulated dispersion of the fiber through relatively simple modifications of fiber-optic manufacturing methods or retrofitting of existing fibers. If the pinning occurs with sufficient frequency (at a distance less than or equal to a correlation scale), pulse degradation resulting from random chromatic dispersion is minimized. Alternatively, pinning may occur quasi-periodically, i.e., the pinning distance is distributed between approximately zero and approximately two to three times the correlation scale.
Plackett-Burman randomization method for Bacterial Ghosts preparation from E. coli JM109.
Amro, Amara A; Salem-Bekhit, Mounir M; Alanazi, Fars K
2014-07-01
The Plackett-Burman randomization method is a conventional tool for randomizing variables with the aim of optimization. Bacterial Ghost (BG) preparation has recently been established using methods other than the E lysis gene. The protocol is based mainly on using critical concentrations of chemical compounds able to convert viable cells to BGs. The Minimum Inhibition Concentration (MIC) and the Minimum Growth Concentration (MGC) were the main guides for BG preparation. In this study, Escherichia coli JM109 DEC was used to produce BGs following the original protocol. The study contains a detailed protocol for BG preparation that could be used as a guide. PMID:25061413
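For context, a Plackett-Burman screening design is built by cyclically shifting a tabulated generator row and appending a row of low levels. The sketch below builds the common 12-run design for up to 11 two-level factors; the generator row is the one usually tabulated for N = 12 and should be checked against a reference before use:

```python
def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors.
    Rows are runs, entries are the +1/-1 factor levels; rows 1..11 are
    cyclic shifts of the generator, row 12 is all -1."""
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(11)]  # i=0 gives gen itself
    rows.append([-1] * 11)
    return rows
```

The design is orthogonal: every column is balanced (six highs, six lows) and any two columns are uncorrelated, which is what lets 11 factors be screened in only 12 runs.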
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2016-06-01
In this paper, I introduce a novel approach to modelling the individual random component (also called the intra-event uncertainty) of a ground-motion relation (GMR), as well as a novel approach to estimating the corresponding parameters. In essence, I contend that the individual random component is reproduced adequately by a simple stochastic mechanism of random impulses acting in the horizontal plane, with random directions. The random number of impulses is Poisson distributed. The parameters of the model were estimated according to a proposal by Raschke (J Seismol 17(4):1157-1182, 2013a), using the sample of random differences ξ = ln(Y1) - ln(Y2), in which Y1 and Y2 are the horizontal components of local ground-motion intensity. Every GMR element is eliminated by the subtraction, except the individual random components. In the estimation procedure, the distribution of the difference ξ was approximated by combining a large Monte Carlo simulated sample with kernel smoothing. The estimated model satisfactorily fitted the differences ξ of the sample of peak ground accelerations, and the variance of the individual random components was considerably smaller than that of conventional GMRs. In addition, the dependence of the variance on the epicentre distance was considered; a dependence of the variance on the magnitude, however, was not detected. Finally, the influence of the novel model and the corresponding approximations on PSHA was investigated. The applied approximations of the distribution of the individual random component were satisfactory for the studied example of PSHA.
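The impulse mechanism is easy to simulate: draw a Poisson number of unit impulses with uniform directions, accumulate the two horizontal components, and form the difference of log amplitudes. A toy Monte Carlo sketch (the unit impulse magnitude and the rate value are illustrative assumptions, not the paper's fitted parameters):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's algorithm for a Poisson draw."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def xi_samples(mean_impulses, n, seed=0):
    """Samples of xi = ln(Y1) - ln(Y2) under the toy impulse mechanism:
    a Poisson number of unit impulses with uniformly random directions in
    the horizontal plane; Y1, Y2 are the absolute horizontal components."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        k = poisson(mean_impulses, rng)
        x = y = 0.0
        for _ in range(k):
            phi = rng.uniform(0.0, 2.0 * math.pi)
            x += math.cos(phi)
            y += math.sin(phi)
        if abs(x) > 1e-9 and abs(y) > 1e-9:  # skip degenerate draws
            out.append(math.log(abs(x)) - math.log(abs(y)))
    return out
```

By symmetry of the random directions, xi is distributed symmetrically about zero, which is the property that lets the subtraction cancel all GMR terms except the individual random components.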
Characterization of a random anisotropic conductivity field with Karhunen-Loeve methods
Cherry, Matthew R.; Sabbagh, Harold S.; Pilchak, Adam L.; Knopp, Jeremy S.
2014-02-18
While parametric uncertainty quantification for NDE models has been addressed in recent years, the problem of stochastic field parameters such as spatially distributed electrical conductivity has been investigated only minimally in the last year. In that work, the authors treated the field as a one-dimensional random process, and Karhunen-Loeve methods were used to discretize this process to make it amenable to UQ methods such as ANOVA expansions. In the present work, we treat the field as a two-dimensional random process, and the eigenvalues and eigenfunctions of the integral operator are determined via Galerkin methods. The Karhunen-Loeve method is extended to two dimensions and implemented to represent this process. Several different choices of basis functions are discussed, as well as convergence criteria for each. The methods are applied to correlation functions collected from electron backscatter data on highly micro-textured Ti-7Al.
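A minimal one-dimensional analogue of the Karhunen-Loeve discretization: build the covariance matrix of the process on a grid and extract its leading eigenpair, which is the dominant KL mode. Power iteration is used here only to stay dependency-free; a Galerkin treatment as in the paper would use a proper eigensolver, and the exponential covariance is an illustrative choice:

```python
import math

def exp_cov_matrix(n, length, corr_len):
    """Discretised exponential covariance C(x, x') = exp(-|x - x'|/corr_len)
    on n grid points over [0, length]."""
    xs = [i * length / (n - 1) for i in range(n)]
    return [[math.exp(-abs(a - b) / corr_len) for b in xs] for a in xs]

def dominant_eigpair(C, iters=500):
    """Power iteration for the largest eigenpair of a symmetric positive
    matrix: the first Karhunen-Loeve mode of the discretised process."""
    n = len(C)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))  # eigenvalue estimate
        v = [x / lam for x in w]                # renormalised eigenvector
    return lam, v
```

Truncating the expansion after the few modes with the largest eigenvalues is what makes the random field amenable to ANOVA-style UQ.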
NASA Technical Reports Server (NTRS)
Mei, Chuh; Pates, Carl S., III
1994-01-01
A coupled boundary element (BEM)-finite element (FEM) approach is presented to accurately model structure-acoustic interaction systems. The boundary element method is first applied to interior, two- and three-dimensional acoustic domains with complex geometry configurations. Boundary element results are very accurate when compared with the limited exact solutions available. Structure-acoustic interaction problems are then analyzed with the coupled FEM-BEM method, where the finite element method models the structure and the boundary element method models the interior acoustic domain. The coupled analysis is compared with exact and experimental results for a simple model. Composite panels are analyzed and compared with isotropic results. The coupled method is then extended to random excitation. Random excitation results are compared with uncoupled results for isotropic and composite panels.
Adequate supervision for children and adolescents.
Anderst, James; Moffatt, Mary
2014-11-01
Primary care providers (PCPs) have the opportunity to improve child health and well-being by addressing supervision issues before an injury or exposure has occurred and/or after an injury or exposure has occurred. Appropriate anticipatory guidance on supervision at well-child visits can improve supervision of children, and may prevent future harm. Adequate supervision varies based on the child's development and maturity, and the risks in the child's environment. Consideration should be given to issues as wide ranging as swimming pools, falls, dating violence, and social media. By considering the likelihood of harm and the severity of the potential harm, caregivers may provide adequate supervision by minimizing risks to the child while still allowing the child to take "small" risks as needed for healthy development. Caregivers should initially focus on direct (visual, auditory, and proximity) supervision of the young child. Gradually, supervision needs to be adjusted as the child develops, emphasizing a safe environment and safe social interactions, with graduated independence. PCPs may foster adequate supervision by providing concrete guidance to caregivers. In addition to preventing injury, supervision includes fostering a safe, stable, and nurturing relationship with every child. PCPs should be familiar with age/developmentally based supervision risks, adequate supervision based on those risks, characteristics of neglectful supervision based on age/development, and ways to encourage appropriate supervision throughout childhood. PMID:25369578
Small Rural Schools CAN Have Adequate Curriculums.
ERIC Educational Resources Information Center
Loustaunau, Martha
The small rural school's foremost and largest problem is providing an adequate curriculum for students in a changing world. Often the small district cannot or is not willing to pay the per-pupil cost of curriculum specialists, specialized courses using expensive equipment no more than one period a day, and remodeled rooms to accommodate new…
Funding the Formula Adequately in Oklahoma
ERIC Educational Resources Information Center
Hancock, Kenneth
2015-01-01
This report is a longevity, simulational study that looks at how the ratio of state support to local support affects the number of school districts that break the common school's funding formula, which in turn affects the equity of distribution to the common schools. After nearly two decades of adequately supporting the funding formula, Oklahoma…
Random Qualitative Validation: A Mixed-Methods Approach to Survey Validation
ERIC Educational Resources Information Center
Van Duzer, Eric
2012-01-01
The purpose of this paper is to introduce the process and value of Random Qualitative Validation (RQV) in the development and interpretation of survey data. RQV is a method of gathering clarifying qualitative data that improves the validity of the quantitative analysis. This paper is concerned with validity in relation to the participants'…
ERIC Educational Resources Information Center
Eisenkopf, Gerald; Sulser, Pascal A.
2016-01-01
The authors present results from a comprehensive field experiment at Swiss high schools in which they compare the effectiveness of teaching methods in economics. They randomly assigned classes into an experimental and a conventional teaching group, or a control group that received no specific instruction. Both teaching treatments improve economic…
Andridge, Rebecca. R.; Shoben, Abigail B.; Muller, Keith E.; Murray, David M.
2014-01-01
Participants in trials may be randomized either individually or in groups, and may receive their treatment either entirely individually, entirely in groups, or partially individually and partially in groups. This paper concerns cases in which participants receive their treatment either entirely or partially in groups, regardless of how they were randomized. Participants in Group-Randomized Trials (GRTs) are randomized in groups and participants in Individually Randomized Group Treatment (IRGT) trials are individually randomized, but participants in both types of trials receive part or all of their treatment in groups or through common change agents. Participants who receive part or all of their treatment in a group are expected to have positively correlated outcome measurements. This paper addresses a situation that occurs in GRTs and IRGT trials – participants receive treatment through more than one group. As motivation, we consider trials in The Childhood Obesity Prevention and Treatment Research Consortium (COPTR), in which each child participant receives treatment in at least two groups. In simulation studies we considered several possible analytic approaches over a variety of possible group structures. A mixed model with random effects for both groups provided the only consistent protection against inflated type I error rates and did so at the cost of only moderate loss of power when intraclass correlations were not large. We recommend constraining variance estimates to be positive and using the Kenward-Roger adjustment for degrees of freedom; this combination provided additional power but maintained type I error rates at the nominal level. PMID:24399701
Huang, Lei
2015-01-01
To solve the problem in which the conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using a robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of observation noise are used to achieve the estimated mean and variance of the observation noise. Using the robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of a rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
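The idea of treating the model parameters as the Kalman state can be shown for the simpler AR case, where the filter reduces to recursive least squares: the state is the coefficient vector, the regressors are lagged outputs, and `r` is the assumed observation-noise variance. This is a simplification of the paper's ARMA setup, with illustrative coefficients 0.6 and -0.2:

```python
import random

def ar_kalman(y, order, r=1.0):
    """Estimate AR(order) coefficients with a Kalman filter whose state is
    the (constant) parameter vector theta; P is the state covariance."""
    p = order
    theta = [0.0] * p
    P = [[1e6 if i == j else 0.0 for j in range(p)] for i in range(p)]
    for t in range(p, len(y)):
        phi = [y[t - 1 - k] for k in range(p)]          # lagged outputs
        yhat = sum(a * b for a, b in zip(theta, phi))
        Pphi = [sum(P[i][j] * phi[j] for j in range(p)) for i in range(p)]
        s = r + sum(phi[i] * Pphi[i] for i in range(p))  # innovation variance
        K = [x / s for x in Pphi]                        # Kalman gain
        err = y[t] - yhat
        theta = [theta[i] + K[i] * err for i in range(p)]
        P = [[P[i][j] - K[i] * Pphi[j] for j in range(p)] for i in range(p)]
    return theta

# demo: recover hypothetical AR(2) coefficients (0.6, -0.2) from noisy data
rng = random.Random(3)
y = [0.0, 0.0]
for _ in range(3000):
    y.append(0.6 * y[-1] - 0.2 * y[-2] + rng.gauss(0.0, 1.0))
theta = ar_kalman(y, 2)
```

The recursive update is what gives the fast convergence the abstract claims: each new sample refines the estimate without refitting the whole batch.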
A Ground State Method for Continuum Systems Using Random Walks in the Space of Slater Determinants.^
NASA Astrophysics Data System (ADS)
Zhang, Shiwei; Krakauer, Henry
2001-03-01
We study a ground state quantum Monte Carlo method for electronic systems. The method is based on the constrained path Monte Carlo approach (S. Zhang, J. Carlson, and J. E. Gubernatis, Phys. Rev. B 55, 7464 (1997)) developed for lattice models of correlated electrons. It works in second-quantized form and uses random walks involving full Slater determinants rather than individual real-space configurations. The method allows easy calculation of expectation values and also makes it straightforward to import standard techniques (e.g., pseudopotentials) used in density functional and quantum chemistry calculations. In general, Slater determinants will acquire overall complex phases, due to the Hubbard-Stratonovich transformation of the two-body potential. In order to control the sign decay, an approximation is developed for the propagation of complex Slater determinants by random walks. We test the method in a homogeneous 3-D electron gas (jellium) using a planewave basis. ^ Supported by NSF, ONR and Research Corporation.
A shortcut through the Coulomb gas method for spectral linear statistics on random matrices
NASA Astrophysics Data System (ADS)
Deelan Cunden, Fabio; Facchi, Paolo; Vivo, Pierpaolo
2016-04-01
In the last decade, spectral linear statistics on large dimensional random matrices have attracted significant attention. Within the physics community, a privileged role has been played by invariant matrix ensembles for which a two-dimensional Coulomb gas analogy is available. We present a critical revision of the Coulomb gas method in random matrix theory (RMT) borrowing language and tools from large deviations theory. This allows us to formalize an equivalent, but more effective and quicker route toward RMT free energy calculations. Moreover, we argue that this more modern viewpoint is likely to shed further light on the interesting issues of weak phase transitions and evaporation phenomena recently observed in RMT.
An efficient method for calculating RMS von Mises stress in a random vibration environment
Segalman, D.J.; Fulcher, C.W.G.; Reese, G.M.; Field, R.V. Jr.
1998-02-01
An efficient method is presented for calculation of RMS von Mises stresses from stress component transfer functions and the Fourier representation of random input forces. An efficient implementation of the method calculates the RMS stresses directly from the linear stress and displacement modes. The key relation presented is one suggested in past literature, but does not appear to have been previously exploited in this manner.
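The key relation reduces to a quadratic-form identity: for a zero-mean Gaussian stress vector s, E[s^T A s] = trace(A Sigma), where A is the von Mises quadratic form and Sigma the covariance of the six stress components. A direct sketch of that identity (the modal machinery of the paper is omitted):

```python
def rms_von_mises(cov):
    """RMS von Mises stress for a zero-mean random stress vector
    s = (sxx, syy, szz, sxy, syz, szx) with covariance `cov`:
    E[svm^2] = trace(A @ cov), where svm^2 = s^T A s."""
    A = [
        [1.0, -0.5, -0.5, 0.0, 0.0, 0.0],
        [-0.5, 1.0, -0.5, 0.0, 0.0, 0.0],
        [-0.5, -0.5, 1.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 3.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 3.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 3.0],
    ]
    # trace(A @ cov) = sum_ij A[i][j] * cov[j][i]
    ms = sum(A[i][j] * cov[j][i] for i in range(6) for j in range(6))
    return ms ** 0.5
```

In the paper's setting, Sigma is assembled from the stress-mode shapes and the input-force spectra, so the RMS von Mises stress follows without ever forming time histories.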
Methods for testing theory and evaluating impact in randomized field trials
Brown, C. Hendricks; Wang, Wei; Kellam, Sheppard G.; Muthén, Bengt O.; Petras, Hanno; Toyinbo, Peter; Poduska, Jeanne; Ialongo, Nicholas; Wyman, Peter A.; Chamberlain, Patricia; Sloboda, Zili; MacKinnon, David P.; Windham, Amy
2008-01-01
Randomized field trials provide unique opportunities to examine the effectiveness of an intervention in real world settings and to test and extend both theory of etiology and theory of intervention. These trials are designed not only to test for overall intervention impact but also to examine how impact varies as a function of individual level characteristics, context, and across time. Examination of such variation in impact requires analytical methods that take into account the trial’s multiple nested structure and the evolving changes in outcomes over time. The models that we describe here merge multilevel modeling with growth modeling, allowing for variation in impact to be represented through discrete mixtures—growth mixture models—and nonparametric smooth functions—generalized additive mixed models. These methods are part of an emerging class of multilevel growth mixture models, and we illustrate these with models that examine overall impact and variation in impact. In this paper, we define intent-to-treat analyses in group-randomized multilevel field trials and discuss appropriate ways to identify, examine, and test for variation in impact without inflating the Type I error rate. We describe how to make causal inferences more robust to misspecification of covariates in such analyses and how to summarize and present these interactive intervention effects clearly. Practical strategies for reducing model complexity, checking model fit, and handling missing data are discussed using six randomized field trials to show how these methods may be used across trials randomized at different levels. PMID:18215473
Multi-objective optimization by a new hybridized method: applications to random mechanical systems
NASA Astrophysics Data System (ADS)
Zidani, H.; Pagnacco, E.; Sampaio, R.; Ellaia, R.; Souza de Cursi, J. E.
2013-08-01
In this article two linear problems with random Gaussian loading are transformed into multi-objective optimization problems. The first problem is the design of a pillar geometry with respect to a compressive random load process. The second problem is the design of a truss structure with respect to a vertical random load process for several frequency bands. A new algorithm, motivated by the Pincus representation formula hybridized with the Nelder-Mead algorithm, is proposed to solve the two multi-objective optimization problems. To generate the Pareto curve, the normal boundary intersection method is used to produce a series of constrained single-objective optimizations. The second problem, depending on the frequency band of excitation, can have as Pareto curve a single point, a standard Pareto curve, or a discontinuous Pareto curve, a fact that has been reported here for the first time in the literature, to the best of the authors' knowledge.
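Whatever single-objective subproblems the normal boundary intersection step produces, the resulting sample of objective vectors still has to be filtered down to its non-dominated subset to plot the Pareto curve. A generic sketch of that filter (not the authors' algorithm):

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors,
    assuming minimisation: p dominates q if p <= q componentwise and
    p < q in at least one component."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Applied to the paper's frequency-band cases, this filter would return a single point, a connected curve, or several disconnected pieces, which is how a discontinuous Pareto curve manifests itself numerically.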
A method for determining the weak statistical stationarity of a random process
NASA Technical Reports Server (NTRS)
Sadeh, W. Z.; Koper, C. A., Jr.
1978-01-01
A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
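The equivalent-ensemble construction is straightforward to sketch: cut one long record into equal segments, average across segments at each time index, and check that this ensemble mean is time-invariant. The tolerance-based check below is a simplification of the paper's variance tests:

```python
import random

def equivalent_ensemble(signal, n_records):
    """Segment one long record into equal, contiguous sample records (the
    'equivalent ensemble') and return the ensemble mean at each time index."""
    m = len(signal) // n_records
    records = [signal[i * m:(i + 1) * m] for i in range(n_records)]
    return [sum(rec[t] for rec in records) / n_records for t in range(m)]

def is_weakly_stationary(signal, n_records, tol):
    """Crude weak-stationarity check: the ensemble mean should be
    time-invariant, i.e. its spread across time indices stays within tol."""
    means = equivalent_ensemble(signal, n_records)
    return max(means) - min(means) <= tol

# illustrative signals: white noise (stationary) vs a linear trend (not)
rng = random.Random(1)
noise = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
trend = [0.05 * t for t in range(10_000)]
```

The paper additionally tests statistical independence of the segments and the autocorrelations; the sketch covers only the mean, the simplest of the equivalent-ensemble averages.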
Local search methods based on variable focusing for random K-satisfiability.
Lemoy, Rémi; Alava, Mikko; Aurell, Erik
2015-01-01
We introduce variable-focused local search algorithms for satisfiability problems. Usual approaches focus uniformly on unsatisfied clauses. The methods described here work by focusing on random variables in unsatisfied clauses. Variants are considered where variables are selected uniformly and randomly, or with a bias towards picking variables participating in several unsatisfied clauses. These are studied in the case of the random 3-SAT problem, together with an alternative energy definition, the number of variables in unsatisfied constraints. The variable-based focused Metropolis search (V-FMS) is found to be quite close in performance to the standard clause-based FMS at optimal noise. At infinite noise, instead, the threshold for the linearity of solution times with instance size is improved by preferentially picking variables in several UNSAT clauses. Consequences for algorithmic design are discussed. PMID:25679737
Local search methods based on variable focusing for random K -satisfiability
NASA Astrophysics Data System (ADS)
Lemoy, Rémi; Alava, Mikko; Aurell, Erik
2015-01-01
We introduce variable-focused local search algorithms for satisfiability problems. Usual approaches focus uniformly on unsatisfied clauses. The methods described here work by focusing on random variables in unsatisfied clauses. Variants are considered where variables are selected uniformly and randomly, or with a bias towards picking variables participating in several unsatisfied clauses. These are studied in the case of the random 3-SAT problem, together with an alternative energy definition, the number of variables in unsatisfied constraints. The variable-based focused Metropolis search (V-FMS) is found to be quite close in performance to the standard clause-based FMS at optimal noise. At infinite noise, instead, the threshold for the linearity of solution times with instance size is improved by preferentially picking variables in several UNSAT clauses. Consequences for algorithmic design are discussed.
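A minimal sketch of the variable-focused Metropolis idea, using the clause-count energy and uniform variable selection (the biased variant would weight variables by how many unsatisfied clauses they appear in). The noise parameter eta, the acceptance rule, and the tiny instance below are illustrative, not the paper's exact setup:

```python
import random

def unsat_clauses(clauses, assign):
    """Clauses with no true literal; literal v > 0 is true iff assign[v]."""
    return [c for c in clauses
            if not any((lit > 0) == assign[abs(lit)] for lit in c)]

def v_fms(clauses, n_vars, eta=0.3, max_flips=100000, seed=0):
    """Variable-focused Metropolis search (sketch): flip a random
    variable from a random unsatisfied clause; accept uphill moves
    with probability eta**dE, where E counts unsatisfied clauses."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = unsat_clauses(clauses, assign)
        if not unsat:
            return assign                          # satisfying assignment found
        var = abs(rng.choice(rng.choice(unsat)))   # uniform variable focusing
        assign[var] = not assign[var]              # tentative flip
        d_e = len(unsat_clauses(clauses, assign)) - len(unsat)
        if d_e > 0 and rng.random() > eta ** d_e:
            assign[var] = not assign[var]          # reject the uphill move
    return None
```

On a small satisfiable 3-SAT instance the search typically returns a model within a handful of flips.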
NASA Astrophysics Data System (ADS)
Wu, Zhizhang; Huang, Zhongyi
2016-07-01
In this paper, we consider the numerical solution of the one-dimensional Schrödinger equation with a periodic lattice potential and a random external potential. This is an important model in solid state physics where the randomness results from complicated phenomena that are not exactly known. Here we generalize the Bloch decomposition-based time-splitting pseudospectral method to the stochastic setting using the generalized polynomial chaos with a Galerkin procedure so that the main effects of dispersion and periodic potential are still computed together. We prove that our method is unconditionally stable and numerical examples show that it has other nice properties and is more efficient than the traditional method. Finally, we give some numerical evidence for the well-known phenomenon of Anderson localization.
On analysis-based two-step interpolation methods for randomly sampled seismic data
NASA Astrophysics Data System (ADS)
Yang, Pengliang; Gao, Jinghuai; Chen, Wenchao
2013-02-01
Interpolating the missing traces of regularly or irregularly sampled seismic records is an exceedingly important issue in the geophysical community. Many modern acquisition and reconstruction methods are designed to exploit the transform-domain sparsity of the few randomly recorded but informative seismic data using thresholding techniques. In this paper, to regularize randomly sampled seismic data, we introduce two accelerated, analysis-based two-step interpolation algorithms: the analysis-based FISTA (fast iterative shrinkage-thresholding algorithm) and the FPOCS (fast projection onto convex sets) algorithm, derived from the IST (iterative shrinkage-thresholding) algorithm and the POCS (projection onto convex sets) algorithm, respectively. A MATLAB package is developed for the implementation of these thresholding-related interpolation methods. Based on this package, we compare the reconstruction performance of these algorithms using synthetic and real seismic data. Combined with several thresholding strategies, the accelerated convergence of the proposed methods is also highlighted.
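The thresholding idea behind these interpolation schemes can be illustrated with a plain (non-accelerated) POCS loop on a 1-D trace, using the FFT as the sparsifying transform. The linearly decaying hard threshold is one common schedule, not necessarily the one in the authors' package:

```python
import numpy as np

def pocs_interpolate(observed, mask, n_iter=200, p0=0.5):
    """POCS-style reconstruction (sketch): alternate a Fourier-domain
    hard threshold (sparsity projection) with reinsertion of the
    recorded samples (data-consistency projection), decaying the
    threshold linearly to zero."""
    x = observed.copy()
    fmax = np.abs(np.fft.fft(observed)).max()
    for k in range(n_iter):
        thresh = p0 * fmax * (1.0 - k / n_iter)
        X = np.fft.fft(x)
        X[np.abs(X) < thresh] = 0.0        # keep only strong coefficients
        x = np.real(np.fft.ifft(X))
        x[mask] = observed[mask]           # honour the recorded samples
    return x
```

For a signal that is sparse in the Fourier domain, randomly discarding 40% of the samples and running this loop recovers the missing values to within a few percent.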
Implementation of a random displacement method (RDM) in the ADPIC model framework
Ermak, D.L.; Nasstrom, J.S.; Taylor, A.G.
1995-06-01
The objective of this work was to implement a 3-D Lagrangian stochastic (also called random walk or Monte Carlo) diffusion method in the framework of the operational ADPIC (Atmospheric Diffusion Particle-In-Cell) code. The Random Displacement Method, RDM, presented here and implemented in the ADPIC code, calculates atmospheric dispersion in a purely Lagrangian, grid-independent manner. Some of the benefits of this approach compared to the previously used "particle-in-cell, gradient diffusion" method are (a) a sub-grid diffusion approximation is no longer needed, (b) numerical accuracy of the diffusion calculation is improved because particle displacement does not depend on the resolution of the Eulerian grid used to calculate species concentration, and (c) adaptation to other grid structures for the input wind field does not affect the diffusion calculation. In addition, the RDM incorporates a unique and accurate treatment of particle interaction with the surface.
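The core of a random displacement method is a Gaussian kick per particle per time step. A minimal 1-D sketch, assuming constant diffusivity (a spatially varying diffusivity would also need a deterministic drift correction term):

```python
import numpy as np

def rdm_step(x, K, dt, rng):
    """One random-displacement step: each particle takes a Gaussian
    kick with variance 2*K*dt, the Lagrangian analogue of Fickian
    diffusion with constant diffusivity K."""
    return x + np.sqrt(2.0 * K * dt) * rng.standard_normal(x.size)

rng = np.random.default_rng(0)
x = np.zeros(200000)               # all particles released at the origin
K, dt, n_steps = 1.0, 0.01, 100    # diffusivity, time step, step count
for _ in range(n_steps):
    x = rdm_step(x, K, dt, rng)
# after t = n_steps*dt = 1, the ensemble variance should approach 2*K*t = 2
```

Because the displacement is drawn per particle, the calculation is grid-independent: concentration can be diagnosed on any mesh afterwards.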
Randomized gradient-free method for multiagent optimization over time-varying networks.
Yuan, Deming; Ho, Daniel W C
2015-06-01
In this brief, we consider the multiagent optimization over a network where multiple agents try to minimize a sum of nonsmooth but Lipschitz continuous functions, subject to a convex state constraint set. The underlying network topology is modeled as time varying. We propose a randomized derivative-free method, where in each update, the random gradient-free oracles are utilized instead of the subgradients (SGs). In contrast to the existing work, we do not require that agents are able to compute the SGs of their objective functions. We establish the convergence of the method to an approximate solution of the multiagent optimization problem within the error level depending on the smoothing parameter and the Lipschitz constant of each agent's objective function. Finally, a numerical example is provided to demonstrate the effectiveness of the method. PMID:25099738
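The random gradient-free oracle is a smoothed finite difference along a random Gaussian direction. A single-agent sketch under an illustrative nonsmooth objective and box constraint (the paper's multiagent setting adds network averaging on top of this update):

```python
import numpy as np

def gradient_free_oracle(f, x, mu, rng):
    """Two-point gradient-free oracle (sketch): finite difference of f
    along a random Gaussian direction u; its mean approximates a
    gradient of the smoothed objective, with no subgradients needed."""
    u = rng.standard_normal(x.size)
    return (f(x + mu * u) - f(x)) / mu * u

# illustrative agent objective: nonsmooth f(x) = |x0 - 1| + |x1 + 2|,
# minimized over the box [-3, 3]^2 with diminishing steps
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
rng = np.random.default_rng(0)
x = np.zeros(2)
for k in range(5000):
    g = gradient_free_oracle(f, x, mu=1e-4, rng=rng)
    x = np.clip(x - 0.05 / np.sqrt(k + 1) * g, -3.0, 3.0)  # project onto box
```

The error level at convergence depends on the smoothing parameter mu, matching the flavor of the bound stated in the abstract.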
Western, J. Sylvia; Dicksit, Daniel Devaprakash
2016-01-01
Aim of this Study: The aim was to evaluate the efficiency of different sterilization methods on extracted human teeth (EHT) by a systematic review of in vitro randomized controlled trials. Methodology: An extensive electronic database literature search concerning the sterilization of EHT was conducted. The search terms used were “human teeth, sterilization, disinfection, randomized controlled trials, and infection control.” Randomized controlled trials comparing the efficiency of different methods of sterilization of EHT were all included in this systematic review. Results: Out of 1618 articles obtained, eight were selected for this systematic review. The sterilization methods reviewed were autoclaving, 10% formalin, 5.25% sodium hypochlorite, 3% hydrogen peroxide, 2% glutaraldehyde, 0.1% thymol, and boiling at 100°C. Data were extracted from the selected individual studies and their findings were summarized. Conclusion: Autoclaving and 10% formalin can be considered 100% efficient and reliable methods, whereas 5.25% sodium hypochlorite, 3% hydrogen peroxide, 2% glutaraldehyde, 0.1% thymol, and boiling at 100°C were inefficient and unreliable methods of sterilizing EHT. PMID:27563183
Chaussé, Pierre; Liu, Jin; Luta, George
2016-01-01
Covariate adjustment methods are frequently used when baseline covariate information is available for randomized controlled trials. Using a simulation study, we compared the analysis of covariance (ANCOVA) with three nonparametric covariate adjustment methods with respect to point and interval estimation for the difference between means. The three alternative methods were based on important members of the generalized empirical likelihood (GEL) family, specifically on the empirical likelihood (EL) method, the exponential tilting (ET) method, and the continuous updated estimator (CUE) method. Two criteria were considered for the comparison of the four statistical methods: the root mean squared error and the empirical coverage of the nominal 95% confidence intervals for the difference between means. Based on the results of the simulation study, for sensitivity analysis purposes, we recommend the use of ANCOVA (with robust standard errors when heteroscedasticity is present) together with the CUE-based covariate adjustment method. PMID:27077870
A new Lagrangian random choice method for steady two-dimensional supersonic/hypersonic flow
NASA Technical Reports Server (NTRS)
Loh, C. Y.; Hui, W. H.
1991-01-01
Glimm's (1965) random choice method has been successfully applied to compute steady two-dimensional supersonic/hypersonic flow using a new Lagrangian formulation. The method is easy to program, fast to execute, yet it is very accurate and robust. It requires no grid generation, resolves slipline and shock discontinuities crisply, can handle boundary conditions most easily, and is applicable to hypersonic as well as supersonic flow. It represents an accurate and fast alternative to the existing Eulerian methods. Many computed examples are given.
NASA Technical Reports Server (NTRS)
Grosse, Ralf
1990-01-01
Propagation of sound through the turbulent atmosphere is a statistical problem. The randomness of the refractive index field causes sound pressure fluctuations. Although no general theory to predict sound pressure statistics from given refractive index statistics exists, there are several approximate solutions to the problem. The most common approximation is the parabolic equation method. Results obtained by this method are restricted to small refractive index fluctuations and to small wavelengths. While the first condition is generally met in the atmosphere, it is desirable to overcome the second. A generalization of the parabolic equation method with respect to the small-wavelength restriction is presented.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
NASA Astrophysics Data System (ADS)
Liao, Qifeng; Lin, Guang
2016-07-01
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network
NASA Technical Reports Server (NTRS)
Kuhn, D. Richard; Kacker, Raghu; Lei, Yu
2010-01-01
This study compared random and t-way combinatorial inputs of a network simulator, to determine if these two approaches produce significantly different deadlock detection for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or to determine modifications that could be made by attackers to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no additional information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in the deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than using random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interactions. The paper reviews explanations for these results and implications for modeling and simulation.
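The notion of t-way coverage can be made concrete in a few lines of stdlib Python. The 5-binary-parameter model below is illustrative, not the paper's simulator configuration; it counts how many random tests are needed before every 2-way interaction has been exercised:

```python
from itertools import combinations, product
import random

def t_way_interactions(n_params, values, t):
    """Every t-way interaction: t parameter positions plus one concrete
    value for each chosen position."""
    return {(pos, vals)
            for pos in combinations(range(n_params), t)
            for vals in product(values, repeat=t)}

def covered(tests, t):
    """Interactions exercised by a test suite (each test is a full
    tuple of parameter values)."""
    return {(pos, tuple(test[i] for i in pos))
            for test in tests
            for pos in combinations(range(len(test)), t)}

# draw random tests until all 2-way interactions of 5 binary
# parameters are covered, mimicking the paper's comparison
rng = random.Random(0)
all_pairs = t_way_interactions(5, (0, 1), 2)
tests = []
while covered(tests, 2) != all_pairs:
    tests.append(tuple(rng.randrange(2) for _ in range(5)))
```

A covering array achieves the same coverage with far fewer tests than the random suite typically needs, which is the efficiency gap the study quantifies.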
NASA Astrophysics Data System (ADS)
Cao, Chao; Chen, Ru-jun
2009-10-01
In this paper, we present a new copyright information hiding method for digital images in Moiré fringe formats. The copyright information is embedded into the protected image and the detecting image based on a Fresnel phase matrix. First, using the Fresnel diffraction transform, the random phase matrix of the copyright information is generated. Then, according to the Moiré fringe principle, the protected image and the detecting image are modulated respectively based on the random phase matrix, and the copyright information is embedded into them. When the protected image and the detecting image are overlapped, the copyright information reappears. Experimental results show that our method has good concealment performance and is a new way to achieve copyright protection.
Research on text encryption and hiding method with double-random phase-encoding
NASA Astrophysics Data System (ADS)
Xu, Hongsheng; Sang, Nong
2013-10-01
By using optical image processing techniques, a novel text encryption and hiding method based on the double-random phase-encoding technique is proposed in this paper. First, the secret message is transformed into a 2-dimensional array. The higher bits of the elements in the array are filled with the bit stream of the secret text, while the lower bits store specific values. Then, the transformed array is encoded by the double random phase encoding technique. Finally, the encoded array is embedded in a public host image to obtain the image embedded with hidden text. The performance of the proposed technique is tested via analytical modeling and test data streams. Experimental results show that the secret text can be recovered accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.
Chosen-plaintext attack on double-random-phase-encoding-based image hiding method
NASA Astrophysics Data System (ADS)
Xu, Hongsheng; Li, Guirong; Zhu, Xianchen
2015-12-01
By using optical image processing techniques, a novel text encryption and hiding method based on the double-random phase-encoding technique is proposed in this paper. First, the secret message is transformed into a 2-dimensional array. The higher bits of the elements in the array are filled with the bit stream of the secret text, while the lower bits store specific values. Then, the transformed array is encoded by the double random phase encoding technique. Finally, the encoded array is embedded in a public host image to obtain the image embedded with hidden text. The performance of the proposed technique is tested via analytical modeling and test data streams. Experimental results show that the secret text can be recovered accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.
A numerical method for reducing the random noise in a two-dimensional waveform
Levy, A.J.
1991-01-23
This invention comprises a method for reducing random noise in a two-dimensional waveform having an irregular curvature, which includes the steps of selecting a plurality of points initially positioned at preselected locations on the waveform. For each selected point, the straight line connecting it to the midpoint between its neighboring points is found. A new location for the point is calculated to lie on this straight line, a fraction of the distance between the initial location of the point and the midpoint. This process is repeated for each point positioned on the waveform. After a single iteration of the method is completed, the entire process is repeated a predetermined number of times to identify final calculated locations for the plurality of points selected. The final calculated locations of the points are then connected to form a waveform relatively free of random noise and having a substantially smooth curvature.
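A vectorized sketch of the iteration (the patent moves points one at a time; updating all interior points simultaneously, as here, is a simplification, and the fraction and iteration count are illustrative):

```python
import numpy as np

def smooth_waveform(y, fraction=0.5, n_iter=10):
    """Midpoint-relaxation smoothing: each interior point moves a fixed
    fraction of the way toward the midpoint of its two neighbours, and
    the sweep is repeated n_iter times."""
    y = np.asarray(y, dtype=float).copy()
    for _ in range(n_iter):
        mid = 0.5 * (y[:-2] + y[2:])           # neighbour midpoints
        y[1:-1] += fraction * (mid - y[1:-1])  # move toward the midpoint
    return y
```

On a noisy sine wave the iteration strongly attenuates the high-frequency noise while only slightly flattening the slowly varying curve.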
Key management of the double random-phase-encoding method using public-key encryption
NASA Astrophysics Data System (ADS)
Saini, Nirmala; Sinha, Aloka
2010-03-01
Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image has been encrypted by using the double random-phase-encoding method using the extended fractional Fourier transform. The key of the encryption process has been encoded by using the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The encoded key has then been transmitted to the receiver side along with the encrypted image. In the decryption process, first the encoded key has been decrypted using the secret key, and then the encrypted image has been decrypted by using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method because the problem associated with the transmission of the key has been eliminated by using public-key encryption. Computer simulation has been carried out to validate the proposed technique.
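The key-management idea reduces to encrypting the DRPE key parameters with the receiver's RSA public key. The textbook-RSA sketch below uses deliberately tiny primes and an integer stand-in for the encoded key, so it is illustrative only; a real deployment needs large primes and padding such as OAEP:

```python
# textbook RSA with tiny illustrative parameters
p, q, e = 61, 53, 17
n = p * q                        # public modulus
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)              # private exponent (Python 3.8+ modular inverse)

drpe_key = 1234                  # stand-in for an encoded DRPE key parameter
cipher = pow(drpe_key, e, n)     # sender: encrypt with the public key (e, n)
plain = pow(cipher, d, n)        # receiver: recover the DRPE key with d
```

Only the encrypted key travels with the image, so the key-distribution weakness of plain DRPE is removed.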
Modulation transfer function of a lens measured with a random target method.
Levy, E; Peles, D; Opher-Lipson, M; Lipson, S G
1999-02-01
We measured the modulation transfer function (MTF) of a lens in the visible region using a random test target generated on a computer screen. This is a simple method to determine the entire MTF curve in one measurement. The lens was obscured by several masks so that the measurements could be compared with the theoretically calculated MTF. Excellent agreement was obtained. Measurement noise was reduced by use of a large number of targets generated on the screen. PMID:18305663
Thermodynamic method for generating random stress distributions on an earthquake fault
Barall, Michael; Harris, Ruth A.
2012-01-01
This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
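The final construction, drawing a Gaussian random field with a prescribed power spectral density, can be sketched in 1-D by shaping complex white noise in the Fourier domain. The power-law spectrum below is a placeholder for the formula the report derives from earthquake scaling relations:

```python
import numpy as np

def random_field_from_psd(n, psd, rng):
    """Zero-mean 1-D Gaussian random field whose power spectral density
    follows psd(k): shape complex white noise in the Fourier domain and
    transform back (sketch; the report works on a 2-D fault surface)."""
    k = np.fft.rfftfreq(n)
    amp = np.sqrt(psd(k))
    amp[0] = 0.0                         # drop the mean (k = 0) component
    noise = rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)
    return np.fft.irfft(amp * noise, n=n)

rng = np.random.default_rng(0)
stress = random_field_from_psd(4096, lambda k: 1.0 / (k + 1e-3), rng)
```

Because the spectrum is specified directly, the equipartition divergence mentioned above never enters: the chosen psd already rolls off at high wavenumber.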
Esserman, Denise; Allore, Heather G.; Travison, Thomas G.
2016-01-01
Cluster-randomized clinical trials (CRT) are trials in which the unit of randomization is not a participant but a group (e.g. healthcare systems or community centers). They are suitable when the intervention applies naturally to the cluster (e.g. healthcare policy); when lack of independence among participants may occur (e.g. nursing home hygiene); or when it is most ethical to apply an intervention to all within a group (e.g. school-level immunization). Because participants in the same cluster receive the same intervention, CRT may approximate clinical practice, and may produce generalizable findings. However, when not properly designed or interpreted, CRT may produce biased results. CRT designs have features that add complexity to statistical estimation and inference. Chief among these is the cluster-level correlation in response measurements induced by the randomization. A critical consideration is the experimental unit of inference; often it is desirable to consider intervention effects at the level of the individual rather than the cluster. Finally, given that the number of clusters available may be limited, simple forms of randomization may not achieve balance between intervention and control arms at either the cluster- or participant-level. In non-clustered clinical trials, balance of key factors may be easier to achieve because the sample can be homogenous by exclusion of participants with multiple chronic conditions (MCC). CRTs, which are often pragmatic, may eschew such restrictions. Failure to account for imbalance may induce bias and reduce validity. This article focuses on the complexities of randomization in the design of CRTs, such as the inclusion of patients with MCC, and imbalances in covariate factors across clusters.
NASA Astrophysics Data System (ADS)
Shrestha, R. K.; Tachikawa, Y.; Takara, K.
2003-04-01
The simulation of a spatial rainfall field based on the non-homogeneous random cascade method disaggregates a regionally averaged rainfall such as the GCM output. The cascade generators are used to disaggregate and produce spatial patterns across the region (Over and Gupta, 1996; Chatchai et al. 2000; Tachikawa et al. 2003). However, the disaggregated data are rarely used to produce discharge with a distributed hydrological model. The hesitation to use disaggregated GCM data in discharge simulation is mainly due to lower reliability in reproducing the spatial pattern and a higher chance of magnitude fluctuation in a few trials of disaggregation. Long-term disaggregation results, which are expected to produce the true spatial pattern, may not be convenient for practical discharge simulation. A modified method is tested by keeping the volume balanced and forcing the location of cascade generators on the basis of the spatial correlation of the rainfall field with respect to surrounding regions. In this method, a reference matrix is prepared, which is calculated for every target grid by summing the multiplication of rainfall magnitude and the spatial correlation coefficient of the respective reference grids. The reference matrix is used to adjust the location of the random generator in two ways, hierarchically and statistically, so this method is designated the Hierarchical and Statistical Adjustment (HSA) method. The HSA method preserves the magnitude of the random cascade generators but modifies the location. Unlike the previous non-homogeneous random cascade method, this method produced similar spatial patterns to those of the ground truth in every realization, a clear indication of the improved reliability of the disaggregation method from coarse GCM output to the finer resolution demanded by the hydrological model. The forced volume balance may be justified from the engineering aspect to maintain the same input quantity of rainfall in a watershed for hydrologic simulation purposes. The downscaled data
Zhu, Zhiwei; Zhou, Xiaoqin; Luo, Dan; Liu, Qiang
2013-11-18
In this paper, a novel pseudo-random diamond turning (PRDT) method is proposed for the fabrication of freeform optics with scattering homogenization by means of actively eliminating the inherent periodicity of the residual tool marks. The strategy for accurately determining the spiral toolpath with pseudo-random vibration modulation is explained in detail. A spatial geometric calculation method is employed to determine the toolpath in consideration of cutting tool geometries, and an iteration algorithm is further introduced to enhance the computation efficiency. Moreover, a novel two-degree-of-freedom fast tool servo (2-DOF FTS) system with decoupled motions is developed to implement the PRDT method. Taking advantage of a novel surface topography generation algorithm, theoretical surfaces generated using the calculated toolpaths are obtained, and the accuracy of the toolpath generation and the efficiency of the PRDT method in breaking up the inherent periodicity of tool marks are examined. A series of preliminary cutting experiments is carried out to verify the efficiency of the proposed PRDT method; the experimental results are in good agreement with those obtained by numerical simulation. In addition, the results of scattering experiments indicate that the proposed PRDT method is a very promising technique for achieving the scattering homogenization of machined surfaces with complicated shapes. PMID:24514359
2014-01-01
In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one of the search methods for the critical slip surface is the Genetic Algorithm (GA), while the method to calculate the slope safety factor is Fellenius' slices method. However, GA needs to be validated with more numerical tests, and Fellenius' slices method, like the finite element method, is only approximate. This paper proposes a new way to determine the minimum slope safety factor: computing the safety factor with an analytical solution and searching for the critical slip surface with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method. The Genetic-Traversal Random Method uses random picking to implement mutation. A computer program for automatic search with the Genetic-Traversal Random Method is developed. Comparison with other methods, such as the slope/w software, indicates that the Genetic-Traversal Random Search Method can give a very low safety factor, about half that of the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
Berco, Dan; Tseng, Tseung-Yuen
2015-12-21
This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on a time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single layer ZrO2 device with a double layer ZnO/ZrO2 one, and obtain results which are in good agreement with experimental data.
NASA Astrophysics Data System (ADS)
Berco, Dan; Tseng, Tseung-Yuen
2015-12-01
This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on a time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single layer ZrO2 device with a double layer ZnO/ZrO2 one, and obtain results which are in good agreement with experimental data.
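A toy version of the time-free Metropolis comparison: count accepted uphill (degradation) moves against a Gibbs free-energy barrier at room temperature. The barrier values below are illustrative, not fitted to ZrO2 or ZnO/ZrO2 data:

```python
import math
import random

def degradation_count(barrier_ev, n_iter=100000, kT=0.0259, seed=0):
    """Count Metropolis-accepted uphill (degradation) moves against a
    Gibbs free-energy barrier (eV) at room temperature (kT in eV).
    No time evolution is modelled, so the count is only a relative
    retention metric between device stacks."""
    rng = random.Random(seed)
    p_accept = math.exp(-barrier_ev / kT)    # Metropolis uphill acceptance
    return sum(rng.random() < p_accept for _ in range(n_iter))

single_layer = degradation_count(0.10)   # illustrative barrier, eV
double_layer = degradation_count(0.15)   # higher barrier for the bilayer
```

A lower count of accepted degradation moves indicates the more robust stack, which is the sense in which the bilayer device compares favourably here.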
Quantum Monte Carlo method using phase-free random walks with Slater determinants.
Zhang, Shiwei; Krakauer, Henry
2003-04-01
We develop a quantum Monte Carlo method for many fermions using random walks in the space of Slater determinants. An approximate approach is formulated with a trial wave function |Psi(T)> to control the phase problem. Using a plane-wave basis and nonlocal pseudopotentials, we apply the method to Be, Si, and P atoms and dimers, and to bulk Si supercells. Single-determinant wave functions from density functional theory calculations were used as |Psi(T)> with no additional optimization. The calculated binding energies of dimers and cohesive energy of bulk Si are in excellent agreement with experiments and are comparable to the best existing theoretical results. PMID:12689312
Low-noise multiple watermarks technology based on complex double random phase encoding method
NASA Astrophysics Data System (ADS)
Zheng, Jihong; Lu, Rongwen; Sun, Liujie; Zhuang, Songlin
2010-11-01
Based on the double random phase encoding method (DRPE), watermarking technology may provide a stable and robust way to protect the copyright of printed material. However, due to its linear character, DRPE carries a serious security risk when attacked. In this paper, a complex coding method, in which chaotic encryption based on logistic mapping is added before the DRPE coding, is proposed and simulated. The results confirm that the complex method provides better security protection for the watermark. Furthermore, low-noise multiple watermarking is studied, in which multiple watermarks are embedded into one host print and decrypted individually with the corresponding phase keys. Digital simulation and mathematical analysis show that, for the same total embedding weight factor, multiple watermarking improves the signal-to-noise ratio (SNR) of the output printed image significantly. The complex multiple-watermark method may provide robust, stable, and reliable copyright protection with higher-quality printed images.
A finite element method for the statistics of non-linear random vibration
NASA Astrophysics Data System (ADS)
Langley, R. S.
1985-07-01
The transitional probability density function for the random response of a certain class of non-linear system satisfies the Fokker-Planck-Kolmogorov equation. This paper concerns the numerical solution of the stationary form of this equation, yielding the stationary probability density function of response. The weighted residual statement for the problem is integrated by parts to yield the weak form of the equations, which are then solved by the finite element method. The method is applied to a Duffing oscillator and good agreement is found with the exact result, and the method is compared favourably with a Galerkin solution method given by Bhandari and Sherrer [1]. Also, the method is applied to the ship rolling problem and good agreement is found with an approximate analytical result due to Roberts [2].
Random fields generation on the GPU with the spectral turning bands method
NASA Astrophysics Data System (ADS)
Hunger, L.; Cosenza, B.; Kimeswenger, S.; Fahringer, T.
2014-08-01
Random field (RF) generation algorithms are of paramount importance for many scientific domains, such as astrophysics, geostatistics, computer graphics and many others. Some examples are the generation of initial conditions for cosmological simulations or hydrodynamical turbulence driving. In the latter a new random field is needed every time-step. Current approaches commonly make use of 3D FFT (Fast Fourier Transform) and require the whole generated field to be stored in memory. Moreover, they are limited to regular rectilinear meshes and need an extra processing step to support non-regular meshes. In this paper, we introduce TBARF (Turning BAnd Random Fields), a RF generation algorithm based on the turning band method that is optimized for massively parallel hardware such as GPUs. Our algorithm replaces the 3D FFT with a lower order, one-dimensional FFT followed by a projection step, and is further optimized with loop unrolling and blocking. We show that TBARF can easily generate RF on non-regular (non uniform) meshes and can afford mesh sizes bigger than the available GPU memory by using a streaming, out-of-core approach. TBARF is 2 to 5 times faster than the traditional methods when generating RFs with more than 16M cells. It can also generate RF on non-regular meshes, and has been successfully applied to two real case scenarios: planetary nebulae and cosmological simulations.
NASA Astrophysics Data System (ADS)
Dunne, L. W.; Dunne, J. F.
2009-04-01
An efficient frequency response function (FRF) bounding method is proposed using asymptotic extreme-value theory. The method exploits a small random sample of realised FRFs obtained from nominally identical structures to predict corresponding FRF bounds for a substantially larger batch. This is useful for predicting forced-vibration levels in automotive vehicle bodies when parameters are assumed to vary statistically. Small samples are assumed to come either from Monte Carlo simulation using a vibration model, or via measurements from real structures. The basis of the method is to undertake a hypothesis test and if justified, repeatedly fit inverted Type I asymptotic threshold exceedance models at discrete frequencies, for which the models are not locked to a block size (as in classical extreme-value models). The chosen FRF 'bound' is predicted from the inverse model in the form of the ' m-observational return level', namely the level that will be exceeded on average once in every m structures realised. The method is tested on simulated linear structures, initially to establish its scope and limitations. Initial testing is performed on a sdof system followed by small and medium-sized uncoupled mdof grillages. Testing then continues to: (i) a random acoustically coupled grillage structure; and (ii) a partially random industrial-scale box structure which exhibits similar dynamic characteristics to a small vehicle structure and is analysed in NASTRAN. In both cases, structural and acoustic responses to a single deterministic load are examined. The paper shows that the method is not suitable for very small uncoupled systems but rapidly becomes very appropriate for both uncoupled and coupled mdof structures.
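The "m-observational return level" has a simple closed form once a Type I extreme-value model is fitted. As a hedged sketch, the following uses a plain Gumbel (Type I) model fitted by moments rather than the paper's inverted threshold-exceedance formulation, and a synthetic sample:

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def fit_gumbel_moments(sample):
    """Method-of-moments fit of a Gumbel (Type I) model:
    sigma = s*sqrt(6)/pi, mu = mean - gamma_E*sigma."""
    x = np.asarray(sample)
    sigma = x.std(ddof=1) * np.sqrt(6.0) / np.pi
    mu = x.mean() - EULER_GAMMA * sigma
    return mu, sigma

def return_level(mu, sigma, m):
    """Level exceeded on average once in every m realizations: solves
    F(z) = 1 - 1/m for the Gumbel CDF F(z) = exp(-exp(-(z - mu)/sigma))."""
    return mu - sigma * np.log(-np.log(1.0 - 1.0 / m))

# synthetic stand-in for FRF peak levels over realized structures
rng = np.random.default_rng(0)
sample = rng.gumbel(loc=1.0, scale=0.5, size=100_000)
mu, sigma = fit_gumbel_moments(sample)
z20 = return_level(mu, sigma, 20)   # bound exceeded once per 20 structures
```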
2012-01-01
Background The genus Mycobacterium (M.) comprises highly pathogenic bacteria such as M. tuberculosis as well as environmental opportunistic bacteria called non-tuberculous mycobacteria (NTM). While the incidence of tuberculosis is declining in the developed world, infection rates by NTM are increasing. NTM are ubiquitous and have been isolated from soil, natural water sources, tap water, biofilms, aerosols, dust and sawdust. Lung infections as well as lymphadenitis are most often caused by M. avium subsp. hominissuis (MAH), which is considered to be among the clinically most important NTM. Only a few virulence genes from M. avium have been defined, partly because of difficulties in generating M. avium mutants. More efforts in developing new methods for mutagenesis of M. avium and identification of virulence-associated genes are therefore needed. Results We developed a random mutagenesis method based on illegitimate recombination and integration of a Hygromycin-resistance marker. Screening for mutations possibly affecting virulence was performed by monitoring pH resistance, colony morphology, cytokine induction in infected macrophages and intracellular persistence. Out of 50 randomly chosen Hygromycin-resistant colonies, four proved to be affected in virulence-related traits. The mutated genes were MAV_4334 (nitroreductase family protein), MAV_5106 (phosphoenolpyruvate carboxykinase), MAV_1778 (GTP-binding protein LepA) and MAV_3128 (lysyl-tRNA synthetase LysS). Conclusions We established a random mutagenesis method for MAH that can be easily carried out and combined it with a set of phenotypic screening methods for the identification of virulence-associated mutants. By this method, four new MAH genes were identified that may be involved in virulence. PMID:22966811
NASA Astrophysics Data System (ADS)
Sun, Xu; Yang, Lina; Gao, Lianru; Zhang, Bing; Li, Shanshan; Li, Jun
2015-01-01
Center-oriented hyperspectral image clustering methods have been widely applied to hyperspectral remote sensing image processing; however, the drawbacks are obvious, including the over-simplicity of computing models and underutilized spatial information. In recent years, some studies have been conducted trying to improve this situation. We introduce the artificial bee colony (ABC) and Markov random field (MRF) algorithms to propose an ABC-MRF-cluster model to solve the problems mentioned above. In this model, a typical ABC algorithm framework is adopted, in which cluster centers and the iterated conditional modes algorithm's results are considered as feasible solutions and objective functions, respectively, and MRF is modified to be capable of dealing with the clustering problem. Finally, four datasets and two indices are used to show that the ABC-cluster and ABC-MRF-cluster methods obtain better clustering accuracy than conventional methods. Specifically, the ABC-cluster method is superior when a higher power of spectral discrimination is required, whereas the ABC-MRF-cluster method provides better results as measured by the adjusted Rand index. In experiments on simulated images with different signal-to-noise ratios, ABC-cluster and ABC-MRF-cluster showed good stability.
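The adjusted Rand index used as an evaluation criterion is a standard chance-corrected agreement measure between two clusterings; a minimal reference implementation (not taken from the paper) is:

```python
import numpy as np
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two clusterings: 1.0 means identical
    partitions (up to relabeling), ~0 means chance-level agreement."""
    a = np.asarray(labels_a)
    b = np.asarray(labels_b)
    # contingency table between the two labelings
    ua, ia = np.unique(a, return_inverse=True)
    ub, ib = np.unique(b, return_inverse=True)
    table = np.zeros((ua.size, ub.size), dtype=int)
    np.add.at(table, (ia, ib), 1)
    sum_comb = sum(comb(n, 2) for n in table.ravel())
    sum_rows = sum(comb(n, 2) for n in table.sum(axis=1))
    sum_cols = sum(comb(n, 2) for n in table.sum(axis=0))
    total = comb(a.size, 2)
    expected = sum_rows * sum_cols / total          # chance correction
    max_index = (sum_rows + sum_cols) / 2
    return (sum_comb - expected) / (max_index - expected)

print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # identical partitions -> 1.0
```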
Risk Prediction Modeling of Sequencing Data Using a Forward Random Field Method
Wen, Yalu; He, Zihuai; Li, Ming; Lu, Qing
2016-01-01
With the advance in high-throughput sequencing technology, it is feasible to investigate the role of common and rare variants in disease risk prediction. While the new technology holds great promise to improve disease prediction, the massive amount of data and low frequency of rare variants pose great analytical challenges for risk prediction modeling. In this paper, we develop a forward random field method (FRF) for risk prediction modeling using sequencing data. In FRF, subjects’ phenotypes are treated as stochastic realizations of a random field on a genetic space formed by subjects’ genotypes, and an individual’s phenotype can be predicted by adjacent subjects with similar genotypes. The FRF method allows for multiple similarity measures and candidate genes in the model, and adaptively chooses the optimal similarity measure and disease-associated genes to reflect the underlying disease model. It also avoids the specification of the threshold of rare variants and allows for different directions and magnitudes of genetic effects. Through simulations, we demonstrate that the FRF method attains accuracy higher than or comparable to that of commonly used support vector machine-based methods under various disease models. We further illustrate the FRF method with an application to the sequencing data obtained from the Dallas Heart Study. PMID:26892725
A Two-Stage Random Forest-Based Pathway Analysis Method
Chung, Ren-Hua; Chen, Ying-Erh
2012-01-01
Pathway analysis provides a powerful approach for identifying the joint effect of genes grouped into biologically-based pathways on disease. Pathway analysis is also an attractive approach for a secondary analysis of genome-wide association study (GWAS) data that may still yield new results from these valuable datasets. Most current pathway analysis methods focus on testing the cumulative main effects of genes in a pathway. However, for complex diseases, gene-gene interactions are expected to play a critical role in disease etiology. We extended a random forest-based method for pathway analysis by incorporating a two-stage design. We used simulations to verify that the proposed method has the correct type I error rates. We also used simulations to show that the method is more powerful than the original random forest-based pathway approach and the set-based test implemented in PLINK in the presence of gene-gene interactions. Finally, we applied the method to a breast cancer GWAS dataset and a lung cancer GWAS dataset, and interesting pathways were identified that have implications for breast and lung cancers. PMID:22586488
Selamet Tierney, Elif Seda; Levine, Jami C.; Chen, Shan; Bradley, Timothy J.; Pearson, Gail D.; Colan, Steven D.; Sleeper, Lynn A.; Campbell, M. Jay; Cohen, Meryl S.; Backer, Julie De; Guey, Lin T.; Heydarian, Haleh; Lai, Wyman W.; Lewin, Mark B.; Marcus, Edward; Mart, Christopher R.; Pignatelli, Ricardo H.; Printz, Beth F.; Sharkey, Angela M.; Shirali, Girish S.; Srivastava, Shubhika; Lacro, Ronald V.
2013-01-01
Background The Pediatric Heart Network is conducting a large international randomized trial to compare aortic root growth and other cardiovascular outcomes in 608 subjects with Marfan syndrome randomized to receive atenolol or losartan for 3 years. The authors report here the echocardiographic methods and baseline echocardiographic characteristics of the randomized subjects, describe the interobserver agreement of aortic measurements, and identify factors influencing agreement. Methods Individuals aged 6 months to 25 years who met the original Ghent criteria and had body surface area–adjusted maximum aortic root diameter (ROOTmax) Z scores > 3 were eligible for inclusion. The primary outcome measure for the trial is the change over time in ROOTmax Z score. A detailed echocardiographic protocol was established and implemented across 22 centers, with an extensive training and quality review process. Results Interobserver agreement for the aortic measurements was excellent, with intraclass correlation coefficients ranging from 0.921 to 0.989. Lower interobserver percentage error in ROOTmax measurements was independently associated (model R2 = 0.15) with better image quality (P = .002) and later study reading date (P < .001). Echocardiographic characteristics of the randomized subjects did not differ by treatment arm. Subjects with ROOTmax Z scores ≥ 4.5 (36%) were more likely to have mitral valve prolapse and dilation of the main pulmonary artery and left ventricle, but there were no differences in aortic regurgitation, aortic stiffness indices, mitral regurgitation, or left ventricular function compared with subjects with ROOTmax Z scores < 4.5. Conclusions The echocardiographic methodology, training, and quality review process resulted in a robust evaluation of aortic root dimensions, with excellent reproducibility. PMID:23582510
A new method to model x-ray scattering from random rough surfaces
NASA Astrophysics Data System (ADS)
Zhao, Ping; Van Speybroeck, Leon P.
2003-03-01
This paper presents a method for modeling the X-ray scattering from random rough surfaces. An actual rough surface is (incompletely) described by its Power Spectral Density (PSD). For a given PSD, model surfaces with the same roughness as the actual surface are constructed by preserving the PSD amplitudes and assigning a random phase to each spectral component. Rays representing the incident wave are reflected from the model surface and projected onto a flat plane, which approximates the model surface, as outgoing rays, and corrected for phase delays. The projected outgoing rays are then corrected for wave densities and redistributed onto a uniform grid where the model surface is constructed. The scattering is then calculated by taking the Fast Fourier Transform (FFT) of the resulting distribution. This method is generally applicable and is not limited to small scattering angles. It provides the correct asymmetrical scattering profile for grazing incident radiation. We apply this method to the mirrors of the Chandra X-ray Observatory and show the results. We also expect this method to be useful for other X-ray telescope missions.
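The surface-construction step described above — keep the PSD amplitudes, randomize the phases, invert the transform — is straightforward to sketch in one dimension. The power-law PSD below is an assumed stand-in for a measured mirror PSD:

```python
import numpy as np

def surface_from_psd(psd, seed=0):
    """Construct a model surface realization that preserves the given PSD
    amplitudes while assigning a uniformly random phase to each spectral
    component (1-D sketch of the paper's surface-construction step)."""
    rng = np.random.default_rng(seed)
    amp = np.sqrt(psd)                        # spectral amplitudes from the PSD
    phase = rng.uniform(0, 2 * np.pi, psd.size)
    spec = amp * np.exp(1j * phase)
    spec[0] = 0.0                             # zero-mean surface
    return np.fft.irfft(spec)

# assumed power-law PSD, typical of polished mirror surfaces
freqs = np.fft.rfftfreq(1024)
psd = np.zeros_like(freqs)
psd[1:] = freqs[1:] ** -2.0
z = surface_from_psd(psd)
# by construction, the realization's spectral amplitudes match the input PSD
recovered = np.abs(np.fft.rfft(z))
```

Every call with a different seed yields a new surface with the same statistical roughness, which is exactly what a scattering Monte Carlo over model surfaces needs.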
Yamashita, Yasunobu; Ueda, Kazuki; Kawaji, Yuki; Tamura, Takashi; Itonaga, Masahiro; Yoshida, Takeichi; Maeda, Hiroki; Magari, Hirohito; Maekita, Takao; Iguchi, Mikitaka; Tamai, Hideyuki; Ichinose, Masao; Kato, Jun
2016-01-01
Background/Aims Transpapillary forceps biopsy is an effective diagnostic technique in patients with biliary stricture. This prospective study aimed to determine the usefulness of the wire-grasping method as a new technique for forceps biopsy. Methods Consecutive patients with biliary stricture or irregularities of the bile duct wall were randomly allocated to either the direct or wire-grasping method group. In the wire-grasping method, the forceps, in the duodenum, grasp a guide-wire placed into the bile duct beforehand, and the forceps are then pushed through the papilla without endoscopic sphincterotomy. In the direct method, forceps are directly pushed into the bile duct alongside a guide-wire. The primary endpoint was the success rate of obtaining specimens suitable for adequate pathological examination. Results In total, 32 patients were enrolled, and 28 (14 in each group) were eligible for analysis. The success rate was significantly higher using the wire-grasping method than the direct method (100% vs 50%, p=0.016). Sensitivity and accuracy for the diagnosis of cancer were comparable in patients with the successful procurement of biopsy specimens between the two methods (91% vs 83% and 93% vs 86%, respectively). Conclusions The wire-grasping method is useful for diagnosing patients with biliary stricture or irregularities of the bile duct wall. PMID:27021502
NASA Astrophysics Data System (ADS)
Pieczynska-Kozlowska, Joanna
2014-05-01
One geotechnical problem in the area of Wroclaw is an anthropogenic embankment layer extending to a depth of 4-5 m, arising as a result of historical incidents. In such a case, estimating the bearing capacity of a strip footing can be difficult. The standard solution is to use a deep foundation or foundation soil replacement; however, both methods generate significant costs. In the present paper the authors focused their attention on the influence of anthropogenic embankment variability on bearing capacity. Soil parameters were defined on the basis of CPT tests and modeled as 2D anisotropic random fields, and the bearing capacity was evaluated using deterministic finite element analyses. Many repetitions over different realizations of the random fields lead to a stable expected value of bearing capacity. The algorithm used to estimate the bearing capacity of the strip footing was the random finite element method (e.g. [1]). In the traditional approach to bearing capacity, the formula proposed by [2] is used: qf = c'Nc + qNq + 0.5γBNγ (1), where qf is the ultimate bearing stress, c' is the cohesion, q is the overburden load due to foundation embedment, γ is the soil unit weight, B is the footing width, and Nc, Nq and Nγ are the bearing capacity factors. The finite element evaluation of the bearing capacity of a strip footing incorporates five parameters: Young's modulus (E), Poisson's ratio (ν), dilation angle (ψ), cohesion (c), and friction angle (φ). In the present study E, ν and ψ are held constant while c and φ are randomized. Although Young's modulus does not affect the bearing capacity, it governs the initial elastic response of the soil. Plastic stress redistribution is accomplished using a viscoplastic algorithm merged with an elastic perfectly plastic (Mohr-Coulomb) failure criterion. In this paper a typical finite element mesh was assumed, with 8-node elements arranged in 50 columns and 20 rows. Footings width B
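For illustration, the Monte Carlo idea — randomize c and φ, evaluate the bearing capacity many times, and examine the resulting distribution — can be sketched with the closed-form formula (1) in place of the random finite element model. The bearing-capacity factor expressions (Reissner/Prandtl Nq and Nc, Vesic's Nγ) and the lognormal/normal parameter distributions are assumptions for this sketch, not the paper's calibrated random fields:

```python
import numpy as np

def bearing_capacity(c, phi_deg, gamma=18.0, B=1.0, q=0.0):
    """Closed-form strip-footing bearing capacity
    q_f = c*Nc + q*Nq + 0.5*gamma*B*Ngamma with classical factors."""
    phi = np.radians(phi_deg)
    Nq = np.exp(np.pi * np.tan(phi)) * np.tan(np.pi / 4 + phi / 2) ** 2
    Nc = (Nq - 1) / np.tan(phi)
    Ng = 2 * (Nq + 1) * np.tan(phi)      # Vesic's N_gamma (one common choice)
    return c * Nc + q * Nq + 0.5 * gamma * B * Ng

# Monte Carlo over randomized c and phi (the paper randomizes these two
# while holding E, nu and psi fixed); the distributions are assumed here
rng = np.random.default_rng(1)
n = 20_000
c = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)   # cohesion, kPa
phi = rng.normal(loc=28.0, scale=2.0, size=n)             # friction angle, deg
qf = bearing_capacity(c, phi)
print(f"mean q_f = {qf.mean():.0f} kPa, 5th percentile = {np.percentile(qf, 5):.0f} kPa")
```

The spread between the mean and a low percentile is the quantity of interest: soil variability pushes the characteristic (low-fractile) capacity well below the deterministic value computed from mean parameters.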
NASA Astrophysics Data System (ADS)
da Silva, Roberto; Lamb, Luis C.; Lima, Eder C.; Dupont, Jairton
2012-01-01
We propose a foundational model to explain properties of the retention time distribution of particle transport in a random medium. These particles are captured and released by distributed theoretical plates in a random medium as in standard chromatography. Our approach differs from current models, since it is not based on simple random walks, but on a directed and coordinated movement of the particles, whose retention time dispersion in the column is due to the imprisonment time the particles spend in the theoretical plates. Given a pair of fundamental parameters (λc, λe), the capture and release probabilities, we use simple combinatorial methods to predict the probability distribution of the retention times. We have analyzed several distributions typically used in chromatographic peak fits. We show that a log-normal distribution with only two parameters describes with high accuracy chromatographic distributions typically used in experiments. This distribution shows a better fit than distributions with a larger number of parameters, possibly allowing for better control of experimental data.
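Fitting the two-parameter log-normal to retention times reduces to computing the mean and standard deviation of log t (the maximum-likelihood estimates). A sketch on synthetic data, with assumed parameter values:

```python
import numpy as np

def fit_lognormal(times):
    """MLE fit of a two-parameter log-normal to retention times:
    mu and sigma are the mean and std of log(t)."""
    logs = np.log(np.asarray(times))
    return logs.mean(), logs.std(ddof=0)

def lognormal_pdf(t, mu, sigma):
    """Two-parameter log-normal density, the peak model suggested above."""
    return np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) / (
        t * sigma * np.sqrt(2 * np.pi))

# synthetic "chromatographic peak": log-normally distributed retention times
rng = np.random.default_rng(42)
samples = rng.lognormal(mean=2.0, sigma=0.25, size=50_000)
mu, sigma = fit_lognormal(samples)
```

With only two parameters, the fitted pdf can be overlaid directly on an experimental peak; the tail asymmetry characteristic of chromatograms comes for free from the log-normal form.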
NASA Astrophysics Data System (ADS)
Nikšić, T.; Kralj, N.; Tutiš, T.; Vretenar, D.; Ring, P.
2013-10-01
A new implementation of the finite amplitude method (FAM) for the solution of the relativistic quasiparticle random-phase approximation (RQRPA) is presented, based on the relativistic Hartree-Bogoliubov (RHB) model for deformed nuclei. The numerical accuracy and stability of the FAM-RQRPA is tested in a calculation of the monopole response of 22O. As an illustrative example, the model is applied to a study of the evolution of monopole strength in the chain of Sm isotopes, including the splitting of the giant monopole resonance in axially deformed systems.
New high resolution Random Telegraph Noise (RTN) characterization method for resistive RAM
NASA Astrophysics Data System (ADS)
Maestro, M.; Diaz, J.; Crespo-Yepes, A.; Gonzalez, M. B.; Martin-Martinez, J.; Rodriguez, R.; Nafria, M.; Campabadal, F.; Aymerich, X.
2016-01-01
Random Telegraph Noise (RTN) is one of the main reliability problems of resistive switching-based memories. To understand the physics behind RTN, a complete and accurate RTN characterization is required. The standard equipment used to analyse RTN has a typical time resolution of ∼2 ms which prevents evaluating fast phenomena. In this work, a new RTN measurement procedure, which increases the measurement time resolution to 2 μs, is proposed. The experimental set-up, together with the recently proposed Weighted Time Lag (W-LT) method for the analysis of RTN signals, allows obtaining a more detailed and precise information about the RTN phenomenon.
Battiste, R.L.; Corum, J.M.; Ren, W.; Ruggles, M.B.
1999-06-01
This report provides recommended minimum test requirements and suggested test methods for establishing the durability properties and characteristics of candidate random-glass-fiber polymeric composites for automotive structural applications. The recommendations and suggestions are based on experience and results developed at Oak Ridge National Laboratory (ORNL) under a US Department of Energy Advanced Automotive Materials project entitled ''Durability of Lightweight Composite Structures,'' which is closely coordinated with the Automotive Composites Consortium. The report is intended as an aid to suppliers offering new structural composites for automotive applications and to testing organizations that are called on to characterize the composites.
Statistical Evaluation and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, A. M.; McGhee, D. S.
2003-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This Technical Publication examines the cumulative distribution percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica to calculate the combined value corresponding to any desired percentile is then presented along with a curve fit to this value. Another Excel macro that calculates the combination using Monte Carlo simulation is shown. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Additionally, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can substantially lower the design loading without losing any of the identified structural reliability.
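The Monte Carlo combination described above can be sketched directly: sample the Gaussian random load, add a sine sampled at a uniformly random phase, and read off the desired percentile. The 3-sigma level, amplitudes, and comparison rules below are illustrative assumptions, not values from the publication:

```python
import numpy as np

def combined_load_percentile(sigma_random, amp_harmonic, percentile=99.73,
                             n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of the combined-load percentile when a Gaussian
    random load is superimposed on a harmonic load sampled at a uniformly
    random phase (a sketch of the Monte Carlo macro's approach)."""
    rng = np.random.default_rng(seed)
    random_part = rng.normal(0.0, sigma_random, n_samples)
    harmonic_part = amp_harmonic * np.sin(rng.uniform(0, 2 * np.pi, n_samples))
    return np.percentile(random_part + harmonic_part, percentile)

# compare against two traditional deterministic combination rules
sigma, amp = 1.0, 2.0
mc = combined_load_percentile(sigma, amp)
peak_sum = 3 * sigma + amp                      # "3-sigma plus peak sine" rule
srss = np.sqrt((3 * sigma) ** 2 + amp ** 2)     # square-root-sum-of-squares rule
print(mc, peak_sum, srss)
```

At the parameter values used here, the consistent-percentile value falls between the two traditional rules: peak-sum is conservative because the random peak and the sine peak rarely coincide, while SRSS carries no fixed percentile at all.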
Statistical Comparison and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; McGhee, David S.
2004-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This paper examines the cumulative distribution function (CDF) percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica is then used to calculate the combined value corresponding to any desired percentile along with a curve fit to this value. Another Excel macro is used to calculate the combination using a Monte Carlo simulation. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Also, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can lower the design loading substantially without losing any of the identified structural reliability.
Statistical Evaluation and Improvement of Methods for Combining Random and Harmonic Loads
NASA Astrophysics Data System (ADS)
Brown, A. M.; McGhee, D. S.
2003-02-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This Technical Publication examines the cumulative distribution percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica to calculate the combined value corresponding to any desired percentile is then presented along with a curve fit to this value. Another Excel macro that calculates the combination using Monte Carlo simulation is shown. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Additionally, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can substantially lower the design loading without losing any of the identified structural reliability.
A Novel Hepatocellular Carcinoma Image Classification Method Based on Voting Ranking Random Forests
Xia, Bingbing; Jiang, Huiyan; Liu, Huiling; Yi, Dehui
2016-01-01
This paper proposed a novel voting ranking random forests (VRRF) method for solving the hepatocellular carcinoma (HCC) image classification problem. First, in the preprocessing stage, bilateral filtering was applied to hematoxylin-eosin (HE) pathological images. Next, the bilateral-filtered images were segmented to obtain three different kinds of images: the single binary cell image, the single minimum exterior rectangle cell image, and the single cell image of size n×n. After that, atypia features were defined, including auxiliary circularity, amendment circularity, and cell symmetry. In addition, shape features, fractal dimension features, and several gray features were extracted, such as the Local Binary Patterns (LBP) feature, the Gray Level Cooccurrence Matrix (GLCM) feature, and Tamura features. Finally, an HCC image classification model based on random forests was proposed and further optimized by the voting ranking method. The experimental results showed that the proposed features combined with the VRRF method perform well on the HCC image classification problem. PMID:27293477
Is a vegetarian diet adequate for children?
Hackett, A; Nathan, I; Burgess, L
1998-01-01
The number of people who avoid eating meat is growing, especially among young people. Benefits to health from a vegetarian diet have been reported in adults, but it is not clear to what extent these benefits are due to diet or to other aspects of lifestyle. In children, concern has been expressed about the adequacy of vegetarian diets, especially with regard to growth. The risks/benefits seem to be related to the degree of restriction of the diet; anaemia is probably both the main and the most serious risk, but this also applies to omnivores. Vegan diets are more likely to be associated with malnutrition, especially if the diets are the result of authoritarian dogma. Overall, lacto-ovo-vegetarian children consume diets closer to recommendations than omnivores, and their pre-pubertal growth is at least as good. The simplest strategy when becoming vegetarian may involve reliance on vegetarian convenience foods, which are not necessarily superior in nutritional composition. The vegetarian sector of the food industry could do more to produce foods closer to recommendations. Vegetarian diets can be, but are not necessarily, adequate for children, provided vigilance is maintained, particularly to ensure variety. Identical comments apply to omnivorous diets. Three threats to the diet of children are too much reliance on convenience foods, lack of variety and lack of exercise. PMID:9670174
Shimobaba, Tomoyoshi; Makowski, Michał; Nagahama, Yuki; Endo, Yutaka; Hirayama, Ryuji; Hiyama, Daisuke; Hasegawa, Satoki; Sano, Marie; Kakue, Takashi; Oikawa, Minoru; Sugie, Takashige; Takada, Naoki; Ito, Tomoyoshi
2016-05-20
We propose two calculation methods for generating color computer-generated holograms (CGHs) with the random phase-free method and color space conversion, in order to improve the image quality and accelerate the calculation. The random phase-free method improves the image quality in monochrome CGH, but it has not been applied to color CGH. We first aimed to improve the image quality of color CGH using the random phase-free method, and then to accelerate the color CGH generation with a combination of the random phase-free method and the color space conversion method, which accelerates the color CGH calculation owing to down-sampling of the color components converted by color space conversion. To overcome the image quality degradation that occurs due to the down-sampling of random phases, the combination of the random phase-free method and the color space conversion method improves the quality of reconstructed images and accelerates the color CGH calculation. We demonstrated the effectiveness of the proposed method in simulation, and discuss its application to lensless zoomable holographic projection. PMID:27411145
Methods of learning in statistical education: Design and analysis of a randomized trial
NASA Astrophysics Data System (ADS)
Boyd, Felicity Turner
Background. Recent psychological and technological advances suggest that active learning may enhance understanding and retention of statistical principles. A randomized trial was designed to evaluate the addition of innovative instructional methods within didactic biostatistics courses for public health professionals. Aims. The primary objectives were to evaluate and compare the addition of two active learning methods (cooperative and internet) on students' performance; assess their impact on performance after adjusting for differences in students' learning style; and examine the influence of learning style on trial participation. Methods. Consenting students enrolled in a graduate introductory biostatistics course were randomized to cooperative learning, internet learning, or control after completing a pretest survey. The cooperative learning group participated in eight small group active learning sessions on key statistical concepts, while the internet learning group accessed interactive mini-applications on the same concepts. Controls received no intervention. Students completed evaluations after each session and a post-test survey. Study outcome was performance quantified by examination scores. Intervention effects were analyzed by generalized linear models using intent-to-treat analysis and marginal structural models accounting for reported participation. Results. Of 376 enrolled students, 265 (70%) consented to randomization; 69, 100, and 96 students were randomized to the cooperative, internet, and control groups, respectively. Intent-to-treat analysis showed no differences between study groups; however, 51% of students in the intervention groups had dropped out after the second session. After accounting for reported participation, expected examination scores were 2.6 points higher (of 100 points) after completing one cooperative learning session (95% CI: 0.3, 4.9) and 2.4 points higher after one internet learning session (95% CI: 0.0, 4.7), versus
A blind image detection method for information hiding with double random-phase encoding
NASA Astrophysics Data System (ADS)
Sheng, Yuan; Xin, Zhou; Jian-guo, Chen; Yong-liang, Xiao; Qiang, Liu
2009-07-01
In this paper, a blind image detection method based on a statistical hypothesis test for information hiding with double random-phase encoding (DRPE) is proposed. This method aims to establish a quantitative criterion for judging whether secret information is embedded in a detected image. The main process can be described as follows: at the beginning, we decompose the detected gray-scale image into 8 bit planes, considering that it has 256 gray levels, and suppose that a secret image has been hidden in the detected image after it was encrypted by DRPE, so that the lower bit planes of the detected image exhibit strong randomness. Then, we divide the bit plane to be tested into many windows and establish a statistical variable to measure the relativity between pixels in every window. Finally, we judge whether the secret image exists in the detected image by applying the t test to all statistical variables. Numerical simulation shows that the accuracy is quite satisfactory when we need to distinguish the images carrying secret information from a large number of images.
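The window-based detection idea can be sketched as follows: extract a bit plane, compute a per-window statistic measuring the relativity between pixels (here, agreement of horizontally adjacent bits), and form a one-sample t statistic over all windows. The window size, the specific statistic, and the test images are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def bit_plane(img, k):
    """Extract the k-th bit plane (0 = least significant) of an 8-bit image."""
    return (img >> k) & 1

def randomness_t_statistic(plane, win=8):
    """Split a bit plane into win x win windows, measure per-window
    correlation of adjacent bits, and form a one-sample t statistic over
    all windows: a truly random plane (as after DRPE hiding) keeps the
    statistic near 0, while a structured plane gives large values."""
    h, w = plane.shape
    vals = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            blk = plane[i:i + win, j:j + win]
            # fraction of equal horizontally adjacent pairs, minus the 0.5
            # expected under pure randomness
            agree = np.mean(blk[:, 1:] == blk[:, :-1])
            vals.append(agree - 0.5)
    vals = np.asarray(vals)
    return vals.mean() / (vals.std(ddof=1) / np.sqrt(vals.size))

rng = np.random.default_rng(0)
noise = rng.integers(0, 256, (128, 128), dtype=np.uint8)  # stand-in for a DRPE-random plane
ii, jj = np.meshgrid(np.arange(128), np.arange(128), indexing="ij")
smooth = (127.5 + 127.5 * np.sin(0.07 * ii + 0.13 * jj)).astype(np.uint8)
t_noise = randomness_t_statistic(bit_plane(noise, 0))
t_smooth = randomness_t_statistic(bit_plane(smooth, 7))
```

Thresholding |t| then gives the quantitative accept/reject criterion the abstract describes.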
The simulation of the viscous flow around a cylinder by the random vortex method
NASA Astrophysics Data System (ADS)
Tiemroth, Erik Charles
The ability to calculate the loads on cylindrical members is of great importance in offshore engineering. The random vortex method (RVM) is applied to simulate steady, uniform incident flow over a cylinder and the flow about a cylinder in a free surface wave field. Tent functions are used to model the vortex sheets at the boundary and a Rankine core function is used for the vortex blobs. Second-order accurate time integration and random walk diffusion algorithms are used. One of the most difficult problems with the RVM is the computational requirements of the convection calculations. The exact solution to this problem involves O(N^2) calculations, where N is the number of vortex blobs. A highly accurate method based on series expansion is introduced that is capable of reducing the computational requirements to O(N^1.4). Simulations for the case of steady, uniform incident flow were made for Reynolds numbers of 4,000 to 95,000. Comparison was made with experiments and finite difference calculations for the case of Reynolds number 9,500. The observed flow structures agreed well with the simulated flow. Simulation of the wave flows was also successful. The predicted forces agreed with experimental results within 10 to 20 percent for a range of incident wave lengths and amplitudes. The simulated vorticity fields provided an interpretation of the typical shapes seen in experimentally derived force curves, in which the various regions could be associated with the degree of vorticity formation and shedding.
Impedance measurement using a two-microphone, random-excitation method
NASA Technical Reports Server (NTRS)
Seybert, A. F.; Parrott, T. L.
1978-01-01
The feasibility of using a two-microphone, random-excitation technique for the measurement of acoustic impedance was studied. Equations were developed, including the effect of mean flow, which show that acoustic impedance is related to the pressure ratio and phase difference between two points in a duct carrying plane waves only. The impedances of a honeycomb ceramic specimen and a Helmholtz resonator were measured and compared with impedances obtained using the conventional standing-wave method. Agreement between the two methods was generally good. A sensitivity analysis was performed to pinpoint possible error sources and recommendations were made for future study. The two-microphone approach evaluated in this study appears to have some advantages over other impedance measuring techniques.
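The plane-wave relation underlying the two-microphone method can be sketched as below (no mean flow). The geometry and symbols follow the common transfer-function convention and are assumptions here; the random-excitation part of the technique, estimating H12 from cross-spectra of the two microphone signals, is omitted.

```python
import numpy as np

def reflection_coefficient(p1, p2, k, x1, s):
    """Two-microphone method (plane waves, no mean flow): recover the
    complex reflection coefficient at the sample surface from the transfer
    function H12 = p2/p1 between microphones at distances x1 and x1 - s
    from the sample, with wavenumber k."""
    H = p2 / p1
    return np.exp(2j * k * x1) * (H - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - H)

def normalized_impedance(R):
    """Normal-incidence specific impedance z = Z/(rho*c) from R."""
    return (1.0 + R) / (1.0 - R)
```

Generating synthetic in-duct pressures for a known impedance and inverting them recovers that impedance exactly, which is a useful self-check of the algebra.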
Identifying protein interaction subnetworks by a bagging Markov random field-based method
Chen, Li; Xuan, Jianhua; Riggins, Rebecca B.; Wang, Yue; Clarke, Robert
2013-01-01
Identification of differentially expressed subnetworks from protein–protein interaction (PPI) networks has become increasingly important to our global understanding of the molecular mechanisms that drive cancer. Several methods have been proposed for PPI subnetwork identification, but the dependency among network member genes is not explicitly considered, leaving many important hub genes largely unidentified. We present a new method, based on a bagging Markov random field (BMRF) framework, to improve subnetwork identification for mechanistic studies of breast cancer. The method follows a maximum a posteriori principle to form a novel network score that explicitly considers pairwise gene interactions in PPI networks, and it searches for subnetworks with maximal network scores. To improve robustness across data sets, a bagging scheme based on bootstrapping samples is implemented to statistically select high confidence subnetworks. We first compared the BMRF-based method with existing methods on simulation data to demonstrate its improved performance. We then applied our method to breast cancer data to identify PPI subnetworks associated with breast cancer progression and/or tamoxifen resistance. The experimental results show that the BMRF approach not only achieves improved prediction performance when tested on independent data sets, but also reveals biologically meaningful subnetworks relevant to breast cancer and tamoxifen resistance. PMID:23161673
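The bootstrap-based selection of high confidence subnetworks can be sketched generically: run subnetwork identification on each bootstrap resample, then keep genes that recur often enough. The 80% frequency threshold and gene names are illustrative assumptions.

```python
from collections import Counter

def high_confidence_nodes(bootstrap_subnetworks, threshold=0.8):
    """Bagging scheme: keep genes that appear in at least a threshold
    fraction of the subnetworks identified on bootstrap resamples."""
    n = len(bootstrap_subnetworks)
    counts = Counter(g for sub in bootstrap_subnetworks for g in set(sub))
    return {g for g, c in counts.items() if c / n >= threshold}
```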
NASA Astrophysics Data System (ADS)
Mishra, S.; Schwab, Ch.; Šukys, J.
2016-05-01
We consider the very challenging problem of efficient uncertainty quantification for acoustic wave propagation in a highly heterogeneous, possibly layered, random medium, characterized by possibly anisotropic, piecewise log-exponentially distributed Gaussian random fields. A multi-level Monte Carlo finite volume method is proposed, along with a novel, bias-free upscaling technique that makes it possible to represent the input random fields, generated using spectral FFT methods, efficiently. Combined with a recently developed dynamic load balancing algorithm that scales to massively parallel computing architectures, the proposed method is able to robustly compute uncertainty for highly realistic random subsurface formations that can contain a very high number (millions) of sources of uncertainty. Numerical experiments, in both two and three space dimensions, illustrating the efficiency of the method are presented.
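The multi-level Monte Carlo structure can be shown on a toy problem: here the wave-propagation solver is replaced by a midpoint-rule quadrature whose cost grows with the level, which is an illustrative stand-in. What carries over is the telescoping sum of level corrections, each computed with coupled samples so that its variance (and hence its required sample count) shrinks as the level rises.

```python
import numpy as np

def solver(omega, level):
    """Stand-in "PDE solve" at a refinement level: midpoint-rule value of
    integral_0^1 sin(omega*x) dx on 2**level cells (cost grows with level)."""
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.sin(np.outer(omega, x)).mean(axis=1)

def mlmc_estimate(L, samples_per_level, rng):
    """Multi-level Monte Carlo: coarse-level mean plus a telescoping sum of
    corrections, each using the same random draws on fine and coarse grids
    (the coupling that makes the correction variance small)."""
    est = 0.0
    for level, n_samp in enumerate(samples_per_level[:L + 1]):
        omega = rng.uniform(0.0, 1.0, size=n_samp)
        if level == 0:
            est += solver(omega, 0).mean()
        else:
            est += (solver(omega, level) - solver(omega, level - 1)).mean()
    return est
```

Most samples go to the cheap coarse level; only a handful of fine-level solves are needed, which is the source of the method's efficiency.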
Leyrat, Clémence; Caille, Agnès; Donner, Allan; Giraudeau, Bruno
2014-09-10
Despite randomization, selection bias may occur in cluster randomized trials. Classical multivariable regression usually allows for adjusting treatment effect estimates with unbalanced covariates. However, for binary outcomes with low incidence, such a method may fail because of separation problems. This simulation study focused on the performance of propensity score (PS)-based methods to estimate relative risks from cluster randomized trials with binary outcomes with low incidence. The results suggested that among the different approaches used (multivariable regression, direct adjustment on PS, inverse weighting on PS, and stratification on PS), only direct adjustment on the PS fully corrected the bias and moreover had the best statistical properties. PMID:24771662
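One of the compared approaches, inverse weighting on the propensity score, can be sketched for a single binary covariate, where the PS reduces to empirical stratum means and no regression model is needed. All data-generating numbers below are illustrative, and note the study found direct adjustment on the PS, not weighting, to have the best properties.

```python
import numpy as np

def ipw_relative_risk(T, Y, x):
    """Inverse-probability-of-treatment weighting with the propensity score
    estimated as the empirical P(T=1 | x) within strata of a binary
    covariate x."""
    ps1 = T[x == 1].mean()
    ps0 = T[x == 0].mean()
    ps = np.where(x == 1, ps1, ps0)
    w = np.where(T == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    p_treated = np.sum(w * Y * (T == 1)) / np.sum(w * (T == 1))
    p_control = np.sum(w * Y * (T == 0)) / np.sum(w * (T == 0))
    return p_treated / p_control
```

On synthetic data with a confounded treatment assignment, the crude relative risk is biased while the weighted estimate recovers the true marginal RR.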
NASA Technical Reports Server (NTRS)
Tomberlin, T. J.
1985-01-01
Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed provide a basis for those sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
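Two classic two-stage-sampling results of the kind these techniques build on can be stated compactly: the design effect of clustering and the cost-optimal cluster size. These closed forms are textbook formulas given here for illustration, not necessarily the exact expressions in the report.

```python
import math

def design_effect(m, rho):
    """Variance inflation from interviewing m residents in each compact
    study area when responses within an area share intraclass correlation
    rho, relative to a simple random sample of the same total size."""
    return 1.0 + (m - 1) * rho

def optimal_residents_per_area(area_cost, interview_cost, rho):
    """Cost-optimal number of residents per area in the classic two-stage
    design: m* = sqrt(c1 * (1 - rho) / (c2 * rho)), with c1 the cost of
    adding a study area and c2 the cost per interview."""
    return math.sqrt(area_cost * (1.0 - rho) / (interview_cost * rho))
```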
Skeletonization of the internal thoracic artery: a randomized comparison of harvesting methods.
Urso, Stefano; Alvarez, Luis; Sádaba, Rafael; Greco, Ernesto
2008-02-01
We performed a randomized study to compare internal thoracic artery (ITA) flow response to two harvesting methods used in the skeletonization procedure: ultrasonic scalpel and bipolar electrocautery. Sixty patients scheduled for CABG were randomized to receive either ultrasonically (n=30 patients) or electrocautery (n=30 patients) skeletonized ITAs. Intraoperative ITA graft mean flows were obtained with a transit-time flowmeter. ITA flows were evaluated at the beginning (Time 1) and at the end (Time 2) of the harvesting procedure. Post-cardiopulmonary bypass (CPB) flow measurement (Time 3) was obtained in the ITA grafts anastomosed to the left anterior descending artery. Intraoperative mean flow decreased significantly within ultrasonic group (Group U) and electrocautery group (Group E) at the end of the harvesting procedure (P<0.0001 in both cases). Within both groups the final mean flow measured on anastomosed ITAs (Time 3) was significantly higher than the beginning ITA flow value (Time 1). No statistical difference was noted comparing ITA flows between the two groups at any time of evaluation. Skeletonization harvesting of the ITA produces a modification of the mean flow. The quantity and the reversibility of this phenomenon, probably related to vasospasm, are independent from the energy source used in the skeletonization procedure. PMID:17998305
A recursive model-reduction method for approximate inference in Gaussian Markov random fields.
Johnson, Jason K; Willsky, Alan S
2008-01-01
This paper presents recursive cavity modeling--a principled, tractable approach to approximate, near-optimal inference for large Gauss-Markov random fields. The main idea is to subdivide the random field into smaller subfields, constructing cavity models which approximate these subfields. Each cavity model is a concise, yet faithful, model for the surface of one subfield sufficient for near-optimal inference in adjacent subfields. This basic idea leads to a tree-structured algorithm which recursively builds a hierarchy of cavity models during an "upward pass" and then builds a complementary set of blanket models during a reverse "downward pass." The marginal statistics of individual variables can then be approximated using their blanket models. Model thinning plays an important role, allowing us to develop thinned cavity and blanket models thereby providing tractable approximate inference. We develop a maximum-entropy approach that exploits certain tractable representations of Fisher information on thin chordal graphs. Given the resulting set of thinned cavity models, we also develop a fast preconditioner, which provides a simple iterative method to compute optimal estimates. Thus, our overall approach combines recursive inference, variational learning and iterative estimation. We demonstrate the accuracy and scalability of this approach in several challenging, large-scale remote sensing problems. PMID:18229805
A Novel Microaneurysms Detection Method Based on Local Applying of Markov Random Field.
Ganjee, Razieh; Azmi, Reza; Moghadam, Mohsen Ebrahimi
2016-03-01
Diabetic Retinopathy (DR) is one of the most common complications of long-term diabetes. It is a progressive disease and, by damaging the retina, it finally results in blindness of patients. Since Microaneurysms (MAs) appear as a first sign of DR in the retina, early detection of this lesion is an essential step in automatic detection of DR. In this paper, a new MAs detection method is presented. The proposed approach consists of two main steps. In the first step, the MA candidates are detected based on local applying of the Markov random field model (MRF). In the second step, these candidate regions are categorized to identify the correct MAs using 23 features based on shape, intensity and Gaussian distribution of MAs intensity. The proposed method is evaluated on DIARETDB1, which is a standard and publicly available database in this field. Evaluation of the proposed method on this database resulted in an average sensitivity of 0.82 for a confidence level of 75 as ground truth. The results show that our method is able to detect low-contrast MAs against the background while its performance is still comparable to other state of the art approaches. PMID:26779642
Webster, Clayton G; Tran, Hoang A; Trenchea, Catalin S
2013-01-01
In this paper we show how the stochastic collocation method (SCM) can fail to converge for nonlinear differential equations with random coefficients. First, we consider the Navier-Stokes equation with uncertain viscosity and derive error estimates for the stochastic collocation discretization. Our analysis gives some indicators of how the nonlinearity negatively affects the accuracy of the method. The stochastic collocation method is then applied to the noisy Lorenz system. Simulation results demonstrate that the solution of a nonlinear equation can be highly irregular with respect to the random data, and in such cases the stochastic collocation method cannot capture the correct solution.
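The failure mode, polynomial collocation losing accuracy when the solution is irregular in the random parameter, can be reproduced with a toy scalar map: Gauss-Legendre collocation in one uniform random variable, with the Navier-Stokes and Lorenz solvers replaced by simple functions (an illustrative stand-in, not the paper's experiments).

```python
import numpy as np

def collocation_mean(f, n_nodes):
    """Estimate E[f(Y)], Y ~ Uniform(-1, 1), by Gauss-Legendre stochastic
    collocation using n_nodes deterministic "solver" evaluations."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    return 0.5 * np.dot(weights, f(nodes))

smooth = np.exp                                   # analytic in the random variable
irregular = lambda y: (y > 0.3).astype(float)     # discontinuous response

exact_smooth = 0.5 * (np.e - 1.0 / np.e)          # E[exp(Y)] = sinh(1)
exact_irregular = 0.35                            # P(Y > 0.3)
```

For the smooth map the nine-point rule is essentially exact, while for the discontinuous map the error stagnates at O(1) regardless of added nodes, mirroring the convergence failure the abstract describes.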
A generalized genetic random field method for the genetic association analysis of sequencing data.
Li, Ming; He, Zihuai; Zhang, Min; Zhan, Xiaowei; Wei, Changshuai; Elston, Robert C; Lu, Qing
2014-04-01
With the advance of high-throughput sequencing technologies, it has become feasible to investigate the influence of the entire spectrum of sequencing variations on complex human diseases. Although association studies utilizing the new sequencing technologies hold great promise to unravel novel genetic variants, especially rare genetic variants that contribute to human diseases, the statistical analysis of high-dimensional sequencing data remains a challenge. Advanced analytical methods are in great need to facilitate high-dimensional sequencing data analyses. In this article, we propose a generalized genetic random field (GGRF) method for association analyses of sequencing data. Like other similarity-based methods (e.g., SIMreg and SKAT), the new method has the advantages of avoiding the need to specify thresholds for rare variants and allowing for testing multiple variants acting in different directions and magnitude of effects. The method is built on the generalized estimating equation framework and thus accommodates a variety of disease phenotypes (e.g., quantitative and binary phenotypes). Moreover, it has a nice asymptotic property, and can be applied to small-scale sequencing data without need for small-sample adjustment. Through simulations, we demonstrate that the proposed GGRF attains an improved or comparable power over a commonly used method, SKAT, under various disease scenarios, especially when rare variants play a significant role in disease etiology. We further illustrate GGRF with an application to a real dataset from the Dallas Heart Study. By using GGRF, we were able to detect the association of two candidate genes, ANGPTL3 and ANGPTL4, with serum triglyceride. PMID:24482034
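The family of similarity-based region tests the abstract situates GGRF in can be sketched with a quadratic-form statistic on a linear genetic kernel. This is a generic illustration: GGRF itself is built on generalized estimating equations with an asymptotic test, whereas the sketch below assesses significance by permutation.

```python
import numpy as np

def similarity_region_test(G, y, n_perm=1000, rng=None):
    """Similarity-based region test in the spirit of GGRF/SKAT: statistic
    Q = r' K r with linear genetic kernel K = G G' and centered trait
    r = y - mean(y); significance here by permutation of r."""
    rng = rng or np.random.default_rng(0)
    r = y - y.mean()
    K = G @ G.T
    q_obs = float(r @ K @ r)
    exceed = 0
    for _ in range(n_perm):
        rp = rng.permutation(r)
        if rp @ K @ rp >= q_obs:
            exceed += 1
    return q_obs, (exceed + 1) / (n_perm + 1)
```

Because Q aggregates squared variant-trait covariances, variants may act in different directions and no rare-variant frequency threshold is needed, the properties the abstract highlights.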
2014-01-01
Background The inter-patient classification schema and the Association for the Advancement of Medical Instrumentation (AAMI) standards are important to the construction and evaluation of automated heartbeat classification systems. The majority of previously proposed methods that take the above two aspects into consideration use the same features and classification method to classify different classes of heartbeats. The performance of the classification system is often unsatisfactory with respect to the ventricular ectopic beat (VEB) and supraventricular ectopic beat (SVEB). Methods Based on the different characteristics of VEB and SVEB, a novel hierarchical heartbeat classification system was constructed. This was done in order to improve the classification performance of these two classes of heartbeats by using different features and classification methods. First, random projection and support vector machine (SVM) ensemble were used to detect VEB. Then, the ratio of the RR interval was compared to a predetermined threshold to detect SVEB. The optimal parameters for the classification models were selected on the training set and used in the independent testing set to assess the final performance of the classification system. Meanwhile, the effect of different lead configurations on the classification results was evaluated. Results Results showed that the performance of this classification system was notably superior to that of other methods. The VEB detection sensitivity was 93.9% with a positive predictive value of 90.9%, and the SVEB detection sensitivity was 91.1% with a positive predictive value of 42.2%. In addition, this classification process was relatively fast. Conclusions A hierarchical heartbeat classification system was proposed based on the inter-patient data division to detect VEB and SVEB. It demonstrated better classification performance than existing methods. It can be regarded as a promising system for detecting VEB and SVEB of unknown patients in
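The two stages the Methods describe, random projection of features before SVM training and an RR-interval ratio rule for SVEB, can be sketched as follows. The Gaussian projection matrix and the 0.9 ratio threshold are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def random_projection(X, k, rng):
    """Gaussian random projection of n x d feature vectors down to k
    dimensions (Johnson-Lindenstrauss style), applied before classifier
    training to reduce dimensionality while roughly preserving distances."""
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(X.shape[1], k))
    return X @ R

def flag_sveb(rr_interval, prev_rr_interval, threshold=0.9):
    """Second-stage rule: flag a beat as a possible SVEB when its RR
    interval is prematurely short relative to the previous beat's."""
    return rr_interval / prev_rr_interval < threshold
```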
Adequate histologic sectioning of prostate needle biopsies.
Bostwick, David G; Kahane, Hillel
2013-08-01
No standard method exists for sampling prostate needle biopsies, although most reports claim to embed 3 cores per block and obtain 3 slices from each block. This study was undertaken to determine the extent of histologic sectioning necessary for optimal examination of prostate biopsies. We prospectively compared the impact on cancer yield of submitting 1 biopsy core per cassette (biopsies from January 2010) with 3 cores per cassette (biopsies from August 2010) from a large national reference laboratory. Between 6 and 12 slices were obtained with the former 1-core method, resulting in 3 to 6 slices being placed on each of 2 slides; for the latter 3-core method, a limit of 6 slices was obtained, resulting in 3 slices being placed on each of 2 slides. A total of 6708 sets of 12 to 18 core biopsies were studied, including 3509 biopsy sets from the 1-biopsy-core-per-cassette group (January 2010) and 3199 biopsy sets from the 3-biopsy-cores-per-cassette group (August 2010). The yield of diagnoses was classified as benign, atypical small acinar proliferation, high-grade prostatic intraepithelial neoplasia, and cancer and was similar with the 2 methods: 46.2%, 8.2%, 4.5%, and 41.1% and 46.7%, 6.3%, 4.4%, and 42.6%, respectively (P = .02). Submission of 1 core or 3 cores per cassette had no effect on the yield of atypical small acinar proliferation, prostatic intraepithelial neoplasia, or cancer in prostate needle biopsies. Consequently, we recommend submission of 3 cores per cassette to minimize labor and cost of processing. PMID:23764163
NASA Astrophysics Data System (ADS)
Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei
2016-03-01
Rail irregularity is one of the main sources causing train-bridge random vibration. A new random vibration theory for the coupled train-bridge systems is proposed in this paper. First, number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of rail irregularity power spectrum density was adopted to determine the representative points of spatial frequencies and phases to generate the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with the slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Eventually, the Newmark-β integration method and double edge difference method of total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of responses. A case study was presented in which the ICE-3 train travels on a three-span simply-supported high-speed railway bridge with excitation of random rail irregularity. The results showed that compared to the Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.
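The stochastic-harmonic-function representation used to generate rail irregularity samples can be sketched for a generic one-sided PSD. In this illustration the frequencies are evenly spaced and only the phases are random; the paper's number theory method additionally selects representative frequency points, which is omitted here.

```python
import numpy as np

def irregularity_sample(x, psd, omegas, rng):
    """One sample track of a zero-mean random irregularity field with
    one-sided PSD psd(omega): a stochastic harmonic superposition of
    cosines with random phases at evenly spaced spatial frequencies."""
    d_omega = omegas[1] - omegas[0]
    amps = np.sqrt(2.0 * psd(omegas) * d_omega)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=omegas.size)
    return (amps[:, None] * np.cos(np.outer(omegas, x) + phases[:, None])).sum(axis=0)
```

The amplitudes are chosen so that the ensemble variance of the field equals the integral of the PSD, which gives a direct consistency check.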
NASA Astrophysics Data System (ADS)
Diaz, P. M. A.; Feitosa, R. Q.; Sanches, I. D.; Costa, G. A. O. P.
2016-06-01
This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF) based approach for crop recognition from multitemporal remote sensing image sequences. This approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of temporal interaction parameters is considered as an optimization problem, whose goal is to find the transition matrix that maximizes the CRF performance, upon a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stages. To validate the proposed approach, experiments were carried out upon a dataset consisting of 12 co-registered LANDSAT images of a region in southeast of Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrated that the proposed method was able to substantially outperform estimates related to joint or conditional class transition probabilities, which rely on training samples.
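The core idea, scoring candidate transition matrices against labelled consecutive-epoch data and keeping the best, can be sketched with a simple surrogate objective. The mean assigned potential below stands in for the CRF accuracy metrics the paper optimizes, and selection over a discrete candidate list stands in for Pattern Search over continuous matrices.

```python
import numpy as np

def transition_score(M, labels_t1, labels_t2):
    """Surrogate objective: mean interaction potential that transition
    matrix M assigns to observed consecutive-epoch crop-label pairs."""
    return float(np.mean(M[labels_t1, labels_t2]))

def best_transition_matrix(candidates, labels_t1, labels_t2):
    """Return the index of the candidate matrix maximizing the objective
    (the role played by Pattern Search in the paper)."""
    scores = [transition_score(M, labels_t1, labels_t2) for M in candidates]
    return int(np.argmax(scores))
```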
Calculation of the entropy of random coil polymers with the hypothetical scanning Monte Carlo method
NASA Astrophysics Data System (ADS)
White, Ronald P.; Meirovitch, Hagai
2005-12-01
Hypothetical scanning Monte Carlo (HSMC) is a method for calculating the absolute entropy S and free energy F from a given MC trajectory developed recently and applied to liquid argon, TIP3P water, and peptides. In this paper HSMC is extended to random coil polymers by applying it to self-avoiding walks on a square lattice—a simple but difficult model due to strong excluded volume interactions. With HSMC the probability of a given chain is obtained as a product of transition probabilities calculated for each bond by MC simulations and a counting formula. This probability is exact in the sense that it is based on all the interactions of the system and the only approximation is due to finite sampling. The method provides rigorous upper and lower bounds for F, which can be obtained from a very small sample and even from a single chain conformation. HSMC is independent of existing techniques and thus constitutes an independent research tool. The HSMC results are compared to those obtained by other methods, and its application to complex lattice chain models is discussed; we emphasize its ability to treat any type of boundary conditions for which a reference state (with known free energy) might be difficult to define for a thermodynamic integration process. Finally, we stress that the capability of HSMC to extract the absolute entropy from a given sample is important for studying relaxation processes, such as protein folding.
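HSMC's product-of-transition-probabilities idea is related to sequential growth of self-avoiding walks. The classic Rosenbluth construction below shows how such a product of per-bond choice counts yields an unbiased estimate of the number of chains, and hence the absolute entropy S = ln c_N (with k_B = 1). This is an illustration of the family of ideas, not the HSMC algorithm itself.

```python
import numpy as np

def rosenbluth_saw(n_steps, rng):
    """Grow one self-avoiding walk on the square lattice, choosing uniformly
    among unvisited neighbors; the returned Rosenbluth weight is an unbiased
    estimator of the number of n-step SAWs (0 if the walk gets trapped)."""
    pos, visited, w = (0, 0), {(0, 0)}, 1.0
    for _ in range(n_steps):
        free = [(pos[0] + dx, pos[1] + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (pos[0] + dx, pos[1] + dy) not in visited]
        if not free:
            return 0.0
        w *= len(free)
        pos = free[rng.integers(len(free))]
        visited.add(pos)
    return w

def entropy_estimate(n_steps, n_samples, rng):
    """ln of the estimated SAW count: the absolute conformational entropy."""
    weights = [rosenbluth_saw(n_steps, rng) for _ in range(n_samples)]
    return float(np.log(np.mean(weights)))
```

For short chains the estimator can be checked against exact square-lattice SAW counts (c_3 = 36, c_4 = 100).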
Effects of Pilates method in elderly people: Systematic review of randomized controlled trials.
de Oliveira Francisco, Cristina; de Almeida Fagundes, Alessandra; Gorges, Bruna
2015-07-01
The Pilates method has been widely used in physical training and rehabilitation. Evidence regarding the effectiveness of this method in elderly people is limited. Six randomized controlled trial studies involving the use of the Pilates method for elderly people, published prior to December 2013, were selected from the databases PubMed, MEDLINE, Embase, Cochrane, Scielo and PEDro. Three articles suggested that Pilates produced improvements in balance. Two studies evaluated the adherence to Pilates programs. One study assessed Pilates' influence on cardio-metabolic parameters and another study evaluated changes in body composition. Strong evidence was found regarding beneficial effects of Pilates on static and dynamic balance in women. Nevertheless, evidence of balance improvement in both genders, of changes in body composition in women, and of adherence to Pilates programs was limited. Effects on cardio-metabolic parameters due to Pilates training presented inconclusive results. Pilates may be a useful tool in rehabilitation and prevention programs but more high quality studies are necessary to establish all the effects on elderly populations. PMID:26118523
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Lagaris, Isaac E.
2006-01-01
A new stochastic method for locating the global minimum of a multidimensional function inside a rectangular hyperbox is presented. A sampling technique is employed that makes use of the procedure known as grammatical evolution. The method can be considered as a "genetic" modification of the Controlled Random Search procedure due to Price. The user may code the objective function either in C++ or in Fortran 77. We offer a comparison of the new method with others of similar structure, by presenting results of computational experiments on a set of test functions. Program summary: Title of program: GenPrice; Catalogue identifier: ADWP; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWP; Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland; Computer for which the program is designed and others on which it has been tested: the tool is designed to be portable in all systems running the GNU C++ compiler; Installation: University of Ioannina, Greece; Programming language used: GNU-C++, GNU-C, GNU Fortran-77; Memory required to execute with typical data: 200 KB; No. of bits in a word: 32; No. of processors used: 1; Has the code been vectorized or parallelized?: no; No. of lines in distributed program, including test data, etc.: 13 135; No. of bytes in distributed program, including test data, etc.: 78 512; Distribution format: tar.gz; Nature of physical problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances that a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima. Global optimization is hence the appropriate tool. For example, solving a nonlinear system of equations via optimization, employing a "least squares" type of objective, one may encounter many local minima that do not correspond to solutions, i.e. minima with values
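The Controlled Random Search baseline that GenPrice modifies can be sketched as follows (in Python rather than the package's C++/Fortran; population size and iteration budget are arbitrary choices for illustration).

```python
import numpy as np

def controlled_random_search(f, bounds, n_pop=40, iters=3000, rng=None):
    """Price's Controlled Random Search inside a rectangular hyperbox:
    maintain a population of points, repeatedly reflect a randomly chosen
    point through the centroid of d other random points, and replace the
    current worst point whenever the trial improves on it."""
    rng = rng or np.random.default_rng(0)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    d = len(bounds)
    pop = lo + (hi - lo) * rng.random((n_pop, d))
    vals = np.array([f(p) for p in pop])
    for _ in range(iters):
        idx = rng.choice(n_pop, size=d + 1, replace=False)
        centroid = pop[idx[:-1]].mean(axis=0)
        trial = 2.0 * centroid - pop[idx[-1]]      # reflection step
        if np.any(trial < lo) or np.any(trial > hi):
            continue                               # stay inside the hyperbox
        f_trial = f(trial)
        worst = int(vals.argmax())
        if f_trial < vals[worst]:
            pop[worst], vals[worst] = trial, f_trial
    best = int(vals.argmin())
    return pop[best], float(vals[best])
```

GenPrice's "genetic" modification replaces the uniform trial generation with samples produced by grammatical evolution; the population-replacement logic above is the part shared with Price's original procedure.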
NASA Technical Reports Server (NTRS)
Deshpande, Manohar D.
2005-01-01
A new numerical simulation method using the finite element methodology (FEM) is presented to study electromagnetic scattering due to an arbitrarily shaped material body doped randomly with thin and short metallic wires. The FEM approach described in many standard text books is appropriately modified to account for the presence of thin and short metallic wires distributed randomly inside an arbitrarily shaped material body. Using this modified FEM approach, the electromagnetic scattering due to cylindrical and spherical material bodies doped randomly with thin metallic wires is studied.
Studier, F.W.
1995-04-18
Random and directed priming methods for determining nucleotide sequences by enzymatic sequencing techniques, using libraries of primers of lengths 8, 9 or 10 bases, are disclosed. These methods permit direct sequencing of nucleic acids as large as 45,000 base pairs or larger without the necessity for subcloning. Individual primers are used repeatedly to prime sequence reactions in many different nucleic acid molecules. Libraries containing as few as 10,000 octamers, 14,200 nonamers, or 44,000 decamers would have the capacity to determine the sequence of almost any cosmid DNA. Random priming with a fixed set of primers from a smaller library can also be used to initiate the sequencing of individual nucleic acid molecules, with the sequence being completed by directed priming with primers from the library. In contrast to random cloning techniques, a combined random and directed priming strategy is far more efficient. 2 figs.
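The directed-priming step, walking along the template by reusing presynthesized short primers, can be sketched as a simple lookup: the last k bases of the sequence determined so far become the next primer, provided the k-mer library contains them. The sequences and library below are purely illustrative.

```python
def next_directed_primer(assembled, library, k=8):
    """Directed priming with an octamer library (k=8): the last k bases of
    the sequence read so far identify the next primer; returns None when
    the library lacks that k-mer."""
    primer = assembled[-k:].upper()
    return primer if primer in library else None
```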
21 CFR 314.126 - Adequate and well-controlled studies.
Code of Federal Regulations, 2010 CFR
2010-04-01
... conducting clinical investigations of a drug is to distinguish the effect of a drug from other influences... recognized by the scientific community as the essentials of an adequate and well-controlled clinical... randomization and blinding of patients or investigators, or both. If the intent of the trial is to...
A Randomized Clinical Trial of the Health Evaluation and Referral Assistant (HERA): Research Methods
Boudreaux, Edwin D.; Abar, Beau; Baumann, Brigitte M.; Grissom, Grant
2013-01-01
The Health Evaluation and Referral Assistant (HERA) is a web-based program designed to facilitate screening, brief intervention, and referral to treatment (SBIRT) for tobacco, alcohol, and drug abuse. After the patient completes a computerized substance abuse assessment, the HERA produces a summary report with evidence-based recommended clinical actions for the healthcare provider (the Healthcare Provider Report) and a report for the patient (the Patient Feedback Report) that provides education regarding the consequences of use, personally tailored motivational messages, and a tailored substance abuse treatment referral list. For those who provide authorization, the HERA faxes the individual’s contact information to a substance abuse treatment provider matched to the individual’s substance use severity and personal characteristics, like insurance and location of residence (dynamic referral). This paper summarizes the methods used for a randomized controlled trial to evaluate the HERA’s efficacy in leading to increased treatment initiation and reduced substance use. The study was performed in four emergency departments. Individual patients were randomized into one of two conditions: the HERA or assessment only. A total of 4,269 patients were screened and 1,006 participants enrolled. The sample comprised 427 tobacco users, 212 risky alcohol users, and 367 illicit drug users. Forty-two percent used more than one substance class. The enrolled sample was similar to the eligible patient population. The study should enhance understanding of whether computer-facilitated SBIRT can impact process of care variables, such as promoting substance abuse treatment initiation, as well as its effect on subsequent substance abuse and related outcomes. PMID:23665335
Mwatondo, Athman Juma; Ng'ang'a, Zipporah; Maina, Caroline; Makayotto, Lyndah; Mwangi, Moses; Njeru, Ian; Arvelo, Wences
2016-01-01
Introduction Kenya adopted the Integrated Disease Surveillance and Response (IDSR) strategy in 1998 to strengthen disease surveillance and epidemic response. However, the goal of weekly surveillance reporting among health facilities has not been achieved. We conducted a cross-sectional study to determine the prevalence of adequate reporting and factors associated with IDSR reporting among health facilities in one Kenyan County. Methods Health facilities (public and private) were enrolled using stratified random sampling from 348 facilities prioritized for routine surveillance reporting. Adequately-reporting facilities were defined as those which submitted >10 weekly reports during a twelve-week period, and poorly-reporting facilities were those which submitted <10 weekly reports. Multivariate logistic regression with backward selection was used to identify risk factors associated with adequate reporting. Results From September 2 through November 30, 2013, we enrolled 175 health facilities; 130 (74%) were private and 45 (26%) were public. Of the 175 health facilities, 77 (44%) were classified as adequately reporting and 98 (56%) were reporting poorly. Multivariate analysis identified three factors independently associated with adequate weekly reporting: having weekly reporting forms at the visit (AOR 19, 95% CI: 6-65), having posters showing IDSR functions (AOR 8, 95% CI: 2-12) and having a designated surveillance focal person (AOR 7, 95% CI: 2-20). Conclusion The majority of health facilities in Nairobi County were reporting poorly to IDSR, and we recommend that the Ministry of Health provide all health facilities in Nairobi County with weekly reporting tools and offer specific training on IDSR, which will help in designating a surveillance focal person. PMID:27303581
NASA Astrophysics Data System (ADS)
Geiger, S.; Cortis, A.; Birkholzer, J. T.
2010-12-01
Solute transport in fractured porous media is typically "non-Fickian"; that is, it is characterized by early breakthrough and long tailing and by nonlinear growth of the Green function-centered second moment. This behavior is due to the effects of (1) multirate diffusion occurring between the highly permeable fracture network and the low-permeability rock matrix, (2) a wide range of advection rates in the fractures and, possibly, the matrix as well, and (3) a range of path lengths. As a consequence, prediction of solute transport processes at the macroscale represents a formidable challenge. Classical dual-porosity (or mobile-immobile) approaches in conjunction with an advection-dispersion equation and macroscopic dispersivity commonly fail to predict breakthrough of fractured porous media accurately. It was recently demonstrated that the continuous time random walk (CTRW) method can be used as a generalized upscaling approach. Here we extend this work and use results from high-resolution finite element-finite volume-based simulations of solute transport in an outcrop analogue of a naturally fractured reservoir to calibrate the CTRW method by extracting a distribution of retention times. This procedure allows us to predict breakthrough at other model locations accurately and to gain significant insight into the nature of the fracture-matrix interaction in naturally fractured porous reservoirs with geologically realistic fracture geometries.
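The non-Fickian signature described above (early breakthrough, long tailing) can be reproduced with a minimal CTRW-style sketch. This is a toy model with a hypothetical Pareto waiting-time distribution, not the retention-time distribution the authors calibrated from their finite element-finite volume simulations.

```python
import numpy as np

# Minimal CTRW sketch: particles take unit steps downstream but wait a
# Pareto-distributed retention time at each site, mimicking fracture-matrix
# exchange. A heavy-tailed waiting-time distribution (tail exponent < 2)
# produces the strongly skewed breakthrough curves described in the abstract.

rng = np.random.default_rng(0)

def breakthrough_times(n_particles=2000, n_steps=50, tail_exponent=1.5):
    # Waiting times >= 1 with a power-law tail (infinite variance here).
    waits = rng.pareto(tail_exponent, size=(n_particles, n_steps)) + 1.0
    return waits.sum(axis=1)  # arrival time after n_steps jumps

t = breakthrough_times()
# Long tailing shows up as strong right skew: mean arrival time well above
# the median.
skewed = t.mean() > np.median(t)
```

A Fickian (Gaussian) model would instead give nearly symmetric breakthrough times, with mean and median close together.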
Switching methods in magnetic random access memory for low power applications
NASA Astrophysics Data System (ADS)
Guchang, Han; Jiancheng, Huang; Cheow Hin, Sim; Tran, Michael; Sze Ter, Lim
2015-06-01
Effect of saturation magnetization (Ms) of the free layer (FL) on the switching current is analyzed for spin transfer torque (STT) magnetic random access memory (MRAM). For in-plane FL, critical switching current (Ic0) decreases as Ms decreases. However, reduction in Ms also results in a low thermal stability factor (Δ), which must be compensated through increasing shape anisotropy, thus limiting scalability. For perpendicular FL, Ic0 reduction by using low-Ms materials is actually at the expense of data retention. To save energy consumed by STT current, two electric field (EF) controlled switching methods are proposed. Our simulation results show that elliptical FL can be switched by an EF pulse with a suitable width. However, it is difficult to implement this type of switching in real MRAM devices due to the distribution of the required switching pulse widths. A reliable switching method is to use an Oersted field guided switching. Our simulation and experimental results show that the bi-directional magnetization switching could be realized by an EF with an external field as low as ±5 Oe if the offset field could be removed.
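The Ms trade-off discussed above can be illustrated with the standard macrospin estimate of the thermal stability factor. These are textbook expressions and hypothetical device dimensions, not the paper's parameters: lowering Ms lowers the energy barrier E_b = mu0*Ms*Hk*V/2 and hence Delta = E_b/(kB*T), which typically must stay above roughly 60 for data retention.

```python
import math

# Standard uniaxial macrospin stability estimate (generic textbook formula;
# the free-layer geometry and material values below are hypothetical).

KB = 1.380649e-23      # Boltzmann constant, J/K
MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A

def stability_factor(Ms, Hk, volume, T=300.0):
    """Delta = mu0 * Ms * Hk * V / (2 * kB * T), with Ms and Hk in A/m."""
    return MU0 * Ms * Hk * volume / (2.0 * KB * T)

# Hypothetical free layer: 40 nm x 40 nm x 1.2 nm
V = 40e-9 * 40e-9 * 1.2e-9
delta_high = stability_factor(Ms=1.0e6, Hk=4.0e5, volume=V)
delta_low = stability_factor(Ms=0.5e6, Hk=4.0e5, volume=V)  # halving Ms halves Delta
```

With Hk fixed, Delta is linear in Ms, which is why the abstract notes that Ic0 reduction via low-Ms materials comes at the expense of retention unless anisotropy is increased.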
Adipose Tissue - Adequate, Accessible Regenerative Material.
Kolaparthy, Lakshmi Kanth; Sanivarapu, Sahitya; Moogla, Srinivas; Kutcham, Rupa Sruthi
2015-11-01
The potential use of stem cell based therapies for the repair and regeneration of various tissues offers a paradigm shift that may provide alternative therapeutic solutions for a number of diseases. The use of either embryonic stem cells (ESCs) or induced pluripotent stem cells in clinical situations is limited due to cell regulations and to technical and ethical considerations involved in genetic manipulation of human ESCs, even though these cells are highly beneficial. Mesenchymal stem cells seem to be an ideal population of stem cells; in particular, adipose-derived stem cells (ASCs) can be obtained in large numbers and easily harvested from adipose tissue. Adipose tissue is ubiquitously available and has several advantages compared to other sources: it is easily accessible in large quantities with a minimally invasive harvesting procedure, and isolation of adipose-derived mesenchymal stem cells yields a high number of stem cells, which is essential for stem cell based therapies and tissue engineering. Recently, periodontal tissue regeneration using ASCs has been examined in some animal models. This method has potential in the regeneration of functional periodontal tissues because various secreted growth factors from ASCs might not only promote the regeneration of periodontal tissues but also encourage neovascularization of the damaged tissues. This review summarizes the sources, isolation, and characteristics of adipose-derived stem cells and discusses their potential role in periodontal regeneration. PMID:26634060
21 CFR 201.5 - Drugs; adequate directions for use.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Drugs; adequate directions for use. 201.5 Section...) DRUGS: GENERAL LABELING General Labeling Provisions § 201.5 Drugs; adequate directions for use. Adequate directions for use means directions under which the layman can use a drug safely and for the purposes...
21 CFR 201.5 - Drugs; adequate directions for use.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 4 2011-04-01 2011-04-01 false Drugs; adequate directions for use. 201.5 Section...) DRUGS: GENERAL LABELING General Labeling Provisions § 201.5 Drugs; adequate directions for use. Adequate directions for use means directions under which the layman can use a drug safely and for the purposes...
4 CFR 200.14 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 4 Accounts 1 2010-01-01 2010-01-01 false Responsibility for maintaining adequate safeguards. 200.14 Section 200.14 Accounts RECOVERY ACCOUNTABILITY AND TRANSPARENCY BOARD PRIVACY ACT OF 1974 § 200.14 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining adequate technical, physical, and...
10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining adequate technical, physical, and security...
10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining adequate technical, physical, and security...
4 CFR 200.14 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 4 Accounts 1 2011-01-01 2011-01-01 false Responsibility for maintaining adequate safeguards. 200....14 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining adequate technical, physical, and security safeguards to prevent unauthorized disclosure...
21 CFR 314.126 - Adequate and well-controlled studies.
Code of Federal Regulations, 2011 CFR
2011-04-01
...-evident (general anesthetics, drug metabolism). (3) The method of selection of subjects provides adequate... respect to pertinent variables such as age, sex, severity of disease, duration of disease, and use of... 21 Food and Drugs 5 2011-04-01 2011-04-01 false Adequate and well-controlled studies....
Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling
ERIC Educational Resources Information Center
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah
2014-01-01
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
Donnell, Deborah; Komárek, Arnošt; Omelka, Marek; Mullis, Caroline E.; Szekeres, Greg; Piwowar-Manning, Estelle; Fiamma, Agnes; Gray, Ronald H.; Lutalo, Tom; Morrison, Charles S.; Salata, Robert A.; Chipato, Tsungai; Celum, Connie; Kahle, Erin M.; Taha, Taha E.; Kumwenda, Newton I.; Karim, Quarraisha Abdool; Naranbhai, Vivek; Lingappa, Jairam R.; Sweat, Michael D.; Coates, Thomas; Eshleman, Susan H.
2013-01-01
Background Accurate methods of HIV incidence determination are critically needed to monitor the epidemic and determine the population level impact of prevention trials. One such trial, Project Accept, a Phase III, community-randomized trial, evaluated the impact of enhanced, community-based voluntary counseling and testing on population-level HIV incidence. The primary endpoint of the trial was based on a single, cross-sectional, post-intervention HIV incidence assessment. Methods and Findings Test performance of HIV incidence determination was evaluated for 403 multi-assay algorithms [MAAs] that included the BED capture immunoassay [BED-CEIA] alone, an avidity assay alone, and combinations of these assays at different cutoff values with and without CD4 and viral load testing on samples from seven African cohorts (5,325 samples from 3,436 individuals with known duration of HIV infection [1 month to >10 years]). The mean window period (average time individuals appear positive for a given algorithm) and performance in estimating an incidence estimate (in terms of bias and variance) of these MAAs were evaluated in three simulated epidemic scenarios (stable, emerging and waning). The power of different test methods to detect a 35% reduction in incidence in the matched communities of Project Accept was also assessed. A MAA was identified that included BED-CEIA, the avidity assay, CD4 cell count, and viral load that had a window period of 259 days, accurately estimated HIV incidence in all three epidemic settings and provided sufficient power to detect an intervention effect in Project Accept. Conclusions In a Southern African setting, HIV incidence estimates and intervention effects can be accurately estimated from cross-sectional surveys using a MAA. The improved accuracy in cross-sectional incidence testing that a MAA provides is a powerful tool for HIV surveillance and program evaluation. PMID:24236054
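The arithmetic behind a cross-sectional incidence estimate from a mean window period can be illustrated with the standard estimator. The survey counts below are hypothetical; only the 259-day window comes from the abstract.

```python
# Generic cross-sectional HIV incidence estimator (standard form, not the
# trial's exact statistical model): annual incidence is approximately the
# fraction of HIV-negative-equivalent person-time spent "MAA recent",
# scaled from the window period to one year.

def annual_incidence(n_recent, n_negative, window_days=259.0):
    """Estimated infections per person-year among the at-risk population."""
    return (n_recent / n_negative) * (365.0 / window_days)

# Hypothetical survey: 40 MAA-recent cases among 4,000 HIV-negative
# participants, with the 259-day window reported for the selected MAA.
rate = annual_incidence(40, 4000)
```

A longer window period captures more recent infections per survey, which is part of why the 259-day MAA provided sufficient power to detect the 35% incidence reduction targeted in Project Accept.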
Purchasing a cycle helmet: are retailers providing adequate advice?
Plumridge, E.; McCool, J.; Chetwynd, J.; Langley, J. D.
1996-01-01
OBJECTIVES: The aim of this study was to examine the selling of cycle helmets in retail stores with particular reference to the adequacy of advice offered about the fit and securing of helmets. METHODS: All 55 retail outlets selling cycle helmets in Christchurch, New Zealand were studied by participant observation. A researcher entered each store as a prospective customer and requested assistance to purchase a helmet. She took detailed field notes of the ensuing encounter and these were subsequently transcribed, coded, and analysed. RESULTS: Adequate advice for helmet purchase was given in less than half of the stores. In general the sales assistants in specialist cycle shops were better informed and gave more adequate advice than those in department stores. Those who gave good advice also tended to be more active in helping with fitting the helmet. Knowledge about safety standards was apparent in one third of sales assistants. Few stores displayed information for customers about the correct fit of cycle helmets. CONCLUSIONS: These findings suggest that the advice and assistance being given to ensure that cycle helmets fit properly is often inadequate and thus the helmets may fail to fulfil their purpose in preventing injury. Consultation between retailers and policy makers is a necessary first step to improving this situation. PMID:9346053
Miyazaki, Kentaro
2011-01-01
MEGAWHOP allows for the cloning of DNA fragments into a vector and is used in place of conventional restriction digestion/ligation-based procedures. In MEGAWHOP, the DNA fragment to be cloned is used as a set of complementary primers that replace a homologous region in a template vector through whole-plasmid PCR. After synthesis of a nicked circular plasmid, the mixture is treated with DpnI, a dam-methylated DNA-specific restriction enzyme, to digest the template plasmid. The DpnI-treated mixture is then introduced into competent Escherichia coli cells to yield plasmids carrying the replaced insert fragments. Plasmids produced by the MEGAWHOP method are virtually free of contamination by species without any insert, with multiple inserts, or by the parent plasmid. Because the fragment is usually long enough not to interfere with hybridization to the template, various types of fragments can be used, with mutations at any site (either known or unknown, random, or specific). By using fragments having homologous sequences at the ends (e.g., an adaptor sequence), MEGAWHOP can also be used to recombine nonhomologous sequences mediated by the adaptors, allowing rapid creation of novel constructs and chimeric genes. PMID:21601687
NASA Astrophysics Data System (ADS)
Michaľčonok, German; Kalinová, Michaela Horalová; Németh, Martin
2014-12-01
The aim of this paper is to present the possibilities of applying data mining techniques to the analysis of structural relationships in systems of stationary random processes. We first introduce the area of random processes, describe the process of structural analysis, and select suitable data mining methods applicable to structural analysis. Building on this theoretical basis, we then propose a methodology for structural analysis in systems of stationary stochastic processes that uses data mining methods within an active experimental approach.
MOMENT-BASED METHOD FOR RANDOM EFFECTS SELECTION IN LINEAR MIXED MODELS
Ahn, Mihye; Lu, Wenbin
2012-01-01
The selection of random effects in linear mixed models is an important yet challenging problem in practice. We propose a robust and unified framework for automatically selecting random effects and estimating covariance components in linear mixed models. A moment-based loss function is first constructed for estimating the covariance matrix of random effects. Two types of shrinkage penalties, a hard thresholding operator and a new sandwich-type soft-thresholding penalty, are then imposed for sparse estimation and random effects selection. Compared with existing approaches, the new procedure does not require any distributional assumption on the random effects and error terms. We establish the asymptotic properties of the resulting estimator in terms of its consistency in both random effects selection and variance component estimation. Optimization strategies are suggested to tackle the computational challenges involved in estimating the sparse variance-covariance matrix. Furthermore, we extend the procedure to incorporate the selection of fixed effects as well. Numerical results show promising performance of the new approach in selecting both random and fixed effects and, consequently, improving the efficiency of estimating model parameters. Finally, we apply the approach to a data set from the Amsterdam Growth and Health study. PMID:23105913
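The two shrinkage penalties named in the abstract act, in simplified entrywise form, as follows. The example covariance matrix is made up, and this sketch thresholds entries of a fixed estimate rather than solving the paper's penalized moment-based loss.

```python
import numpy as np

# Entrywise illustration of the two shrinkage operators: hard thresholding
# zeroes small entries outright, while soft thresholding shrinks every
# entry toward zero by the penalty level. Zeroed diagonal entries
# correspond to random effects dropped from the model.

def hard_threshold(G, lam):
    H = G.copy()
    H[np.abs(H) < lam] = 0.0
    return H

def soft_threshold(G, lam):
    return np.sign(G) * np.maximum(np.abs(G) - lam, 0.0)

# Hypothetical estimated covariance of two candidate random effects: the
# second has near-zero variance, suggesting it should be selected out.
G_hat = np.array([[1.00, 0.03],
                  [0.03, 0.0001]])
G_hard = hard_threshold(G_hat, lam=0.05)
G_soft = soft_threshold(G_hat, lam=0.05)
```

In both cases the tiny variance component is set exactly to zero, which is the sparse-selection behavior the paper exploits.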
Turbulence parameterizations for the random displacement method (RDM) version of ADPIC
Nasstrom, J.S.
1995-05-01
This document describes the algorithms that are used in the new random displacement method (RDM) option in the ADPIC model to parameterize atmospheric boundary layer turbulence through an eddy diffusivity, K. Both the new RDM version and previous gradient version of ADPIC use eddy diffusivities, and, as before, several parameterization options are available. The options used in the RDM are similar to the options for the existing Gradient method in ADPIC, but with some changes. Preferred parameterizations are based on boundary layer turbulence scaling parameters and measured turbulent velocity statistics. Simpler parameterizations, based solely on Pasquill stability class, are also available. When eddy diffusivities are based on boundary layer turbulence scaling parameters (i.e., u, h, z and L), "turbulence parameterization" is an appropriate term. In other cases, this term is used loosely to describe "sigma curves". These are semi-empirical relationships between the standard deviations, σ_z(x) and σ_y(x), of concentration from a point source and downwind distance. Separate sigma curves are used for each of six Pasquill stability classes, which are used to categorize the diffusive properties of the atmospheric surface layer. Consequently, sigma curves are more than parameterizations of turbulence, since they also prescribe the final concentration distribution (for a point source) given a Pasquill stability class. In the ADPIC model, sigma curves can be used to calculate the eddy diffusivities, K_Z and K_H. Thus, they can be used to "back out" parameterizations for K which are consistent with the dispersion associated with the particular sigma curve. This results in eddy diffusivities which are spatially homogeneous, but travel time dependent.
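The core RDM update can be sketched for the simplest case of a constant, homogeneous K. The parameterizations described above make K vary with stability and height; the constant-K case is used here only because it can be checked against the analytic plume variance 2Kt.

```python
import numpy as np

# Minimal 1-D random displacement method with constant eddy diffusivity K.
# Each particle is displaced by dx = sqrt(2*K*dt) * xi per step, with
# xi ~ N(0, 1), so after total time t the plume variance approaches 2*K*t.

rng = np.random.default_rng(42)

def rdm_variance(K=10.0, dt=1.0, n_steps=100, n_particles=20000):
    steps = np.sqrt(2.0 * K * dt) * rng.standard_normal((n_particles, n_steps))
    x = steps.sum(axis=1)   # particle positions after n_steps displacements
    return x.var()

# K = 10 m^2/s over t = 100 s: analytic plume variance is 2*K*t = 2000 m^2.
empirical = rdm_variance()
```

With spatially varying K, the update also needs a drift term proportional to dK/dz to avoid spurious particle accumulation; that refinement is omitted from this constant-K sketch.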
NASA Astrophysics Data System (ADS)
Kissel, Glen J.
2009-08-01
In the one-dimensional optical analog to Anderson localization, a periodically layered medium has one or more parameters randomly disordered. Such a medium can be modeled by an infinite product of 2x2 random transfer matrices with the upper Lyapunov exponent of the matrix product identified as the localization factor (inverse localization length). Furstenberg's integral formula for the Lyapunov exponent requires integration with respect to both the probability measure of the random matrices and the invariant probability measure of the direction of the vector propagated by the random matrix product. This invariant measure is difficult to find analytically, so one of several numerical techniques must be used in its calculation. Here, we focus on one of those techniques, Ulam's method, which sets up a sparse matrix of the probabilities that an entire interval of possible directions will be transferred to some other interval of directions. The left eigenvector of this sparse matrix forms the estimated invariant measure. While Ulam's method is shown to produce results as accurate as others, it suffers from long computation times. The Ulam method, along with other approaches, is demonstrated on a random Fibonacci sequence having a known answer, and on a quarter-wave stack model with discrete disorder in layer thickness.
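A minimal version of Ulam's method is shown below, demonstrated on the logistic map (whose invariant density 1/(pi*sqrt(x(1-x))) is known exactly) rather than on the paper's random transfer-matrix direction dynamics. The interval is split into cells, P[i, j] estimates the probability that a point in cell i maps into cell j, and the left stationary vector of P approximates the invariant measure.

```python
import numpy as np

# Generic Ulam-method sketch: discretize the state space into cells, build
# the cell-to-cell transition matrix from sample points, and take the left
# eigenvector for eigenvalue 1 (found here by power iteration) as the
# estimated invariant measure.

def ulam_invariant_measure(f, n_cells=200, samples_per_cell=100):
    P = np.zeros((n_cells, n_cells))
    for i in range(n_cells):
        # deterministic sample points spread through cell i of [0, 1]
        xs = (i + (np.arange(samples_per_cell) + 0.5) / samples_per_cell) / n_cells
        js = np.minimum((f(xs) * n_cells).astype(int), n_cells - 1)
        for j in js:
            P[i, j] += 1.0 / samples_per_cell
    mu = np.full(n_cells, 1.0 / n_cells)
    for _ in range(2000):          # power iteration on the left eigenvector
        mu = mu @ P
        mu /= mu.sum()
    return mu

mu = ulam_invariant_measure(lambda x: 4.0 * x * (1.0 - x))
# The exact invariant density diverges at the endpoints, so the estimated
# measure should put far more mass in the first cell than in a middle cell.
endpoint_heavy = mu[0] > mu[len(mu) // 2]
```

In the localization application, the same stationary vector feeds Furstenberg's integral formula for the Lyapunov exponent; the sparse structure of P is what the text's "sparse matrix" remark refers to.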
NASA Astrophysics Data System (ADS)
Zi, Bin; Zhou, Bin
2016-07-01
For the prediction of the dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with certain probability distribution are modeled as random variables, whereas the parameters with lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on the random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of extrema of bounds of dynamic response are determined by the random interval moment method and monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving the hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated deeply, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L.
Research on Parameter Estimation Methods for Alpha Stable Noise in a Laser Gyroscope’s Random Error
Wang, Xueyun; Li, Kui; Gao, Pengyu; Meng, Suxia
2015-01-01
Alpha stable noise, determined by four parameters, has been found in the random error of a laser gyroscope. Accurate estimation of the four parameters is the key process for analyzing the properties of alpha stable noise. Three widely used estimation methods—quantile, empirical characteristic function (ECF) and logarithmic moment method—are analyzed in contrast with Monte Carlo simulation in this paper. The estimation accuracy and the application conditions of all methods, as well as the causes of poor estimation accuracy, are illustrated. Finally, the highest precision method, ECF, is applied to 27 groups of experimental data to estimate the parameters of alpha stable noise in a laser gyroscope’s random error. The cumulative probability density curve of the experimental data fitted by an alpha stable distribution is better than that by a Gaussian distribution, which verifies the existence of alpha stable noise in a laser gyroscope’s random error. PMID:26230698
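The ECF idea singled out as the most precise method above can be sketched in one dimension. For symmetric alpha-stable data, |phi(t)| = exp(-(gamma*|t|)**alpha), so log(-log|phi(t)|) is linear in log|t| with slope alpha. The sketch is validated on Gaussian samples (the alpha = 2 stable case), not on real gyroscope data.

```python
import numpy as np

# Two-point empirical characteristic function (ECF) estimate of the
# stability index alpha for symmetric alpha-stable data.

rng = np.random.default_rng(7)

def ecf_alpha(x, t1=0.5, t2=1.0):
    """Slope of log(-log|phi_hat(t)|) versus log(t) between t1 and t2."""
    phi = lambda t: np.abs(np.mean(np.exp(1j * t * x)))
    y1 = np.log(-np.log(phi(t1)))
    y2 = np.log(-np.log(phi(t2)))
    return (y2 - y1) / (np.log(t2) - np.log(t1))

# Gaussian noise is the alpha = 2 special case of the stable family,
# so the estimator should return a value close to 2.
x = rng.standard_normal(200_000)
alpha_hat = ecf_alpha(x)
```

A full ECF estimator fits all four stable parameters over a grid of t values; the two-point slope above captures only the alpha-estimation step.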
Are PPS payments adequate? Issues for updating and assessing rates
Sheingold, Steven H.; Richter, Elizabeth
1992-01-01
Declining operating margins under Medicare's prospective payment system (PPS) have focused attention on the adequacy of payment rates. The question of whether annual updates to the rates have been too low or cost increases too high has become important. In this article we discuss issues relevant to updating PPS rates and judging their adequacy. We describe a modification to the current framework for recommending annual update factors. This framework is then used to retrospectively assess PPS payment and cost growth since 1985. The preliminary results suggest that current rates are more than adequate to support the cost of efficient care. Also discussed are why using financial margins to evaluate rates is problematic and alternative methods that might be employed. PMID:10127450
ERIC Educational Resources Information Center
Kullgren, Jeffrey T.; Harkins, Kristin A.; Bellamy, Scarlett L.; Gonzales, Amy; Tao, Yuanyuan; Zhu, Jingsan; Volpp, Kevin G.; Asch, David A.; Heisler, Michele; Karlawish, Jason
2014-01-01
Background: Financial incentives and peer networks could be delivered through eHealth technologies to encourage older adults to walk more. Methods: We conducted a 24-week randomized trial in which 92 older adults with a computer and Internet access received a pedometer, daily walking goals, and weekly feedback on goal achievement. Participants…
Liu, Xueqi; Wang, Hong-Wei
2011-01-01
of each single particle. There are several methods to assign the view for each particle, including the angular reconstitution [1] and random conical tilt (RCT) method [2]. In this protocol, we describe our practice in getting the 3D reconstruction of yeast exosome complex using negative staining EM and RCT. It should be noted that our protocol of electron microscopy and image processing follows the basic principle of RCT but is not the only way to perform the method. We first describe how to embed the protein sample into a layer of Uranyl-Formate with a thickness comparable to the protein size, using a holey carbon grid covered with a layer of continuous thin carbon film. Then the specimen is inserted into a transmission electron microscope to collect untilted (0-degree) and tilted (55-degree) pairs of micrographs that will be used later for processing and obtaining an initial 3D model of the yeast exosome. To this end, we perform RCT and then refine the initial 3D model by using the projection matching refinement method [3]. PMID:21490573
Webster, Clayton; Tempone, Raul; Nobile, Fabio
2007-12-01
This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.
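The collocation idea can be illustrated in one random dimension (a toy stand-in for the paper's Smolyak sparse grids in many dimensions): approximate E[f(Y)] for Y ~ N(0, 1) by evaluating f at deterministic Gauss-Hermite nodes, and compare against plain Monte Carlo sampling. The test function and sample sizes are arbitrary choices for illustration.

```python
import numpy as np

# Deterministic collocation versus Monte Carlo for a 1-D expectation.
# hermegauss gives nodes/weights for the weight function exp(-x^2/2),
# whose weights sum to sqrt(2*pi), hence the normalization below.

def collocation_mean(f, n_nodes=8):
    x, w = np.polynomial.hermite_e.hermegauss(n_nodes)
    return (w * f(x)).sum() / np.sqrt(2.0 * np.pi)

exact = np.exp(0.5)                  # E[exp(Y)] for Y ~ N(0, 1)
approx = collocation_mean(np.exp)    # 8 deterministic solves

rng = np.random.default_rng(1)
mc = np.exp(rng.standard_normal(10_000)).mean()  # 10,000 random solves
```

For smooth integrands the 8-node collocation estimate is accurate to many digits, while the Monte Carlo error decays only like 1/sqrt(N); this gap is the efficiency comparison the convergence analysis formalizes.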
2014-01-01
Background Excessive body weight, low physical activity and excessive sedentary time in youth are major public health concerns. A new generation of video games, ones that require physical activity to play (i.e., active games), may be a promising alternative to traditional non-active games to promote physical activity and reduce sedentary behaviors in youth. The aim of this manuscript is to describe the design of a study evaluating the effects of a family oriented active game intervention, incorporating several motivational elements, on anthropometrics and health behaviors in adolescents. Methods/Design The study is a randomized controlled trial (RCT), with non-active gaming adolescents aged 12 - 16 years old randomly allocated to a ten-month intervention (receiving active games, as well as an encouragement to play) or a waiting-list control group (receiving active games after the intervention period). Primary outcomes are adolescents' measured BMI-SDS (SDS = adjusted for mean standard deviation score), waist circumference-SDS, hip circumference and sum of skinfolds. Secondary outcomes are adolescents' self-reported time spent playing active and non-active games, other sedentary activities and consumption of sugar-sweetened beverages. In addition, a process evaluation is conducted, assessing the sustainability of the active games, enjoyment, perceived competence, perceived barriers for active game play, game context, injuries from active game play, activity replacement and intention to continue playing the active games. Discussion This is the first adequately powered RCT including normal-weight adolescents, evaluating a reasonably long period of provision of and exposure to active games. Further strong elements are the incorporation of motivational elements for active game play and a comprehensive process evaluation. This trial will provide evidence regarding the potential contribution of active games in prevention of excessive weight gain in
Zheng, Hui; Tian, Xiao-ping; Li, Ying; Liang, Fan-rong; Yu, Shu-guang; Liu, Xu-guang; Tang, Yong; Yang, Xu-guang; Yan, Jie; Sun, Guo-jie; Chang, Xiao-rong; Zhang, Hong-xing; Ma, Ting-ting; Yu, Shu-yuan
2009-01-01
Background Acupuncture is widely used in China to treat functional dyspepsia (FD). However, its effectiveness in the treatment of FD, and whether FD-specific acupoints exist, are controversial. This study therefore aims to determine if acupuncture is an effective treatment for FD and if acupoint specificity exists according to traditional acupuncture meridians and acupoint theories. Design This multicenter randomized controlled trial will include four acupoint treatment groups, one non-acupoint control group and one drug (positive control) group. The four acupoint treatment groups will focus on: (1) specific acupoints of the stomach meridian; (2) non-specific acupoints of the stomach meridian; (3) specific acupoints of alarm and transport points; and (4) acupoints of the gallbladder meridian. These four groups of acupoints are thought to differ in terms of clinical efficacy, according to traditional acupuncture meridians and acupoint theories. A total of 120 FD patients will be included in each group. Each patient will receive 20 sessions of acupuncture treatment over 4 weeks. The trial will be conducted in eight hospitals located in three centers of China. The primary outcomes in this trial will include differences in Nepean Dyspepsia Index scores and differences in the Symptom Index of Dyspepsia before randomization, 2 weeks and 4 weeks after randomization, and 1 month and 3 months after completing treatment. Discussion The important features of this trial include the randomization procedures (controlled by a central randomization system), a standardized protocol of acupuncture manipulation, and the fact that this is the first multicenter randomized trial of FD and acupuncture to be performed in China. The results of this trial will determine whether acupuncture is an effective treatment for FD and whether using different acupoints or different meridians leads to differences in clinical efficacy. Trial registration number ClinicalTrials.gov Identifier: NCT00599677
Ezaki, Naofumi; Watanabe, Yoshifumi; Mori, Hideharu
2015-10-27
As surfactants for preparation of nonaqueous microcapsule dispersions by the emulsion solvent evaporation method, three copolymers composed of stearyl methacrylate (SMA) and glycidyl methacrylate (GMA) with different monomer sequences (i.e., random, block, and block-random) were synthesized by reversible addition-fragmentation chain transfer (RAFT) polymerization. Despite having the same comonomer composition, the copolymers exhibited different functionality as surfactants for creating emulsions with respective dispersed and continuous phases consisting of methanol and isoparaffin solvent. The optimal monomer sequence for the surfactant was determined based on the droplet sizes and the stabilities of the emulsions created using these copolymers. The block-random copolymer led to an emulsion with better stability than obtained using the random copolymer and a smaller droplet size than achieved with the block copolymer. Modification of the epoxy group of the GMA unit by diethanolamine (DEA) further decreased the droplet size, leading to higher stability of the emulsion. The DEA-modified block-random copolymer gave rise to nonaqueous microcapsule dispersions after evaporation of methanol from the emulsions containing colored dyes in their dispersed phases. These dispersions exhibited high stability, and the particle sizes were small enough for application to the inkjet printing process. PMID:26421355
7 CFR 4290.200 - Adequate capital for RBICs.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 15 2011-01-01 2011-01-01 false Adequate capital for RBICs. 4290.200 Section 4290.200 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND... Qualifications for the RBIC Program Capitalizing A Rbic § 4290.200 Adequate capital for RBICs. You must meet...
13 CFR 107.200 - Adequate capital for Licensees.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Adequate capital for Licensees... INVESTMENT COMPANIES Qualifying for an SBIC License Capitalizing An Sbic § 107.200 Adequate capital for... Licensee, and to receive Leverage. (a) You must have enough Regulatory Capital to provide...
13 CFR 107.200 - Adequate capital for Licensees.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Adequate capital for Licensees... INVESTMENT COMPANIES Qualifying for an SBIC License Capitalizing An Sbic § 107.200 Adequate capital for... Licensee, and to receive Leverage. (a) You must have enough Regulatory Capital to provide...
7 CFR 4290.200 - Adequate capital for RBICs.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Adequate capital for RBICs. 4290.200 Section 4290.200 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND... Qualifications for the RBIC Program Capitalizing A Rbic § 4290.200 Adequate capital for RBICs. You must meet...
10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining...
40 CFR 716.25 - Adequate file search.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Adequate file search. 716.25 Section 716.25 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of...
40 CFR 51.354 - Adequate tools and resources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 2 2011-07-01 2011-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...
40 CFR 51.354 - Adequate tools and resources.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 2 2012-07-01 2012-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...
40 CFR 51.354 - Adequate tools and resources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 2 2014-07-01 2014-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...
40 CFR 51.354 - Adequate tools and resources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 2 2013-07-01 2013-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...
40 CFR 716.25 - Adequate file search.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...
40 CFR 716.25 - Adequate file search.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...
40 CFR 716.25 - Adequate file search.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 31 2014-07-01 2014-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...
40 CFR 716.25 - Adequate file search.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 32 2012-07-01 2012-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...
10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining...
10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining...
10 CFR 503.35 - Inability to obtain adequate capital.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Inability to obtain adequate capital. 503.35 Section 503.35 Energy DEPARTMENT OF ENERGY (CONTINUED) ALTERNATE FUELS NEW FACILITIES Permanent Exemptions for New Facilities § 503.35 Inability to obtain adequate capital. (a) Eligibility. Section 212(a)(1)(D)...
10 CFR 503.35 - Inability to obtain adequate capital.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Inability to obtain adequate capital. 503.35 Section 503.35 Energy DEPARTMENT OF ENERGY (CONTINUED) ALTERNATE FUELS NEW FACILITIES Permanent Exemptions for New Facilities § 503.35 Inability to obtain adequate capital. (a) Eligibility. Section 212(a)(1)(D)...
15 CFR 970.404 - Adequate exploration plan.
Code of Federal Regulations, 2011 CFR
2011-01-01
... ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES Certification of Applications § 970.404 Adequate exploration plan. Before he may certify an application, the Administrator must find... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Adequate exploration plan....
15 CFR 970.404 - Adequate exploration plan.
Code of Federal Regulations, 2010 CFR
2010-01-01
... ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES Certification of Applications § 970.404 Adequate exploration plan. Before he may certify an application, the Administrator must find... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Adequate exploration plan....
"Something Adequate"? In Memoriam Seamus Heaney, Sister Quinlan, Nirbhaya
ERIC Educational Resources Information Center
Parker, Jan
2014-01-01
Seamus Heaney talked of poetry's responsibility to represent the "bloody miracle", the "terrible beauty" of atrocity; to create "something adequate". This article asks, what is adequate to the burning and eating of a nun and the murderous gang rape and evisceration of a medical student? It considers Njabulo…
Oxide Defect Engineering Methods for Valence Change (VCM) Resistive Random Access Memories
NASA Astrophysics Data System (ADS)
Capulong, Jihan O.
Electrical switching requirements for resistive random access memory (ReRAM) devices are multifaceted, based on device application. Thus, it is important to obtain an understanding of these switching properties and how they relate to the oxygen vacancy concentration and oxygen vacancy defects. Oxygen vacancy defects in the switching oxide of valence-change-based ReRAM (VCM ReRAM) play a significant role in device switching properties. Oxygen vacancies facilitate resistive switching as they form the conductive filament that changes the resistance state of the device. This dissertation will present two methods of modulating the defect concentration in VCM ReRAM composed of Pt/HfOx/Ti stack: 1) rapid thermal annealing (RTA) in Ar using different temperatures, and 2) doping using ion implantation under different dose levels. Metrology techniques such as x-ray diffractometry (XRD), x-ray photoelectron spectroscopy (XPS), and photoluminescence (PL) spectroscopy were utilized to characterize the HfOx switching oxide, which provided insight on the material properties and oxygen vacancy concentration in the oxide that was used to explain the changes in the electrical properties of the ReRAM devices. The resulting impact on the resistive switching characteristics of the devices, such as the forming voltage, set and reset threshold voltages, ON and OFF resistances, resistance ratio, and switching dispersion or uniformity were explored and summarized. Annealing in Ar showed significant impact on the forming voltage, with as much as 45% (from 22V to 12 V) of improvement, as the annealing temperature was increased. However, drawbacks of a higher oxide leakage and worse switching uniformity were seen with increasing annealing temperature. Meanwhile, doping the oxide by ion implantation showed significant effects on the resistive switching characteristics. Ta doping modulated the following switching properties with increasing dose: a) the reduction of the forming voltage, and Vset
Technology Transfer Automated Retrieval System (TEKTRAN)
Using computer-generated data calculated with known amounts of random error (E = 1, 5 & 10%) associated with the calculated qPCR cycle number (C) at the jth of four 1:10 dilutions, we found that the "efficiency" (eff) associated with each population distribution of n = 10,000 measurements varied from 0.95 to ...
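The dilution-series relationship this abstract exploits can be sketched numerically: each 1:10 dilution raises the cycle number by log(10)/log(1 + eff), so efficiency is recoverable from the fitted slope. A minimal sketch under that standard model; the function names and the multiplicative Gaussian error are illustrative assumptions, not the study's exact simulation:

```python
import numpy as np

def simulate_ct(eff, ct0, n_dilutions=4, noise_pct=0.0, n_rep=1000, rng=None):
    """Simulate qPCR cycle numbers over serial 1:10 dilutions.

    Each 1:10 step shifts the cycle number by log(10)/log(1+eff); optional
    multiplicative Gaussian error (noise_pct, in percent) mimics random
    measurement error on the calculated cycle number."""
    rng = np.random.default_rng(rng)
    d = np.arange(n_dilutions)                        # dilution index j
    ct = ct0 + d * np.log(10.0) / np.log(1.0 + eff)
    ct = ct[None, :] * (1.0 + rng.normal(0.0, noise_pct / 100.0,
                                         (n_rep, n_dilutions)))
    return d, ct

def estimate_efficiency(d, ct):
    """Recover efficiency from the slope of mean cycle number vs dilution index."""
    slope = np.polyfit(d, ct.mean(axis=0), 1)[0]      # cycles per 1:10 step
    return 10.0 ** (1.0 / slope) - 1.0                # invert slope = log10-relation
```

With zero noise the estimator returns the true efficiency exactly; added random error perturbs it, which is the effect the study quantifies.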
Increasing the Degrees of Freedom in Future Group Randomized Trials: The "df*" Method Revisited
ERIC Educational Resources Information Center
Murray, David M.; Blitstein, Jonathan L.; Hannan, Peter J.; Shadish, William R.
2012-01-01
Background: This article revisits an article published in Evaluation Review in 2005 on sample size estimation and power analysis for group-randomized trials. With help from a careful reader, we learned of an important error in the spreadsheet used to perform the calculations and generate the results presented in that article. As we studied the…
Shibukawa, Atsushi; Okamoto, Atsushi; Takabayashi, Masanori; Tomita, Akihisa
2014-02-24
We propose a spatial cross modulation method using a random diffuser and a phase-only spatial light modulator (SLM), by which arbitrary complex-amplitude fields can be generated with higher spatial resolution and diffraction efficiency than off-axis and double-phase computer-generated holograms. Our method encodes the original complex object as a phase-only diffusion image by scattering the complex object using a random diffuser. In addition, all incoming light to the SLM is consumed for a single diffraction order, making a diffraction efficiency of more than 90% possible. This method can be applied for holographic data storage, three-dimensional displays, and other such applications. PMID:24663718
Adequate peritoneal dialysis: theoretical model and patient treatment.
Tast, C
1998-01-01
The objective of this study was to evaluate the relationship between adequate PD with sufficient weekly Kt/V (2.0) and creatinine clearance (CCr) (60 L) and the necessary daily dialysate volume. These recommended parameters were the result of a recent multi-centre study (CANUSA). For this, 40 patients at our hospital who had carried out PD for at least 8 weeks and up to 6 years were examined and compared in 1996. These goals (CANUSA) are easily attainable in the early treatment of many individuals with a low body surface area (BSA). With higher BSA or missing RRF (residual renal function), the daily dose of dialysis must be adjusted. We found it difficult to obtain the recommended parameters and tried to find a solution to this problem. The simplest method is to increase the volume or exchange rate. The most expensive method is to change from CAPD to APD with the possibility of higher volume or exchange rates. Selection of therapy must take into consideration: 1. patient preference, 2. body mass, 3. peritoneal transport rates, 4. ability to perform therapy, 5. cost of therapy and 6. risk of peritonitis. With this information in mind, an individual prescription can be formulated and matched to the appropriate modality of PD. PMID:10392062
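The dose arithmetic behind these targets can be sketched. Weekly peritoneal Kt/V is commonly computed as 7 × (daily drained volume × dialysate-to-plasma urea ratio) / urea distribution volume, and inverting it gives the daily volume needed to hit a target. This is a generic textbook formula, not the exact CANUSA computation, and the example numbers are hypothetical:

```python
def weekly_peritoneal_ktv(daily_drain_l, dp_urea, urea_volume_l):
    """Weekly peritoneal Kt/V: 7 * daily urea clearance / distribution volume."""
    return 7.0 * daily_drain_l * dp_urea / urea_volume_l

def required_daily_volume(target_ktv, dp_urea, urea_volume_l):
    """Daily drain volume (litres) needed to reach a weekly Kt/V target."""
    return target_ktv * urea_volume_l / (7.0 * dp_urea)
```

For a hypothetical patient with 36 L of total body water and a D/P urea of 0.9, 10 L drained per day yields a weekly Kt/V of about 1.75, and reaching 2.0 requires roughly 11.4 L per day, which illustrates why larger patients without residual renal function may need APD or higher exchange volumes.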
Wu, Sheng; Crespi, Catherine M; Wong, Weng Kee
2012-09-01
The intraclass correlation coefficient (ICC) is a fundamental parameter of interest in cluster randomized trials as it can greatly affect statistical power. We compare common methods of estimating the ICC in cluster randomized trials with binary outcomes, with a specific focus on their application to community-based cancer prevention trials with primary outcome of self-reported cancer screening. Using three real data sets from cancer screening intervention trials with different numbers and types of clusters and cluster sizes, we obtained point estimates and 95% confidence intervals for the ICC using five methods: the analysis of variance estimator, the Fleiss-Cuzick estimator, the Pearson estimator, an estimator based on generalized estimating equations and an estimator from a random intercept logistic regression model. We compared estimates of the ICC for the overall sample and by study condition. Our results show that ICC estimates from different methods can be quite different, although confidence intervals generally overlap. The ICC varied substantially by study condition in two studies, suggesting that the common practice of assuming a common ICC across all clusters in the trial is questionable. A simulation study confirmed pitfalls of erroneously assuming a common ICC. Investigators should consider using sample size and analysis methods that allow the ICC to vary by study condition. PMID:22627076
Jin, Shi; Xiu, Dongbin; Zhu, Xueyu
2015-05-15
In this paper we develop a set of stochastic numerical schemes for hyperbolic and transport equations with diffusive scalings and subject to random inputs. The schemes are asymptotic preserving (AP), in the sense that they preserve the diffusive limits of the equations in discrete setting, without requiring excessive refinement of the discretization. Our stochastic AP schemes are extensions of the well-developed deterministic AP schemes. To handle the random inputs, we employ generalized polynomial chaos (gPC) expansion and combine it with stochastic Galerkin procedure. We apply the gPC Galerkin scheme to a set of representative hyperbolic and transport equations and establish the AP property in the stochastic setting. We then provide several numerical examples to illustrate the accuracy and effectiveness of the stochastic AP schemes.
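The gPC machinery can be illustrated on a toy problem: for u(T) = exp(-(k0 + sigma*xi)T) with a uniform random input xi on [-1, 1] (the solution of u' = -k u with random decay rate), the Legendre-chaos statistics reduce to Gauss-Legendre quadrature in the random variable. This is a collocation-style sketch of the expansion under assumed inputs, not the paper's Galerkin scheme for hyperbolic and transport equations:

```python
import numpy as np

def gpc_moments(k0, sigma, T, n_quad=16):
    """Mean and variance of u(T) = exp(-(k0 + sigma*xi)*T), xi ~ U(-1, 1),
    by Gauss-Legendre quadrature in the random dimension."""
    x, w = np.polynomial.legendre.leggauss(n_quad)
    w = w / 2.0                            # weights of the uniform density on [-1, 1]
    u = np.exp(-(k0 + sigma * x) * T)      # solution samples at quadrature nodes
    mean = np.sum(w * u)
    var = np.sum(w * u ** 2) - mean ** 2
    return mean, var
```

The quadrature result can be checked against the closed form E[u(T)] = exp(-k0*T) * sinh(sigma*T)/(sigma*T).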
Method for removal of random noise in eddy-current testing system
Levy, Arthur J.
1995-01-01
Eddy-current response voltages, generated during inspection of metallic structures for anomalies, are often replete with noise. Therefore, analysis of the inspection data and results is difficult or near impossible, resulting in inconsistent or unreliable evaluation of the structure. This invention processes the eddy-current response voltage, removing the effect of random noise, to allow proper identification of anomalies within and associated with the structure.
Method for Evaluation of Outage Probability on Random Access Channel in Mobile Communication Systems
NASA Astrophysics Data System (ADS)
Kollár, Martin
2012-05-01
In order to access the cell, all mobile communication technologies use a so-called random-access procedure. For example, in GSM this is represented by sending the CHANNEL REQUEST message from Mobile Station (MS) to Base Transceiver Station (BTS), which is consequently forwarded as a CHANNEL REQUIRED message to the Base Station Controller (BSC). If the BTS decodes some noise on the Random Access Channel (RACH) as a random access by mistake (a so-called 'phantom RACH'), then it is a question of pure coincidence which 'establishment cause' the BTS thinks it has recognized. A typical invalid channel access request or phantom RACH is characterized by an IMMEDIATE ASSIGNMENT procedure (assignment of an SDCCH or TCH) which is not followed by an ESTABLISH INDICATION sent from MS to BTS. In this paper a mathematical model for evaluation of the Power RACH Busy Threshold (RACHBT), used to guarantee a predetermined outage probability on the RACH, is described and discussed. It focuses on the Global System for Mobile Communications (GSM); however, the obtained results can be generalized to the remaining mobile technologies (
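The threshold-setting idea can be sketched with a generic Gaussian noise-power model: pick the busy threshold so that noise alone exceeds it with exactly the target probability. This is an illustrative stand-in for the paper's model, not its actual derivation, and the dBm figures below are hypothetical:

```python
from statistics import NormalDist

def rach_busy_threshold(noise_mean_dbm, noise_std_db, target_outage):
    """Threshold (dBm) that Gaussian noise power exceeds with prob. target_outage,
    i.e. the rate of phantom-RACH detections caused by noise alone."""
    z = NormalDist().inv_cdf(1.0 - target_outage)
    return noise_mean_dbm + z * noise_std_db

def phantom_rach_prob(threshold_dbm, noise_mean_dbm, noise_std_db):
    """Probability that noise power crosses the threshold (a phantom RACH)."""
    return 1.0 - NormalDist(noise_mean_dbm, noise_std_db).cdf(threshold_dbm)
```

The two functions are inverses: the threshold computed for a target outage reproduces that outage when fed back in, and tighter outage targets push the threshold up.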
Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping
2016-01-01
Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
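The iterative shrinkage-thresholding component of EWISTARS can be sketched in isolation: plain ISTA for l1-regularized least squares, with the exponential wavelet transform and random shift omitted (so this is a generic building block, not the authors' full algorithm):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (the 'shrinkage' step)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Plain ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

For A equal to the identity the iteration reaches its fixed point, soft-thresholding of b, in one step; for a generic measurement matrix the objective decreases monotonically from the zero initializer.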
Dimova, Rositsa B; Allison, David B
2016-01-01
The conclusions of Cassani et al. in the January 2015 issue of Nutrition Journal (doi: 10.1186/1475-2891-14-5 ) cannot be substantiated by the analysis reported nor by the data themselves. The authors ascribed the observed decrease in inflammatory markers to the components of flaxseed and based their conclusions on within-group comparisons made between the final and the baseline measurements separately in each arm of the randomized controlled trial. However, this is an improper approach and the conclusions of the paper are invalid. A correct analysis of the data shows no such effects. PMID:27265269
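The analysis the letter calls for compares the arms directly, for example via between-group comparison of change scores rather than separate within-arm before-after tests. A minimal Welch-type sketch with hypothetical data shapes, not Cassani et al.'s dataset:

```python
import numpy as np

def change_score_analysis(base_t, final_t, base_c, final_c):
    """Between-group comparison of change scores (treatment vs control).

    Returns the difference in mean change and its Welch-type standard error;
    this tests the intervention effect, unlike paired within-arm tests."""
    d_t = np.asarray(final_t, float) - np.asarray(base_t, float)
    d_c = np.asarray(final_c, float) - np.asarray(base_c, float)
    diff = d_t.mean() - d_c.mean()
    se = np.sqrt(d_t.var(ddof=1) / len(d_t) + d_c.var(ddof=1) / len(d_c))
    return diff, se
```

A within-arm decrease that also occurs in the control arm contributes nothing to `diff`, which is exactly why within-group p-values cannot substantiate a treatment effect.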
ERIC Educational Resources Information Center
Cui, Zhongmin; Kolen, Michael J.
2008-01-01
This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…
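The nonparametric bootstrap for a standard error can be sketched generically: resample the data with replacement, recompute the statistic, and take the standard deviation of the replicates. This illustrates the resampling step only, with the mean standing in for the equipercentile equating function:

```python
import numpy as np

def bootstrap_se(data, stat, n_boot=2000, rng=0):
    """Nonparametric bootstrap standard error of stat(data)."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data)
    reps = [stat(rng.choice(data, size=len(data), replace=True))
            for _ in range(n_boot)]
    return np.std(reps, ddof=1)
```

For the sample mean the bootstrap SE should track the analytic value s/sqrt(n), which gives a quick sanity check of the resampling loop.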
Durán Pacheco, Gonzalo; Hattendorf, Jan; Colford, John M; Mäusezahl, Daniel; Smith, Thomas
2009-10-30
Many different methods have been proposed for the analysis of cluster randomized trials (CRTs) over the last 30 years. However, the evaluation of methods on overdispersed count data has been based mostly on the comparison of results using empiric data; i.e. when the true model parameters are not known. In this study, we assess via simulation the performance of five methods for the analysis of counts in situations similar to real community-intervention trials. We used the negative binomial distribution to simulate overdispersed counts of CRTs with two study arms, allowing the period of time under observation to vary among individuals. We assessed different sample sizes, degrees of clustering and degrees of cluster-size imbalance. The compared methods are: (i) the two-sample t-test of cluster-level rates, (ii) generalized estimating equations (GEE) with empirical covariance estimators, (iii) GEE with model-based covariance estimators, (iv) generalized linear mixed models (GLMM) and (v) Bayesian hierarchical models (Bayes-HM). Variation in sample size and clustering led to differences between the methods in terms of coverage, significance, power and random-effects estimation. GLMM and Bayes-HM performed better in general with Bayes-HM producing less dispersed results for random-effects estimates although upward biased when clustering was low. GEE showed higher power but anticonservative coverage and elevated type I error rates. Imbalance affected the overall performance of the cluster-level t-test and the GEE's coverage in small samples. Important effects arising from accounting for overdispersion are illustrated through the analysis of a community-intervention trial on Solar Water Disinfection in rural Bolivia. PMID:19672840
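The simulation design and the simplest compared method, the cluster-level t-test (i), can be sketched together: overdispersed counts are drawn as a gamma-Poisson (negative binomial) mixture with variable observation time, and the test is a Welch t statistic on cluster-level rates. Parameter choices below are illustrative, not the paper's settings:

```python
import numpy as np

def simulate_cluster_counts(k, rate, overdisp, rng=None):
    """Overdispersed counts for k clusters: gamma-Poisson (negative binomial)
    mixture, with person-time varying between clusters."""
    rng = np.random.default_rng(rng)
    time = rng.uniform(0.5, 1.5, k)                   # unequal observation periods
    frailty = rng.gamma(1.0 / overdisp, overdisp, k)  # mean-1 cluster effect
    counts = rng.poisson(rate * time * frailty)
    return counts, time

def cluster_level_t(counts1, time1, counts2, time2):
    """Method (i): Welch t statistic comparing cluster-level rates between arms."""
    r1, r2 = counts1 / time1, counts2 / time2
    se2 = r1.var(ddof=1) / len(r1) + r2.var(ddof=1) / len(r2)
    return (r1.mean() - r2.mean()) / np.sqrt(se2)
```

With a genuine rate difference between arms and many clusters, the statistic is large in magnitude; under equal rates it hovers near zero, which is how coverage and type I error are assessed by simulation.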
On Adequate Comparisons of Antenna Phase Center Variations
NASA Astrophysics Data System (ADS)
Schoen, S.; Kersten, T.
2013-12-01
One important part of ensuring the high quality of the International GNSS Service's (IGS) products is the collection and publication of receiver- and satellite-antenna phase center variations (PCV). The PCV are crucial for global and regional networks, since they introduce a global scale factor of up to 16 ppb or changes in the height component of up to 10 cm, respectively. Furthermore, antenna phase center variations are also important for precise orbit determination, navigation and positioning of mobile platforms, like e.g. the GOCE and GRACE gravity missions, or for accurate Precise Point Positioning (PPP) processing. Using the EUREF Permanent Network (EPN), Baire et al. (2012) showed that individual PCV values have a significant impact on geodetic positioning. The statements are further supported by studies of Steigenberger et al. (2013), where the impact of PCV on local ties is analysed. Currently, there are five calibration institutions, including the Institut für Erdmessung (IfE), contributing to the IGS PCV file. Different approaches like field calibrations and anechoic chamber measurements are in use. Additionally, the computation and parameterization of the PCV are completely different between the methods. Therefore, every new approach has to pass a benchmark test in order to ensure that variations of PCV values of an identical antenna obtained from different methods are as consistent as possible. Since the number of approaches to obtain these PCV values rises with the number of calibration institutions, there is the necessity for an adequate comparison concept, taking into account not only the numerical values but also stochastic information and computational issues of the determined PCVs. This is of special importance, since the majority of calibrated receiver antennas published by the IGS originate from absolute field calibrations based on the Hannover Concept, Wübbena et al. (2000). In this contribution, a concept for the adequate
Strain analysis from objects with a random distribution: A generalized center-to-center method
NASA Astrophysics Data System (ADS)
Shan, Yehua; Liang, Xinquan
2014-03-01
Existing methods of strain analysis such as the center-to-center method and the Fry method estimate strain from the spatial relationship between point objects in the deformed state. They assume a truncated Poisson distribution of point objects in the pre-deformed state. Significant deviations occur in nature and diffuse the central vacancy in a Fry plot, limiting its effectiveness as a strain gauge. Therefore, a generalized center-to-center method is proposed to deal with point objects with the more general Poisson distribution, where the method outcomes do not depend on an analysis of a graphical central vacancy. This new method relies upon the probability mass function for the Poisson distribution, and adopts the maximum-likelihood method to solve for strain. The feasibility of the method is demonstrated by applying it to artificial data sets generated for known strains. Further analysis of these sets by use of the bootstrap method shows that the accuracy of the strain estimate has a strong tendency to increase either with point number or with the inclusion of more pre-deformation nearest neighbors. A poorly sorted, well packed, deformed conglomerate is analyzed, yielding a strain estimate similar to the vector mean of the major-axis directions of pebbles and the harmonic mean of their axial ratios from a shape-based strain determination method. These outcomes support the applicability of the new method to the analysis of deformed rocks with appropriate strain markers.
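The center-to-center idea can be sketched numerically: a homogeneous deformation D maps every separation vector between object centers by D, so the second moments of the pairwise vectors (the Fry-plot input) transform as D·Cov·Dᵀ, and the strain-ellipse axial ratio follows from their eigenvalues. This moment-based shortcut is an illustration of the geometry, not the paper's maximum-likelihood estimator:

```python
import numpy as np

def pairwise_vectors(points):
    """All center-to-center vectors between distinct point pairs (Fry-plot input)."""
    d = points[:, None, :] - points[None, :, :]
    iu = np.triu_indices(len(points), k=1)
    return d[iu]

def strain_ratio_from_pairs(points):
    """Axial ratio Rs of the strain ellipse from the second moments of the
    center-to-center vectors; equals sqrt(eigenvalue ratio) of their covariance."""
    v = pairwise_vectors(points)
    evals = np.sort(np.linalg.eigvalsh(np.cov(v.T)))
    return np.sqrt(evals[-1] / evals[0])
```

For an initially isotropic random point set deformed by D = diag(2, 0.5), the recovered axial ratio is close to 4, the imposed value.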
Smoke alarm tests may not adequately indicate smoke alarm function.
Peek-Asa, Corinne; Yang, Jingzhen; Hamann, Cara; Young, Tracy
2011-01-01
Smoke alarms are one of the most promoted prevention strategies to reduce residential fire deaths, and they can reduce residential fire deaths by half. Smoke alarm function can be measured by two tests: the smoke alarm button test and the chemical smoke test. Using results from a randomized trial of smoke alarms, we compared smoke alarm response to the button test and the smoke test. The smoke alarms found in the study homes at baseline were tested, as well as study alarms placed into homes as part of the randomized trial. Study alarms were tested at 12 and 42 months postinstallation. The proportion of alarms that passed the button test but not the smoke test ranged from 0.5 to 5.8% of alarms; this result was found most frequently among ionization alarms with zinc or alkaline batteries. These alarms would indicate to the owner (through the button test) that the smoke alarm was working, but the alarm would not actually respond in the case of a fire (as demonstrated by failing the smoke test). The proportion of alarms that passed the smoke test but not the button test ranged from 1.0 to 3.0%. These alarms would appear nonfunctional to the owner (because the button test failed), even though the alarm would operate in response to a fire (as demonstrated by passing the smoke test). The general public is not aware of the potential for inaccuracy in smoke alarm tests, and burn professionals can advocate for enhanced testing methods. The optimal test to determine smoke alarm function is the chemical smoke test. PMID:21747329
Kim, Diane N. H.; Teitell, Michael A.; Reed, Jason; Zangle, Thomas A.
2015-01-01
Abstract. Standard algorithms for phase unwrapping often fail for interferometric quantitative phase imaging (QPI) of biological samples due to the variable morphology of these samples and the requirement to image at low light intensities to avoid phototoxicity. We describe a new algorithm combining random walk-based image segmentation with linear discriminant analysis (LDA)-based feature detection, using assumptions about the morphology of biological samples to account for phase ambiguities when standard methods have failed. We present three versions of our method: first, a method for LDA image segmentation based on a manually compiled training dataset; second, a method using a random walker (RW) algorithm informed by the assumed properties of a biological phase image; and third, an algorithm which combines LDA-based edge detection with an efficient RW algorithm. We show that the combination of LDA plus the RW algorithm gives the best overall performance with little speed penalty compared to LDA alone, and that this algorithm can be further optimized using a genetic algorithm to yield superior performance for phase unwrapping of QPI data from biological samples. PMID:26305212
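For reference, the classic "standard algorithm" the abstract says can fail is simple to state in one dimension (Itoh's method: rewrap the finite differences, then integrate). It assumes true phase jumps of less than pi between samples, which is precisely the assumption that breaks down for noisy, low-intensity QPI data and motivates the segmentation-based approach above:

```python
import numpy as np

def unwrap_1d(phase):
    """Itoh's 1-D phase unwrapping: rewrap successive differences into
    (-pi, pi], then cumulatively sum them from the first sample."""
    d = np.diff(phase)
    d = (d + np.pi) % (2.0 * np.pi) - np.pi   # rewrapped finite differences
    return np.concatenate([[phase[0]], phase[0] + np.cumsum(d)])
```

On a clean, slowly varying ramp this recovers the true phase exactly and matches `np.unwrap`; corrupting the wrapped signal with strong noise is what produces the unwrapping failures the paper addresses.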
NASA Astrophysics Data System (ADS)
Kim, Diane N. H.; Teitell, Michael A.; Reed, Jason; Zangle, Thomas A.
2015-11-01
Standard algorithms for phase unwrapping often fail for interferometric quantitative phase imaging (QPI) of biological samples due to the variable morphology of these samples and the requirement to image at low light intensities to avoid phototoxicity. We describe a new algorithm combining random walk-based image segmentation with linear discriminant analysis (LDA)-based feature detection, using assumptions about the morphology of biological samples to account for phase ambiguities when standard methods have failed. We present three versions of our method: first, a method for LDA image segmentation based on a manually compiled training dataset; second, a method using a random walker (RW) algorithm informed by the assumed properties of a biological phase image; and third, an algorithm which combines LDA-based edge detection with an efficient RW algorithm. We show that the combination of LDA plus the RW algorithm gives the best overall performance with little speed penalty compared to LDA alone, and that this algorithm can be further optimized using a genetic algorithm to yield superior performance for phase unwrapping of QPI data from biological samples.
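The phase ambiguity these records address is easiest to see in the classic one-dimensional unwrapping rule, which the authors' 2D random-walker/LDA approach is designed to outperform. The sketch below is not the authors' algorithm, only the textbook neighbour-difference rule; the function name and test signal are illustrative:

```python
import numpy as np

def unwrap_1d(phase, period=2 * np.pi):
    """Recover a continuous phase signal from values wrapped into one
    period: any neighbour-to-neighbour jump larger than half a period is
    assumed to be a wrap, and a cumulative correction is carried along."""
    out = np.asarray(phase, dtype=float).copy()
    correction = 0.0
    for i in range(1, len(out)):
        step = phase[i] - phase[i - 1]
        if step > period / 2:
            correction -= period
        elif step < -period / 2:
            correction += period
        out[i] = phase[i] + correction
    return out

# a smooth ramp, wrapped into (-pi, pi] as an interferometer would report it
ramp = np.linspace(0.0, 10.0, 50)
wrapped = np.angle(np.exp(1j * ramp))
recovered = unwrap_1d(wrapped)
```

On clean data this rule suffices; its core assumption, that adjacent samples differ by less than half a period, is exactly what breaks down for noisy, low-intensity QPI images of irregular biological samples, which motivates the segmentation-based method described in the abstract.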
Probst, Yasmine; Zammit, Gail
2016-09-01
The importance of monitoring dietary intake within a randomized controlled trial becomes vital to justification of the study outcomes when the study is food-based. A systematic literature review was conducted to determine how dietary assessment methods used to monitor dietary intake are reported and whether assisted technologies are used in conducting such assessments. OVID and ScienceDirect databases 2000-2010 were searched for food-based, parallel, randomized controlled trials conducted with humans using the search terms "clinical trial," "diet$ intervention" AND "diet$ assessment," "diet$ method$," "intake," "diet history," "food record," "food frequency questionnaire," "FFQ," "food diary," "24-hour recall." A total of 1364 abstracts were reviewed and 243 studies identified. The size of the study and country of origin appear to be the two most common predictors of reporting both the dietary assessment method and details of the form of assessment. The journal in which the study is published has no impact. Information technology use may increase in the future allowing other methods and forms of dietary assessment to be used efficiently. PMID:26212597
Zhang, Jin; Chen, Cong
2016-09-20
In randomized oncology trials, patients in the control arm are sometimes permitted to switch to receive experimental drug after disease progression. This is mainly due to ethical reasons or to reduce the patient dropout rate. While progression-free survival is not usually impacted by crossover, the treatment effect on overall survival can be highly confounded. The rank-preserving structural failure time (RPSFT) model and iterative parametric estimation (IPE) are the main randomization-based methods used to adjust for confounding in the analysis of overall survival. While the RPSFT has been extensively studied, the properties of the IPE have not been thoroughly examined and its application is not common. In this manuscript, we clarify the re-censoring algorithm needed for IPE estimation and incorporate it into a method we propose as modified IPE (MIPE). We compared the MIPE and RPSFT via extensive simulations and then walked through the analysis using the modified IPE in a real clinical trial. We provided practical guidance on bootstrap by examining the performance in estimating the variance and confidence interval for the MIPE. Our results indicate that the MIPE method with the proposed re-censoring rule is an attractive alternative to the RPSFT method. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26919271
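The counterfactual construction underlying both RPSFT and IPE can be sketched in a few lines. The toy below uses simulated data with no censoring, and a simple mean-matching criterion stands in for the randomization-based rank test used in practice; all names and numbers are hypothetical. It shows the back-transformation and a grid-search g-estimate of the acceleration parameter psi:

```python
import numpy as np

rng = np.random.default_rng(42)

def counterfactual(t_off, t_on, psi):
    """RPSFT/IPE-style back-transformation: rescale the time spent on the
    experimental drug by exp(psi) to reconstruct counterfactual untreated time."""
    return t_off + np.exp(psi) * t_on

# Simulate control-arm patients who switch to the experimental drug at
# disease progression (hypothetical data; no censoring for simplicity).
n = 2000
untreated = rng.exponential(12.0, n)      # counterfactual survival (months)
progression = 0.4 * untreated             # time of switch to experimental drug
psi_true = -0.6                           # drug stretches remaining life by exp(0.6)
observed = progression + (untreated - progression) * np.exp(-psi_true)

# G-estimation by grid search: choose psi whose reconstructed untreated
# times best match the known untreated mean.
grid = np.linspace(-1.5, 0.5, 201)
errors = [abs(counterfactual(progression, observed - progression, p).mean()
              - untreated.mean()) for p in grid]
psi_hat = grid[int(np.argmin(errors))]
```

In a real analysis the matching criterion would be a randomization-based test between arms, censored times would be re-censored after transformation (the step the MIPE proposal formalizes), and variance would come from bootstrap, as the abstract discusses.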
Estimates of Adequate School Spending by State Based on National Average Service Levels.
ERIC Educational Resources Information Center
Miner, Jerry
1983-01-01
Proposes a method for estimating the expenditure per student needed to provide educational adequacy in each state. Illustrates the method using U.S., Arkansas, New York, Texas, and Washington State data, covering instruction, special needs, operations and maintenance, administration, and other costs. Estimates ratios of "adequate" to actual spending…
Arabidopsis: An Adequate Model for Dicot Root Systems?
Zobel, Richard W
2016-01-01
The Arabidopsis root system is frequently considered to have only three classes of root: primary, lateral, and adventitious. Research with other plant species has suggested up to eight different developmental/functional classes of root for a given plant root system. If Arabidopsis has only three classes of root, it may not be an adequate model for eudicot plant root systems. Recent research, however, can be interpreted to suggest that pre-flowering Arabidopsis has at least five of these classes of root. This suggests that Arabidopsis can be considered an adequate model for dicot plant root systems. PMID:26904040
Evaluating the Bookmark Standard Setting Method: The Impact of Random Item Ordering
ERIC Educational Resources Information Center
Davis-Becker, Susan L.; Buckendahl, Chad W.; Gerrow, Jack
2011-01-01
Throughout the world, cut scores are an important aspect of a high-stakes testing program because they are a key operational component of the interpretation of test scores. One method for setting standards that is prevalent in educational testing programs--the Bookmark method--is intended to be a less cognitively complex alternative to methods…
MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA
Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D
2013-01-01
Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages with regard to standard, single level approximation. The numerical results will highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.
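The telescoping identity behind multilevel acceleration, E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}], can be illustrated with a toy quantity of interest: a midpoint-rule approximation of an integral whose integrand depends on a uniform random parameter. This is a generic multilevel Monte Carlo sketch, not the sparse-grid SC method of the article; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def midpoint(theta, level):
    """Level-l approximation P_l of the quantity of interest: midpoint
    rule for the integral of exp(theta*x) over [0, 1] with 2**level
    subintervals (the 'spatial' discretization)."""
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.exp(np.outer(theta, x)).mean(axis=1)

def mlmc(levels, n_samples):
    """Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    Each correction term is sampled using the SAME random inputs on the
    fine and coarse levels, so its variance shrinks as levels refine."""
    estimate = 0.0
    for level, n in zip(levels, n_samples):
        theta = rng.uniform(0.0, 1.0, n)
        fine = midpoint(theta, level)
        coarse = midpoint(theta, level - 1) if level > 0 else 0.0
        estimate += np.mean(fine - coarse)
    return estimate

# many cheap samples on the coarse level, few on the expensive fine levels
estimate = mlmc(levels=[0, 1, 2, 3, 4], n_samples=[40000, 8000, 2000, 500, 200])
```

Most samples are spent on the cheap coarse level while the fine levels only estimate small corrections; balancing the sampling error against the discretization error in this way is the source of the computational saving over single-level approximation.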
Ellery, Adam J; Baker, Ruth E; Simpson, Matthew J
2016-01-01
Migration of cells and molecules in vivo is affected by interactions with obstacles. These interactions can include crowding effects, as well as adhesion/repulsion between the motile cell/molecule and the obstacles. Here we present an analytical framework that can be used to separately quantify the roles of crowding and adhesion/repulsion using a lattice-based random walk model. Our method leads to an exact calculation of the long time Fickian diffusivity, and avoids the need for computationally expensive stochastic simulations. PMID:27597573
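A minimal Monte Carlo version of the kind of lattice model described above (crowding only, with no adhesion or repulsion, and simulation rather than the authors' exact analytical calculation) can be written as follows; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
STEPS = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def msd(obstacle_fraction, n_walkers=500, n_steps=200, size=64):
    """Mean squared displacement of blind random walkers on a periodic
    square lattice where a fraction of sites is permanently blocked.
    A move into a blocked site is aborted (the walker waits in place)."""
    blocked = rng.random((size, size)) < obstacle_fraction
    free = np.argwhere(~blocked)
    start = free[rng.integers(0, len(free), n_walkers)]  # free start sites
    pos = start.copy()                                   # unwrapped positions
    for _ in range(n_steps):
        trial = pos + STEPS[rng.integers(0, 4, n_walkers)]
        ok = ~blocked[trial[:, 0] % size, trial[:, 1] % size]
        pos[ok] = trial[ok]
    disp = pos - start
    return float((disp ** 2).sum(axis=1).mean())

free_msd = msd(0.0)       # obstacle-free reference: MSD grows as n_steps
crowded_msd = msd(0.3)    # 30% of sites blocked: suppressed diffusivity
```

Comparing the two mean squared displacements gives a crude estimate of how much crowding alone suppresses the long-time diffusivity, which is the quantity the authors' framework computes exactly without simulation.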
Tashima, Hideaki; Takeda, Masafumi; Suzuki, Hiroyuki; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki
2010-06-21
We have shown that the application of double random phase encoding (DRPE) to biometrics enables the use of biometrics as cipher keys for binary data encryption. However, DRPE is reported to be vulnerable to known-plaintext attacks (KPAs) using a phase recovery algorithm. In this study, we investigated the vulnerability of DRPE using fingerprints as cipher keys to the KPAs. By means of computational experiments, we estimated the encryption key and restored the fingerprint image using the estimated key. Further, we propose a method for avoiding the KPA on the DRPE that employs the phase retrieval algorithm. The proposed method makes the amplitude component of the encrypted image constant in order to prevent the amplitude component of the encrypted image from being used as a clue for phase retrieval. Computational experiments showed that the proposed method not only avoids revealing the cipher key and the fingerprint but also serves as a sufficiently accurate verification system. PMID:20588510
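Classic DRPE itself is compact to state: one random phase mask applied in the spatial domain and a second in the Fourier domain. A sketch under simplifying assumptions (a real-valued random array standing in for a fingerprint image, no optical noise; all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def drpe_encrypt(img, mask1, mask2):
    """Double random phase encoding: multiply by a random phase mask in
    the spatial domain, Fourier transform, multiply by a second random
    phase mask in the frequency domain, then inverse transform."""
    spectrum = np.fft.fft2(img * np.exp(2j * np.pi * mask1))
    return np.fft.ifft2(spectrum * np.exp(2j * np.pi * mask2))

def drpe_decrypt(cipher, mask1, mask2):
    """Undo the two phase masks in reverse order (the masks are the key)."""
    spectrum = np.fft.fft2(cipher) * np.exp(-2j * np.pi * mask2)
    return np.fft.ifft2(spectrum) * np.exp(-2j * np.pi * mask1)

img = rng.random((32, 32))                         # stand-in for a fingerprint
key1, key2 = rng.random((32, 32)), rng.random((32, 32))
cipher = drpe_encrypt(img, key1, key2)
recovered = drpe_decrypt(cipher, key1, key2).real
```

The known-plaintext attack discussed in the abstract exploits the amplitude of the encrypted image as a constraint for phase retrieval; the authors' countermeasure of forcing that amplitude to be constant removes exactly this clue.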
NASA Astrophysics Data System (ADS)
Galindo-Torres, S. A.; Muñoz, J. D.; Alonso-Marroquín, F.
2010-11-01
Minkowski operators (dilation and erosion of sets in vector spaces) have been extensively used in computer graphics, image processing to analyze the structure of materials, and more recently in molecular dynamics. Here, we apply those mathematical concepts to extend the discrete element method to simulate granular materials with complex-shaped particles. The Voronoi-Minkowski diagrams are introduced to generate random packings of complex-shaped particles with tunable particle roundness. Contact forces and potentials are calculated in terms of distances instead of overlaps. By using the Verlet method to detect neighborhood, we achieve CPU times that grow linearly with the body’s number of sides. Simulations of dissipative granular materials under shear demonstrate that the method maintains conservation of energy in accord with the first law of thermodynamics. A series of simulations for biaxial test, shear band formation, hysteretic behavior, and ratcheting show that the model can reproduce the main features of real granular-soil behavior.
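The Verlet neighbourhood detection mentioned above can be sketched as a two-stage test: a rarely rebuilt candidate list padded by a safety "skin," followed by a cheap per-step contact check restricted to listed pairs. This is illustrative only; the list build below is quadratic, and the linear scaling reported in the abstract requires combining Verlet lists with techniques such as cell subdivision:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

def verlet_list(pos, cutoff, skin):
    """Record every pair of particles closer than cutoff + skin. Thanks
    to the skin, the list stays valid (no contact can be missed) until
    some particle has moved farther than skin/2, so it only needs
    rebuilding every few time steps."""
    r = cutoff + skin
    return [(i, j) for i, j in combinations(range(len(pos)), 2)
            if np.linalg.norm(pos[i] - pos[j]) < r]

def contacts(pos, pairs, cutoff):
    """Per-step contact detection: test only the listed candidate pairs."""
    return [(i, j) for i, j in pairs
            if np.linalg.norm(pos[i] - pos[j]) < cutoff]

pos = rng.random((60, 2))                  # hypothetical particle centres
candidates = verlet_list(pos, cutoff=0.1, skin=0.05)
touching = contacts(pos, candidates, cutoff=0.1)
```

Because the candidate list is a superset of the true contacts as long as no particle moves more than half the skin between rebuilds, the expensive all-pairs search is amortized over many time steps.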
Is the Marketing Concept Adequate for Continuing Education?
ERIC Educational Resources Information Center
Rittenburg, Terri L.
1984-01-01
Because educators have a social responsibility to those they teach, the marketing concept may not be adequate as a philosophy for continuing education. In attempting to broaden the audience for continuing education, educators should consider a societal marketing concept to meet the needs of the educationally disadvantaged. (SK)
Comparability and Reliability Considerations of Adequate Yearly Progress
ERIC Educational Resources Information Center
Maier, Kimberly S.; Maiti, Tapabrata; Dass, Sarat C.; Lim, Chae Young
2012-01-01
The purpose of this study is to develop an estimate of Adequate Yearly Progress (AYP) that will allow for reliable and valid comparisons among student subgroups, schools, and districts. A shrinkage-type estimator of AYP using the Bayesian framework is described. Using simulated data, the performance of the Bayes estimator will be compared to…
9 CFR 305.3 - Sanitation and adequate facilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 9, Animals and Animal Products: Sanitation and adequate facilities. Section 305.3. FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE; AGENCY ORGANIZATION AND TERMINOLOGY; MANDATORY MEAT AND POULTRY PRODUCTS INSPECTION AND VOLUNTARY INSPECTION AND CERTIFICATION...
Understanding Your Adequate Yearly Progress (AYP), 2011-2012
ERIC Educational Resources Information Center
Missouri Department of Elementary and Secondary Education, 2011
2011-01-01
The "No Child Left Behind Act (NCLB) of 2001" requires all schools, districts/local education agencies (LEAs) and states to show that students are making Adequate Yearly Progress (AYP). NCLB requires states to establish targets in the following ways: (1) Annual Proficiency Target; (2) Attendance/Graduation Rates; and (3) Participation Rates.…
Assessing Juvenile Sex Offenders to Determine Adequate Levels of Supervision.
ERIC Educational Resources Information Center
Gerdes, Karen E.; And Others
1995-01-01
This study analyzed the internal consistency of four inventories used by Utah probation officers to determine adequate and efficacious supervision levels and placement for juvenile sex offenders. Three factors accounted for 41.2 percent of variance (custodian's and juvenile's attitude toward intervention, offense characteristics, and historical…
34 CFR 200.13 - Adequate yearly progress in general.
Code of Federal Regulations, 2011 CFR
2011-07-01
Title 34, Education: Adequate yearly progress in general. Section 200.13. Regulations of the Offices of the Department of Education; OFFICE OF ELEMENTARY AND SECONDARY EDUCATION, DEPARTMENT OF EDUCATION; TITLE I, IMPROVING THE ACADEMIC ACHIEVEMENT OF THE...
34 CFR 200.20 - Making adequate yearly progress.
Code of Federal Regulations, 2011 CFR
2011-07-01
Title 34, Education: Making adequate yearly progress. Section 200.20. Regulations of the Offices of the Department of Education; OFFICE OF ELEMENTARY AND SECONDARY EDUCATION, DEPARTMENT OF EDUCATION; TITLE I, IMPROVING THE ACADEMIC ACHIEVEMENT OF THE DISADVANTAGED...
Do Beginning Teachers Receive Adequate Support from Their Headteachers?
ERIC Educational Resources Information Center
Menon, Maria Eliophotou
2012-01-01
The article examines the problems faced by beginning teachers in Cyprus and the extent to which headteachers are considered to provide adequate guidance and support to them. Data were collected through interviews with 25 school teachers in Cyprus, who had recently entered teaching (within 1-5 years) in public primary schools. According to the…
ERIC Educational Resources Information Center
Clark, Heddy Kovach; Ringwalt, Chris L.; Shamblen, Stephen R.; Hanley, Sean M.; Flewelling, Robert L.
2011-01-01
This exploratory study sought to determine if a popular school-based drug prevention program might be effective in schools that are making adequate yearly progress (AYP). Thirty-four schools with grades 6 through 8 in 11 states were randomly assigned either to receive Project ALERT (n = 17) or to a control group (n = 17); of these, 10 intervention…
Hui, Catherine; Joughin, Elaine; Nettel-Aguirre, Alberto; Goldstein, Simon; Harder, James; Kiefer, Gerhard; Parsons, David; Brauer, Carmen; Howard, Jason
2014-01-01
Background: The Ponseti method of congenital idiopathic clubfoot correction has traditionally specified plaster of Paris (POP) as the cast material of choice; however, there are negative aspects to using POP. We sought to determine the influence of cast material (POP v. semirigid fibreglass [SRF]) on clubfoot correction using the Ponseti method. Methods: Patients were randomized to POP or SRF before undergoing the Ponseti method. The primary outcome measure was the number of casts required for clubfoot correction. Secondary outcome measures included the number of casts by severity, ease of cast removal, need for Achilles tenotomy, brace compliance, deformity relapse, need for repeat casting and need for ancillary surgical procedures. Results: We enrolled 30 patients: 12 randomized to POP and 18 to SRF. There was no difference in the number of casts required for clubfoot correction between the groups (p = 0.13). According to parents, removal of POP was more difficult (p < 0.001), more time consuming (p < 0.001) and required more than 1 method (p < 0.001). At a final follow-up of 30.8 months, the mean times to deformity relapse requiring repeat casting, surgery or both were 18.7 and 16.4 months for the SRF and POP groups, respectively. Conclusion: There was no significant difference in the number of casts required for correction of clubfoot between the 2 materials, but SRF resulted in a more favourable parental experience, which cannot be ignored as it may have a positive impact on psychological well-being despite the increased cost associated with SRF. PMID:25078929