Urine sampling techniques in symptomatic primary-care patients: a diagnostic accuracy review.
Holm, Anne; Aabenhus, Rune
2016-06-08
Choice of urine sampling technique in urinary tract infection may impact diagnostic accuracy and thus lead to possible over- or undertreatment. Currently no evidence-based consensus exists regarding the correct technique for sampling urine from women with symptoms of urinary tract infection in primary care. The aim of this study was to determine the accuracy of urine culture from different sampling techniques in symptomatic non-pregnant women in primary care. A systematic review was conducted by searching Medline and Embase for clinical studies conducted in primary care using a randomized or paired design to compare the result of urine culture obtained with two or more collection techniques in adult, female, non-pregnant patients with symptoms of urinary tract infection. We evaluated the quality of the studies and compared accuracy based on dichotomized outcomes. We included seven studies investigating urine sampling technique in 1062 symptomatic patients in primary care. Mid-stream clean-catch sampling had a positive predictive value of 0.79 to 0.95 and a negative predictive value close to 1 compared to sterile techniques. Two randomized controlled trials found no difference in infection rate between mid-stream clean-catch, mid-stream urine and random samples. At present, no evidence suggests that sampling technique affects the accuracy of the microbiological diagnosis in non-pregnant women with symptoms of urinary tract infection in primary care. However, the evidence presented is indirect, and the difference between mid-stream clean-catch, mid-stream urine and random samples remains to be investigated in a paired design to verify the present findings.
Imbalanced Learning for Functional State Assessment
NASA Technical Reports Server (NTRS)
Li, Feng; McKenzie, Frederick; Li, Jiang; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom
2011-01-01
This paper presents results of several imbalanced learning techniques applied to operator functional state assessment, where the data are highly imbalanced, i.e., some functional states (majority classes) have many more training samples than other states (minority classes). Conventional machine learning techniques usually tend to classify all data samples into majority classes and perform poorly on minority classes. In this study, we implemented five imbalanced learning techniques, including random under-sampling, random over-sampling, synthetic minority over-sampling technique (SMOTE), borderline-SMOTE and adaptive synthetic sampling (ADASYN), to address this problem. Experimental results on a benchmark driving test dataset show that accuracies for minority classes could be improved dramatically at the cost of slight performance degradations for majority classes.
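As a concrete illustration of one of the techniques listed above, the sketch below is a minimal numpy implementation of SMOTE, which synthesizes new minority samples by interpolating between a minority sample and one of its nearest minority-class neighbours. The function name and toy data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: create n_new synthetic minority samples by
    interpolating between a chosen minority sample and one of its k nearest
    minority-class neighbours (illustrative, not the authors' implementation)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours per sample
    base = rng.integers(0, n, size=n_new)         # which minority sample to start from
    neigh = nbrs[base, rng.integers(0, k, size=n_new)]
    gap = rng.random((n_new, 1))                  # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[neigh] - X_min[base])

# toy usage: oversample a 20-point minority class by 80 synthetic points
X_min = np.random.default_rng(0).normal(size=(20, 3))
X_synthetic = smote(X_min, n_new=80, k=5, rng=1)
```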
NASA Technical Reports Server (NTRS)
Tomberlin, T. J.
1985-01-01
Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed provide a basis for those sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
Investigation of spectral analysis techniques for randomly sampled velocimetry data
NASA Technical Reports Server (NTRS)
Sree, Dave
1993-01-01
It is well known that laser velocimetry (LV) generates individual-realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scales information, requires special techniques. The 'slotting' technique of Mayo et al., also described by Roberts and Ajmani, and the 'direct transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is computationally faster than the direct transform method. There are practical limitations, however, on how high a frequency can be estimated accurately for a given mean sampling rate. These high-frequency estimates are important in obtaining the microscale information of turbulence structure. It was found from previous studies that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly sampled, the frequency range would be half the sampling frequency (i.e., up to the Nyquist frequency); otherwise, aliasing problems would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, there are large variabilities or errors associated with the high-frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain pre-filtering techniques to reduce these variabilities, but at the cost of the low-frequency estimates; the prefiltering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson-sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far? During his tenure under the 1993 NASA-ASEE Summer Faculty Fellowship Program, the author's studies on spectral analysis techniques for randomly sampled signals showed that the spectral estimates can be enhanced or improved up to about 4-5 times the mean sampling frequency by using a suitable prefiltering technique. This increased bandwidth, however, comes at the cost of the lower-frequency estimates. The studies further showed that large data sets of the order of 100,000 points or more, high data rates, and Poisson sampling are crucial for obtaining reliable spectral estimates from randomly sampled data, such as LV data. Some of the results of the current study are presented.
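The slotting technique mentioned above can be sketched compactly: every pair of samples contributes the product of velocity fluctuations to a lag "slot", and the slot averages approximate the autocorrelation, from which a spectrum follows by cosine transform. The sketch below is a minimal, illustrative version run on simulated Poisson-sampled data, not a validated research code.

```python
import numpy as np

def slotted_autocorrelation(t, u, max_lag, n_slots):
    """Slotting-technique sketch: drop each product of fluctuating velocities
    into a lag slot and average per slot."""
    u = u - u.mean()                        # fluctuating component
    dt = max_lag / n_slots                  # slot width
    num = np.zeros(n_slots)
    cnt = np.zeros(n_slots)
    for i in range(len(t)):
        lag = t[i:] - t[i]                  # non-negative lags to later samples
        k = (lag / dt).astype(int)
        ok = k < n_slots
        np.add.at(num, k[ok], u[i] * u[i:][ok])
        np.add.at(cnt, k[ok], 1)
    R = num / np.maximum(cnt, 1)            # slot-averaged correlation
    return R / R[0], (np.arange(n_slots) + 0.5) * dt

# Poisson-sampled sine wave as a stand-in for LV data (~500 Hz mean data rate)
rng = np.random.default_rng(0)
t = np.cumsum(rng.exponential(1.0 / 500.0, 5000))
u = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.normal(size=t.size)
rho, lags = slotted_autocorrelation(t, u, max_lag=0.05, n_slots=100)
# a power spectrum estimate then follows from a cosine transform of rho
```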
The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples
ERIC Educational Resources Information Center
Avetisyan, Marianna; Fox, Jean-Paul
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…
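For readers unfamiliar with the RR idea, the sketch below shows Warner's classic randomized response estimator for a single binary sensitive question; the paper's beta-binomial and Dirichlet-multinomial generalizations are not reproduced. The simulation parameters are illustrative assumptions.

```python
import numpy as np

def warner_estimate(yes_prop, p):
    """Warner (1965) randomized response estimator.
    Each respondent answers the sensitive question with probability p and its
    negation with probability 1 - p; yes_prop is the observed 'yes' rate."""
    return (yes_prop - (1 - p)) / (2 * p - 1)     # requires p != 0.5

# simulate 2000 respondents, true sensitive prevalence 0.15, spinner p = 0.7
rng = np.random.default_rng(0)
truth = rng.random(2000) < 0.15
ask_sensitive = rng.random(2000) < 0.7
answers = np.where(ask_sensitive, truth, ~truth)
print(warner_estimate(answers.mean(), p=0.7))     # close to 0.15
```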
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportion estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
40 CFR 761.130 - Sampling requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... sampling scheme and the guidance document are available on EPA's PCB Web site at http://www.epa.gov/pcb, or... § 761.125(c) (2) through (4). Using its best engineering judgment, EPA may sample a statistically valid random or grid sampling technique, or both. When using engineering judgment or random “grab” samples, EPA...
40 CFR 761.130 - Sampling requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... sampling scheme and the guidance document are available on EPA's PCB Web site at http://www.epa.gov/pcb, or... § 761.125(c) (2) through (4). Using its best engineering judgment, EPA may sample a statistically valid random or grid sampling technique, or both. When using engineering judgment or random “grab” samples, EPA...
NASA Astrophysics Data System (ADS)
Deng, Chengbin; Wu, Changshan
2013-12-01
Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
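The least squares derivation of endmember signatures from sample pixels with known abundances, followed by unconstrained SMA for a new pixel, can be sketched as follows. The variable names and simulated spectra are illustrative assumptions, not the authors' data or code.

```python
import numpy as np

# F (n_samples x n_endmembers) holds the known abundance fractions of the training
# pixels and S (n_samples x n_bands) their observed spectra.  The endmember
# signatures E (n_endmembers x n_bands) follow from a least squares solution of
# F @ E ~= S; unconstrained SMA for a new pixel is another least squares solve.
rng = np.random.default_rng(0)
E_true = rng.uniform(0.05, 0.6, size=(3, 6))           # 3 endmembers, 6 bands
F = rng.dirichlet(np.ones(3), size=200)                # known training abundances
S = F @ E_true + 0.005 * rng.normal(size=(200, 6))     # simulated training spectra

E_hat, *_ = np.linalg.lstsq(F, S, rcond=None)          # derived endmember spectra

pixel = np.array([0.5, 0.3, 0.2]) @ E_true             # a new mixed pixel
frac, *_ = np.linalg.lstsq(E_hat.T, pixel, rcond=None) # unconstrained SMA
print(frac)                                            # approximately [0.5, 0.3, 0.2]
```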
Williamson, Graham R
2003-11-01
This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not clearly stated, and in this circumstance a judgment has been made as to the sampling method employed, based on the indications given by the author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.
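A randomization (permutation) test of the kind proposed as an alternative can be sketched briefly: the reference distribution comes from re-shuffling group labels rather than from an assumption of random sampling. The sketch below uses simulated data and illustrative group sizes.

```python
import numpy as np

def randomization_test(a, b, n_perm=10000, rng=None):
    """Two-sample randomization test: build the reference distribution by
    re-shuffling group labels instead of assuming random sampling."""
    rng = np.random.default_rng(rng)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[:len(a)].mean() - perm[len(a):].mean()
        count += abs(diff) >= abs(observed)
    return observed, count / n_perm      # two-sided p-value

rng = np.random.default_rng(1)
a = rng.normal(5.0, 1.0, 30)             # e.g. scores from a convenience sample, group A
b = rng.normal(4.5, 1.0, 28)             # group B
print(randomization_test(a, b, rng=2))
```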
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
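Conclusion (3) rests on optimal (Neyman) allocation, which assigns stratum sample sizes in proportion to stratum size times stratum standard deviation. A minimal sketch with made-up stratum figures is shown below.

```python
import numpy as np

def neyman_allocation(N_h, S_h, n_total):
    """Neyman allocation: n_h proportional to N_h * S_h (stratum size times
    stratum standard deviation), given a fixed total sample size.
    Rounding may make the allocations differ from n_total by one or two."""
    w = N_h * S_h
    return np.round(n_total * w / w.sum()).astype(int)

# illustrative strata for one field, e.g. three depth/soil strata
N_h = np.array([400, 250, 150])     # potential sample locations per stratum
S_h = np.array([4.0, 2.5, 1.0])     # observed SD of soil moisture (%) per stratum
print(neyman_allocation(N_h, S_h, n_total=40))   # most samples go to the most variable stratum
```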
Sampling Methods in Cardiovascular Nursing Research: An Overview.
Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie
2014-01-01
Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
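A brief sketch of how three of the probability methods named above differ in practice (simple random, systematic, and proportionally allocated stratified selection from a numbered frame) is shown below; the frame and strata are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = np.arange(1000)                      # a numbered sampling frame of 1000 patients
n = 50

srs = rng.choice(frame, size=n, replace=False)           # simple random sample

k = len(frame) // n                                       # systematic sample: every k-th
start = rng.integers(0, k)
systematic = frame[start::k][:n]

strata = np.repeat([0, 1, 2], [500, 300, 200])            # e.g. three clinical sites
stratified = np.concatenate([
    rng.choice(frame[strata == s],                        # proportional allocation
               size=int(n * (strata == s).mean()),
               replace=False)
    for s in np.unique(strata)])
```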
Reduction of display artifacts by random sampling
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.
1983-01-01
The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.
Work Sampling Study of an Engineering Professor during a Regular Contract Period
ERIC Educational Resources Information Center
Brink, Jan; McDonald, Dale B.
2015-01-01
Work sampling is a technique that has been employed in industry and fields such as healthcare for some time. It is a powerful technique, and an alternative to conventional stop watch time studies, used by industrial engineers to focus upon random work sampling observations. This study applies work sampling to the duties performed by an individual…
Computer program uses Monte Carlo techniques for statistical system performance analysis
NASA Technical Reports Server (NTRS)
Wohl, D. P.
1967-01-01
Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
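The underlying Monte Carlo idea can be sketched in a few lines: draw every component disturbance and misalignment from its full distribution on each trial and accumulate the simulated system output. The toy error budget below is an illustrative assumption, not the program described.

```python
import numpy as np

# A toy "system": pointing error is the sum of a misalignment angle and two
# component disturbances with their own statistics.  Each Monte Carlo trial
# draws every component from its full distribution and records the system output.
rng = np.random.default_rng(0)
n_trials = 100_000
misalignment = rng.normal(0.0, 0.05, n_trials)      # deg, assumed Gaussian
gear_backlash = rng.uniform(-0.02, 0.02, n_trials)  # deg, assumed uniform
sensor_noise = rng.normal(0.0, 0.01, n_trials)      # deg

pointing_error = misalignment + gear_backlash + sensor_noise
print(pointing_error.std(), np.percentile(np.abs(pointing_error), 99.7))
```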
Ivahnenko, T.; Szabo, Z.; Gibs, J.
2001-01-01
Ground-water sampling techniques were modified to reduce random low-level contamination during collection of filtered water samples for determination of trace-element concentrations. The modified sampling techniques were first used in New Jersey by the US Geological Survey in 1994 along with inductively coupled plasma-mass spectrometry (ICP-MS) analysis to determine the concentrations of 18 trace elements at the one microgram-per-liter (μg/L) level in the oxic water of the unconfined sand and gravel Kirkwood-Cohansey aquifer system. The revised technique tested included a combination of the following: collection of samples (1) with flow rates of about 2 L per minute, (2) through acid-washed single-use disposable tubing and (3) a single-use disposable 0.45-μm pore size capsule filter, (4) contained within portable glove boxes, (5) in a dedicated clean sampling van, (6) only after turbidity stabilized at values less than 2 nephelometric turbidity units (NTU), when possible. Quality-assurance data, obtained from equipment blanks and split samples, indicated that trace element concentrations, with the exception of iron, chromium, aluminum, and zinc, measured in the samples collected in 1994 were not subject to random contamination at 1 μg/L. Results from samples collected in 1994 were compared to those from samples collected in 1991 from the same 12 PVC-cased observation wells using the available sampling and analytical techniques at that time. Concentrations of copper, lead, manganese and zinc were statistically significantly lower in samples collected in 1994 than in 1991. Sampling techniques used in 1994 likely provided trace-element data that represented concentrations in the aquifer with less bias than data from 1991 when samples were collected without the same degree of attention to sample handling.
ERIC Educational Resources Information Center
Bibiso, Abyot; Olango, Menna; Bibiso, Mesfin
2017-01-01
The purpose of this study was to investigate the relationship between teachers' commitment and female students' academic achievement in selected secondary schools of Wolaita zone, Southern Ethiopia. The research method employed was a survey study and the sampling techniques were purposive, simple random and stratified random sampling. Questionnaire…
Measurement of the absorption coefficient using the sound-intensity technique
NASA Technical Reports Server (NTRS)
Atwal, M.; Bernhard, R.
1984-01-01
The possibility of using the sound intensity technique to measure the absorption coefficient of a material is investigated. This technique measures the absorption coefficient by measuring the intensity incident on the sample and the net intensity reflected by the sample. Results obtained by this technique are compared with the standard techniques of measuring the change in the reverberation time and the standing wave ratio in a tube, thereby calculating the random-incidence and normal-incidence absorption coefficients.
Navigation Using Orthogonal Frequency Division Multiplexed Signals of Opportunity
2007-09-01
transmits a 32,767-bit pseudo-random "short" code that repeats 37.5 times per second. Since the pseudo-random bit pattern and modulation scheme are... correlation process takes two "sample windows," both of which are ν = 16 samples wide and are spaced N = 64 samples apart, and compares them. When the... technique in (3.4) is a necessary step in order to get a more accurate estimate of the sample shift from the symbol boundary correlator in (3.1). Figure
The Impact of Education on Rural Women's Participation in Political and Economic Activities
ERIC Educational Resources Information Center
Bishaw, Alemayehu
2014-01-01
This study endeavored to investigate the impact of education on rural women's participation in political and economic activities. Six hundred rural women and 12 gender Activists were selected for this study from three Zones of Amhara Region, Ethiopia using multi-stage random sampling technique and purposeful sampling techniques respectively.…
Motivational Factors and Teachers Commitment in Public Secondary Schools in Mbale Municipality
ERIC Educational Resources Information Center
Olurotimi, Ogunlade Joseph; Asad, Kamonges Wahab; Abdulrauf, Abdulkadir
2015-01-01
The study investigated the influence of motivational factors on teachers' commitment in public Secondary School in Mbale Municipality. The study employed Cross-sectional survey design. The sampling technique used to select was simple random sampling technique. The instrument used to collect data was a self designed questionnaire. The data…
Toward a Principled Sampling Theory for Quasi-Orders
Ünlü, Ali; Schrepp, Martin
2016-01-01
Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
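For orientation, the sketch below shows the simplest (and biased) way to draw a random quasi-order: generate a random binary relation, force reflexivity, and take the transitive closure. This is the kind of naive baseline whose sampling bias the paper's doubly inductive, bias-corrected algorithms are designed to avoid; the paper's algorithms themselves are not reproduced here.

```python
import numpy as np

def random_quasi_order(n_items, p=0.2, rng=None):
    """Baseline generator (biased, unlike the paper's corrected algorithms):
    random binary relation -> add reflexivity -> Warshall transitive closure."""
    rng = np.random.default_rng(rng)
    R = rng.random((n_items, n_items)) < p
    np.fill_diagonal(R, True)                      # reflexive
    for k in range(n_items):                       # transitive closure
        R = R | (R[:, k:k+1] & R[k:k+1, :])
    return R

Q = random_quasi_order(10, p=0.15, rng=0)
# Q is reflexive and transitive, i.e. a quasi-order on 10 items, but repeated
# draws over-represent "dense" quasi-orders -- the bias the paper corrects.
```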
Soil Sampling Techniques For Alabama Grain Fields
NASA Technical Reports Server (NTRS)
Thompson, A. N.; Shaw, J. N.; Mask, P. L.; Touchton, J. T.; Rickman, D.
2003-01-01
Characterizing the spatial variability of nutrients facilitates precision soil sampling. Questions exist regarding the best technique for directed soil sampling based on a priori knowledge of soil and crop patterns. The objective of this study was to evaluate zone delineation techniques for Alabama grain fields to determine which method best minimized the soil test variability. Site one (25.8 ha) and site three (20.0 ha) were located in the Tennessee Valley region, and site two (24.2 ha) was located in the Coastal Plain region of Alabama. Tennessee Valley soils ranged from well drained Rhodic and Typic Paleudults to somewhat poorly drained Aquic Paleudults and Fluventic Dystrudepts. Coastal Plain soils ranged from coarse-loamy Rhodic Kandiudults to loamy Arenic Kandiudults. Soils were sampled by grid soil sampling methods (grid sizes of 0.40 ha and 1 ha) consisting of: 1) twenty composited cores collected randomly throughout each grid (grid-cell sampling) and, 2) six composited cores collected randomly from a 3 x 3 m area at the center of each grid (grid-point sampling). Zones were established from 1) an Order 1 Soil Survey, 2) corn (Zea mays L.) yield maps, and 3) airborne remote sensing images. All soil properties were moderately to strongly spatially dependent as per semivariogram analyses. Differences in grid-point and grid-cell soil test values suggested grid-point sampling does not accurately represent grid values. Zones created by soil survey, yield data, and remote sensing images displayed lower coefficients of variation (CV) for soil test values than overall field values, suggesting these techniques group soil test variability. However, few differences were observed between the three zone delineation techniques. Results suggest directed sampling using zone delineation techniques outlined in this paper would result in more efficient soil sampling for these Alabama grain fields.
Analytical Applications of Monte Carlo Techniques.
ERIC Educational Resources Information Center
Guell, Oscar A.; Holcombe, James A.
1990-01-01
Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction to improve the efficiency, such as random equivalent sampling (RES). However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC), whose ADC cores are time interleaved. A prototype realization of this proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.
Errors in radial velocity variance from Doppler wind lidar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H.; Barthelmie, R. J.; Doubrawa, P.
A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.
Errors in radial velocity variance from Doppler wind lidar
Wang, H.; Barthelmie, R. J.; Doubrawa, P.; ...
2016-08-29
A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.
NASA Technical Reports Server (NTRS)
Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)
2002-01-01
Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time-varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty, including the uncertainty of the calibration, can be computed using LSR. The uncertainties of the calibration samples used in the regression are based upon treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes. Thus, the radiometer output is a series of samples obtained from these random processes. The samples are treated as random variables, but because the underlying processes are non-stationary, the statistics of the samples are treated as non-stationary. The statistics of the calibration samples depend upon the time for which the samples are to be applied. The statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval defined by the time of the calibration sample and when it is applied. This analysis opens the opportunity for experimental investigation into the underlying properties of receiver non-stationarity through the use of multiple calibration references. In this presentation we will discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experimental measurements.
Improving Ramsey spectroscopy in the extreme-ultraviolet region with a random-sampling approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eramo, R.; Bellini, M.; European Laboratory for Non-linear Spectroscopy
2011-04-15
Ramsey-like techniques, based on the coherent excitation of a sample by delayed and phase-correlated pulses, are promising tools for high-precision spectroscopic tests of QED in the extreme-ultraviolet (xuv) spectral region, but currently suffer experimental limitations related to long acquisition times and critical stability issues. Here we propose a random subsampling approach to Ramsey spectroscopy that, by allowing experimentalists to reach a given spectral resolution goal in a fraction of the usual acquisition time, leads to substantial improvements in high-resolution spectroscopy and may open the way to a widespread application of Ramsey-like techniques to precision measurements in the xuv spectral region.
ERIC Educational Resources Information Center
Jared, Nzabonimpa Buregeya
2011-01-01
The study examined the Influence of Secondary School Head Teachers' General and Instructional Supervisory Practices on Teachers' Work Performance. Qualitative and quantitative methods with a descriptive-correlational research approach were used in the study. A purposive sampling technique alongside a random sampling technique was used to select the…
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
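The inverse-probability resampling idea can be sketched as below: each tree in the ensemble is grown on a bootstrap sample drawn with probabilities proportional to the inverse of the inclusion probabilities, so the resamples resemble the unstratified population. This is a simplified, non-parametric stand-in for the parametric inverse-probability bagging proposed in the paper, and it assumes scikit-learn is available for the base trees.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ip_bagging(X, y, incl_prob, n_trees=100, rng=None):
    """Simplified inverse-probability bagging: each tree's bootstrap sample is
    drawn with probability proportional to 1 / inclusion-probability, so the
    resamples resemble the unstratified source population."""
    rng = np.random.default_rng(rng)
    w = 1.0 / incl_prob
    w = w / w.sum()
    trees = []
    for _ in range(n_trees):
        idx = rng.choice(len(y), size=len(y), replace=True, p=w)
        trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return trees

def predict(trees, X):
    # majority vote, assuming binary 0/1 labels
    votes = np.mean([t.predict(X) for t in trees], axis=0)
    return (votes > 0.5).astype(int)
```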
Erlandsson, Lena; Rosenstierne, Maiken W.; McLoughlin, Kevin; Jaing, Crystal; Fomsgaard, Anders
2011-01-01
A common technique used for sensitive and specific diagnostic virus detection in clinical samples is PCR that can identify one or several viruses in one assay. However, a diagnostic microarray containing probes for all human pathogens could replace hundreds of individual PCR-reactions and remove the need for a clear clinical hypothesis regarding a suspected pathogen. We have established such a diagnostic platform for random amplification and subsequent microarray identification of viral pathogens in clinical samples. We show that Phi29 polymerase-amplification of a diverse set of clinical samples generates enough viral material for successful identification by the Microbial Detection Array, demonstrating the potential of the microarray technique for broad-spectrum pathogen detection. We conclude that this method detects both DNA and RNA virus, present in the same sample, as well as differentiates between different virus subtypes. We propose this assay for diagnostic analysis of viruses in clinical samples. PMID:21853040
NASA Technical Reports Server (NTRS)
Vangenderen, J. L. (Principal Investigator); Lock, B. F.
1976-01-01
The author has identified the following significant results. Scope of the preprocessing techniques was restricted to standard material from the EROS Data Center accompanied by some enlarging procedures and the use of the diazo process. Investigation has shown that the most appropriate sampling strategy for this study is the stratified random technique. A viable sampling procedure, together with a method for determining minimum number of sample points in order to test results of any interpretation are presented.
Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data
NASA Technical Reports Server (NTRS)
Sree, David
1992-01-01
Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows, because of its inherent advantages; nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques to improve the spectral estimates from randomly sampled data were applied. Studies show that the usable frequency range of the spectral estimates can be extended up to about five times the mean sampling rate.
Creating ensembles of decision trees through sampling
Kamath, Chandrika; Cantu-Paz, Erick
2005-08-30
A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data, splitting the data, and combining multiple decision trees in ensembles.
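The core idea of evaluating a potential split on a random sample of the data, rather than on all sorted records, can be sketched as follows; the Gini criterion, function names and toy data are illustrative assumptions, not the system's actual implementation.

```python
import numpy as np

def gini(labels):
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_score_on_sample(x, y, threshold, sample_frac=0.1, rng=None):
    """Evaluate a candidate split (feature value <= threshold) using only a
    random sample of the records, as in sampling-based tree construction."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(y), size=max(1, int(sample_frac * len(y))), replace=False)
    xs, ys = x[idx], y[idx]
    left, right = ys[xs <= threshold], ys[xs > threshold]
    n = len(ys)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

# lower weighted Gini == better split; compare a few candidate thresholds
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = (x + 0.3 * rng.normal(size=x.size) > 0).astype(int)
print([round(split_score_on_sample(x, y, t, rng=1), 3) for t in (-1.0, 0.0, 1.0)])
```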
A Dexterous Optional Randomized Response Model
ERIC Educational Resources Information Center
Tarray, Tanveer A.; Singh, Housila P.; Yan, Zaizai
2017-01-01
This article addresses the problem of estimating the proportion π_S of the population belonging to a sensitive group using an optional randomized response technique in stratified sampling based on the Mangat model that has proportional and Neyman allocation and larger gain in efficiency. Numerically, it is found that the suggested model is…
Group Matching: Is This a Research Technique to Be Avoided?
ERIC Educational Resources Information Center
Ross, Donald C.; Klein, Donald F.
1988-01-01
The variance of the sample difference and the power of the "F" test for mean differences were studied under group matching on covariates and also under random assignment. Results shed light on systematic assignment procedures advocated to provide more precise estimates of treatment effects than simple random assignment. (TJH)
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
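A minimal first-order FAST sketch is shown below: parameters are driven along a search curve at distinct frequencies, and the partial variance of each parameter is read off the Fourier coefficients at its driving frequency and a few harmonics. The chosen frequencies, harmonic count and toy model are illustrative assumptions, not the paper's extended estimators for interaction effects.

```python
import numpy as np

def fast_first_order(model, omegas, n=10001, M=4):
    """Minimal FAST sketch: search-curve sampling x_i(s) = 0.5 + arcsin(sin(w_i s)) / pi,
    then the partial variance of parameter i is read off the Fourier coefficients
    at its driving frequency w_i and the first M harmonics."""
    s = np.pi * (2 * np.arange(1, n + 1) - n - 1) / n           # points spanning (-pi, pi)
    X = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi    # search curve, values in [0, 1]
    y = model(X)
    A = lambda k: np.mean(y * np.cos(k * s))
    B = lambda k: np.mean(y * np.sin(k * s))
    D_total = np.var(y)
    S = []
    for w in omegas:
        D_i = 2 * sum(A(p * w) ** 2 + B(p * w) ** 2 for p in range(1, M + 1))
        S.append(D_i / D_total)
    return np.array(S)

# toy model: y = x1 + 2*x2 (x3 inert); analytic first-order indices are 0.2, 0.8, 0.0
model = lambda X: X[0] + 2.0 * X[1]
print(fast_first_order(model, omegas=[11, 35, 61]))
```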
Xu, Chonggang; Gertner, George
2011-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
K-Fold Crossvalidation in Canonical Analysis.
ERIC Educational Resources Information Center
Liang, Kun-Hsia; And Others
1995-01-01
A computer-assisted, K-fold cross-validation technique is discussed in the framework of canonical correlation analysis of randomly generated data sets. Analysis results suggest that this technique can effectively reduce the contamination of canonical variates and canonical correlations by sample-specific variance components. (Author/SLD)
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
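For context, the sketch below shows the classical Karhunen-Loève sampling that the paper uses as its point of comparison: eigendecompose a dense covariance matrix on a small 1-D grid and combine the modes with independent standard normal coefficients. The covariance model and grid are illustrative assumptions; the scalable SPDE-based multilevel sampler itself is not reproduced.

```python
import numpy as np

# Classical Karhunen-Loeve sampling of a correlated Gaussian field on a small
# 1-D grid: eigendecompose the dense covariance matrix and combine the modes
# with i.i.d. standard normal coefficients.  This is the approach that becomes
# infeasible on large meshes, where the dense eigenproblem is too expensive.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
corr_len, sigma2 = 0.1, 1.0
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # exponential covariance

eigvals, eigvecs = np.linalg.eigh(C)
eigvals = np.clip(eigvals, 0.0, None)            # guard tiny negative round-off
xi = rng.standard_normal(len(x))                 # KL coefficients
field = eigvecs @ (np.sqrt(eigvals) * xi)        # one sample of the random field
```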
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
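A minimal least squares PCE fit in a single uniform random variable, using plain Monte Carlo sampling and a Legendre basis, is sketched below to fix ideas; the model, expansion order and oversampling ratio are illustrative assumptions, and the paper's comparison of sampling strategies is not reproduced.

```python
import numpy as np
from numpy.polynomial import legendre

# Least squares PCE sketch for a single uniform input on [-1, 1]:
# build a Vandermonde-like matrix of Legendre polynomials at Monte Carlo
# sample points and solve for the expansion coefficients by least squares.
rng = np.random.default_rng(0)
order = 6
n_samples = 4 * (order + 1)                      # oversampling ratio of 4
xi = rng.uniform(-1.0, 1.0, n_samples)           # Monte Carlo sampling strategy
y = np.exp(xi) * np.sin(3.0 * xi)                # the model being surrogated

Psi = legendre.legvander(xi, order)              # basis matrix, shape (n_samples, order + 1)
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

x_test = np.linspace(-1, 1, 5)
residual = legendre.legval(x_test, coeffs) - np.exp(x_test) * np.sin(3 * x_test)
print(residual)                                  # residuals shrink as the order grows
```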
Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao
2018-01-01
This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect the underlying damage of a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover the information of the microstructures based on the measurement data related to the internal situation of the STS structure. A numerical concrete tube model, with the various level of damage, was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were detected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show the rapid UCT technique has the capability of damage detection in an STS structure with a high level of accuracy and with fewer required measurements, which is more convenient and efficient than the traditional UCT technique.
The contribution of simple random sampling to observed variations in faecal egg counts.
Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I
2012-09-10
It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conformed to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown from a theoretical perspective to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, with illustrative examples of the potentially large confidence intervals that can arise from observed faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
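The width of those confidence intervals follows directly from Poisson theory; the sketch below computes an exact Poisson interval for a McMaster count, with the multiplication factor of 50 eggs per gram per egg counted taken as an illustrative assumption that depends on the protocol.

```python
from scipy.stats import chi2

def mcmaster_ci(eggs_counted, multiplication_factor=50, conf=0.95):
    """Exact Poisson confidence interval for a faecal egg count, expressed in
    eggs per gram (EPG).  The factor of 50 EPG per egg counted is illustrative
    and depends on the particular McMaster protocol used."""
    a = 1.0 - conf
    lo = 0.0 if eggs_counted == 0 else chi2.ppf(a / 2, 2 * eggs_counted) / 2
    hi = chi2.ppf(1 - a / 2, 2 * (eggs_counted + 1)) / 2
    return (eggs_counted * multiplication_factor,
            lo * multiplication_factor,
            hi * multiplication_factor)

# 6 eggs counted on the slide -> point estimate 300 EPG, but a wide interval
print(mcmaster_ci(6))   # roughly (300, 110, 653) EPG
```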
Is Knowledge Random? Introducing Sampling and Bias through Outdoor Inquiry
ERIC Educational Resources Information Center
Stier, Sam
2010-01-01
Sampling, very generally, is the process of learning about something by selecting and assessing representative parts of that population or object. In the inquiry activity described here, students learned about sampling techniques as they estimated the number of trees greater than 12 cm dbh (diameter at breast height) in a wooded, discrete area…
Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much
He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher
2016-01-01
Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance. PMID:28344429
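The two scan orders can be sketched side by side on a toy target; the bivariate normal example below is purely illustrative and is not the counterexample constructed in the paper.

```python
import numpy as np

def gibbs_bivariate_normal(n_iter, rho, scan="systematic", rng=None):
    """Gibbs sampler for a zero-mean bivariate normal with correlation rho.
    Full conditionals: x_i | x_j ~ N(rho * x_j, 1 - rho**2).
    scan='systematic' updates x0 then x1 each sweep; scan='random' picks one
    coordinate uniformly at random at every step."""
    rng = np.random.default_rng(rng)
    x = np.zeros(2)
    out = np.empty((n_iter, 2))
    sd = np.sqrt(1.0 - rho ** 2)
    for t in range(n_iter):
        coords = (0, 1) if scan == "systematic" else (rng.integers(0, 2),)
        for i in coords:
            x[i] = rng.normal(rho * x[1 - i], sd)
        out[t] = x
    return out

sys_chain = gibbs_bivariate_normal(20_000, rho=0.95, scan="systematic", rng=0)
rnd_chain = gibbs_bivariate_normal(20_000, rho=0.95, scan="random", rng=0)
print(np.corrcoef(sys_chain.T)[0, 1], np.corrcoef(rnd_chain.T)[0, 1])  # both near 0.95
```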
Attenuation of species abundance distributions by sampling
Shimadzu, Hideyasu; Darnell, Ross
2015-01-01
Quantifying biodiversity aspects such as species presence/absence, richness and abundance is an important challenge to answer scientific and resource management questions. In practice, biodiversity can only be assessed from biological material taken by surveys, a difficult task given limited time and resources. A type of random sampling, often called sub-sampling, is a commonly used technique to reduce the amount of time and effort for investigating large quantities of biological samples. However, it is not immediately clear how (sub-)sampling affects the estimate of biodiversity aspects from a quantitative perspective. This paper specifies the effect of (sub-)sampling as attenuation of the species abundance distribution (SAD), and articulates how the sampling bias is induced to the SAD by random sampling. The framework presented also reveals some confusion in previous theoretical studies. PMID:26064626
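One simple way to see the attenuation effect is to model sub-sampling as binomial thinning of the true species abundances, as sketched below; the lognormal community and sampling fractions are illustrative assumptions, not the paper's framework.

```python
import numpy as np

# Sub-sampling as binomial thinning: each individual of species s is retained
# independently with probability p, so observed counts are Binomial(N_s, p).
# Rare species drop out first, which attenuates the observed SAD relative to
# the community SAD (illustrative simulation only).
rng = np.random.default_rng(0)
community = rng.lognormal(mean=3.0, sigma=1.5, size=500).astype(int) + 1  # true abundances
for p in (1.0, 0.1, 0.01):
    observed = rng.binomial(community, p)
    print(f"sampling fraction {p:4}: "
          f"{np.sum(observed > 0):3d} species detected, "
          f"{observed.sum():6d} individuals")
```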
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of partial weighted random sampling schemes is to bias toward the high signal energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally can yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near-optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
Accounting for selection bias in association studies with complex survey data.
Wirth, Kathleen E; Tchetgen Tchetgen, Eric J
2014-05-01
Obtaining representative information from hidden and hard-to-reach populations is fundamental to describe the epidemiology of many sexually transmitted diseases, including HIV. Unfortunately, simple random sampling is impractical in these settings, as no registry of names exists from which to sample the population at random. However, complex sampling designs can be used, as members of these populations tend to congregate at known locations, which can be enumerated and sampled at random. For example, female sex workers may be found at brothels and street corners, whereas injection drug users often come together at shooting galleries. Despite the logistical appeal, complex sampling schemes lead to unequal probabilities of selection, and failure to account for this differential selection can result in biased estimates of population averages and relative risks. However, standard techniques to account for selection can lead to substantial losses in efficiency. Consequently, researchers implement a variety of strategies in an effort to balance validity and efficiency. Some researchers fully or partially account for the survey design, whereas others do nothing and treat the sample as a realization of the population of interest. We use directed acyclic graphs to show how certain survey sampling designs, combined with subject-matter considerations unique to individual exposure-outcome associations, can induce selection bias. Finally, we present a novel yet simple maximum likelihood approach for analyzing complex survey data; this approach optimizes statistical efficiency at no cost to validity. We use simulated data to illustrate this method and compare it with other analytic techniques.
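Unequal selection probabilities of the kind described above are often corrected with design weights. The sketch below is only a generic illustration of that idea, a Horvitz-Thompson style inverse-probability-weighted prevalence estimate; it is not the maximum likelihood approach proposed in the paper, and the variable names are hypothetical.

```python
# Illustrative sketch only: a design-weighted (inverse-probability) estimate of a
# population prevalence under unequal selection probabilities. This is a generic
# correction for differential selection, not the paper's maximum likelihood method.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample: binary outcome y and the probability with which each
# subject was selected into the sample under the complex design.
y = rng.integers(0, 2, size=500)
selection_prob = rng.uniform(0.05, 0.5, size=500)

weights = 1.0 / selection_prob                      # inverse-probability weights
naive = y.mean()                                    # ignores the design
weighted = np.sum(weights * y) / np.sum(weights)    # design-weighted estimate
print(f"naive={naive:.3f}  weighted={weighted:.3f}")
```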
Comparison of dialysis membrane diffusion samplers and two purging methods in bedrock wells
Imbrigiotta, T.E.; Ehlke, T.A.; Lacombe, P.J.; Dale, J.M.; ,
2002-01-01
Collection of ground-water samples from bedrock wells using low-flow purging techniques is problematic because of the random spacing, variable hydraulic conductivity, and variable contamination of contributing fractures in each well's open interval. To test alternatives to this purging method, a field comparison of three ground-water-sampling techniques was conducted on wells in fractured bedrock at a site contaminated primarily with volatile organic compounds. Constituent concentrations in samples collected with a diffusion sampler constructed from dialysis membrane material were compared to those in samples collected from the same wells with a standard low-flow purging technique and a hybrid (high-flow/low-flow) purging technique. Concentrations of trichloroethene, cis-1,2-dichloroethene, vinyl chloride, calcium, chloride, and alkalinity agreed well among samples collected with all three techniques in 9 of the 10 wells tested. Iron concentrations varied more than those of the other parameters, but their pattern of variation was not consistent. Overall, the results of nonparametric analysis of variance testing on the nine wells sampled twice showed no statistically significant difference at the 95-percent confidence level among the concentrations of volatile organic compounds or inorganic constituents recovered by use of any of the three sampling techniques.
Scientific Temper among Academically High and Low Achieving Adolescent Girls
ERIC Educational Resources Information Center
Kour, Sunmeet
2015-01-01
The present study was undertaken to compare the scientific temper of high and low achieving adolescent girl students. Random sampling technique was used to draw the sample from various high schools of District Srinagar. The sample for the present study consisted of 120 school going adolescent girls (60 high and 60 low achievers). Data was…
Improved importance sampling technique for efficient simulation of digital communication systems
NASA Technical Reports Server (NTRS)
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed derivations of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these derivations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communications systems.
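The translation idea behind IIS can be illustrated in a simpler setting than the communication systems above. The sketch below estimates a Gaussian tail probability, loosely analogous to a bit-error rate, by shifting the sampling distribution toward the rare event and reweighting; the shift-to-threshold choice is a common heuristic, not the paper's optimum translation parameter.

```python
# Hedged sketch: plain Monte Carlo vs. mean-translated importance sampling for the
# Gaussian tail probability P(X > a), as a stand-in for rare-event (error-rate)
# estimation in digital communication simulation.
import numpy as np

rng = np.random.default_rng(2)
a, n = 4.0, 100_000

# Plain Monte Carlo
x_mc = rng.normal(size=n)
p_mc = np.mean(x_mc > a)

# Importance sampling: draw from N(a, 1) and reweight by the likelihood ratio
x_is = rng.normal(loc=a, size=n)
lr = np.exp(-a * x_is + 0.5 * a**2)   # N(0,1) density / N(a,1) density
p_is = np.mean((x_is > a) * lr)

print(p_mc, p_is)   # the IS estimate has far lower variance for this rare event
```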
Modeling the Stress Complexities of Teaching and Learning of School Physics in Nigeria
ERIC Educational Resources Information Center
Emetere, Moses E.
2014-01-01
This study was designed to investigate the validity of the stress complexity model (SCM) to teaching and learning of school physics in Abuja municipal area council of Abuja, North. About two hundred students were randomly selected by a simple random sampling technique from some schools within the Abuja municipal area council. A survey research…
Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...
2018-01-30
This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then, the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10⁹ unknowns.
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements and are placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and the proposed scheme therefore has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength of recovering fine details and sharp edges at low bit-rates.
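The measurement step described above can be sketched in a few lines: the anti-aliasing low-pass filter is replaced by a local random binary convolution kernel, after which ordinary polyphase down-sampling is applied. The kernel size, normalization, and down-sampling factor below are arbitrary illustrative choices, not the paper's parameters.

```python
# Illustrative sketch of the encoder measurement step only: random binary
# pre-filtering followed by polyphase down-sampling; the result is still an
# ordinary (smaller) image of local random measurements.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(3)
image = rng.random((64, 64))                               # stand-in for the input image

kernel = rng.integers(0, 2, size=(3, 3)).astype(float)    # local random binary kernel
kernel /= kernel.sum() if kernel.sum() > 0 else 1.0        # keep measurement values bounded

filtered = convolve2d(image, kernel, mode="same", boundary="symm")
measurements = filtered[::2, ::2]                          # polyphase down-sampling by 2 per axis
print(measurements.shape)
```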
A sub-sampled approach to extremely low-dose STEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A.; Luzi, L.; Yang, H.
The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e⁻ Å⁻²) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in-situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
On the classification techniques in data mining for microarray data classification
NASA Astrophysics Data System (ADS)
Aydadenta, Husna; Adiwijaya
2018-03-01
Cancer is one of the deadliest diseases; according to WHO data, by 2015 there were 8.8 million deaths caused by cancer, and this number will increase every year if the disease is not detected earlier. Microarray data has become one of the most popular resources for cancer-identification studies in the field of health, since microarray data can be used to examine levels of gene expression in particular cell samples, allowing thousands of genes to be analyzed simultaneously. By using data mining techniques, we can classify microarray samples and thus identify whether or not they indicate cancer. In this paper we discuss research applying several data mining techniques to microarray data, such as Support Vector Machine (SVM), Artificial Neural Network (ANN), Naive Bayes, k-Nearest Neighbor (kNN), and C4.5, and a simulation of the Random Forest algorithm with dimensionality reduction using Relief. This paper reports the performance measure (accuracy) of each classification algorithm (SVM, ANN, Naive Bayes, kNN, C4.5, and Random Forest). The results show that the accuracy of the Random Forest algorithm is higher than that of the other classification algorithms (SVM, ANN, Naive Bayes, kNN, and C4.5). It is hoped that this paper can provide some information about the speed, accuracy, performance and computational cost generated by each data mining classification technique on microarray data.
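A minimal sketch of the classification step is shown below, using synthetic data standing in for a microarray matrix (samples × genes) and scikit-learn's Random Forest. The Relief-based dimension reduction mentioned above is not reproduced here (it is not part of scikit-learn), so this only illustrates the classifier.

```python
# Minimal Random Forest classification sketch on synthetic "microarray-like" data
# (few samples, many features); accuracy is reported on a held-out split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=120, n_features=2000, n_informative=30,
                           random_state=0)    # 120 samples, 2000 "genes"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```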
Comparison of efficacy of pulverization and sterile paper point techniques for sampling root canals.
Tran, Kenny T; Torabinejad, Mahmoud; Shabahang, Shahrokh; Retamozo, Bonnie; Aprecio, Raydolfo M; Chen, Jung-Wei
2013-08-01
The purpose of this study was to compare the efficacy of the pulverization and sterile paper point techniques for sampling root canals using 5.25% NaOCl/17% EDTA and 1.3% NaOCl/MTAD (Dentsply, Tulsa, OK) as irrigation regimens. Single-canal extracted human teeth were decoronated and infected with Enterococcus faecalis. Roots were randomly assigned to 2 irrigation regimens: group A with 5.25% NaOCl/17% EDTA (n = 30) and group B with 1.3% NaOCl/MTAD (n = 30). After chemomechanical debridement, bacterial samplings were taken using sterile paper points and pulverized powder of the apical 5 mm root ends. The sterile paper point technique did not show growth in any samples. The pulverization technique showed growth in 24 of the 60 samples. The Fisher exact test showed significant differences between sampling techniques (P < .001). The sterile paper point technique showed no difference between irrigation regimens. However, 17 of the 30 roots in group A and 7 of the 30 roots in group B resulted in growth as detected by pulverization technique. Data showed a significant difference between irrigation regimens (P = .03) in pulverization technique. The pulverization technique was more efficacious in detecting viable bacteria. Furthermore, this technique showed that 1.3% NaOCl/MTAD regimen was more effective in disinfecting root canals. Published by Elsevier Inc.
Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao
2018-01-01
This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect the underlying damage of a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover the information of the microstructures based on the measurement data related to the internal situation of the STS structure. A numerical concrete tube model, with various levels of damage, was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were inspected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show the rapid UCT technique has the capability of damage detection in an STS structure with a high level of accuracy and with fewer required measurements, which is more convenient and efficient than the traditional UCT technique. PMID:29293593
Types of Bullying in the Senior High Schools in Ghana
ERIC Educational Resources Information Center
Antiri, Kwasi Otopa
2016-01-01
The main objective of the study was to examine the types of bullying that were taking place in the senior high schools in Ghana. A multi-stage sampling procedure, comprising purposive, simple random and snowball sampling techniques, was used in the selection of the sample. A total of 354 respondents were drawn from six schools in Ashanti, Central and…
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing applications, owing to its useful statistical properties in distribution and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
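The priority idea can be shown in a simplified, serial form: every candidate receives a random, unique priority, and candidates are accepted in priority order only if no previously accepted sample lies within the disk radius. The sketch below is a Euclidean toy version in the unit square; it does not reproduce the paper's parallel, intrinsic surface algorithm.

```python
# Simplified serial sketch of priority-ordered dart throwing for Poisson disk
# sampling in the unit square (radius r); illustrates only the conflict rule.
import numpy as np

rng = np.random.default_rng(4)
r, n_candidates = 0.05, 5000

candidates = rng.random((n_candidates, 2))        # candidate points
priority = rng.permutation(n_candidates)          # random, unique priorities

accepted = []
for idx in np.argsort(priority):                  # process candidates in priority order
    p = candidates[idx]
    if all(np.linalg.norm(p - q) >= r for q in accepted):
        accepted.append(p)

print(len(accepted), "Poisson disk samples")
```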
Sampling designs for HIV molecular epidemiology with application to Honduras.
Shepherd, Bryan E; Rossini, Anthony J; Soto, Ramon Jeremias; De Rivera, Ivette Lorenzana; Mullins, James I
2005-11-01
Proper sampling is essential to characterize the molecular epidemiology of human immunodeficiency virus (HIV). HIV sampling frames are difficult to identify, so most studies use convenience samples. We discuss statistically valid and feasible sampling techniques that overcome some of the potential for bias due to convenience sampling and ensure better representation of the study population. We employ a sampling design called stratified cluster sampling. This first divides the population into geographical and/or social strata. Within each stratum, a population of clusters is chosen from groups, locations, or facilities where HIV-positive individuals might be found. Some clusters are randomly selected within strata and individuals are randomly selected within clusters. Variation and cost help determine the number of clusters and the number of individuals within clusters that are to be sampled. We illustrate the approach through a study designed to survey the heterogeneity of subtype B strains in Honduras.
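The two-stage design described above can be sketched directly: strata are fixed, a simple random sample of clusters is drawn within each stratum, and individuals are then drawn at random within each selected cluster. The stratum/cluster frame and sample sizes below are hypothetical.

```python
# Hedged sketch of stratified cluster sampling: random clusters within strata,
# then random individuals within the selected clusters.
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical frame: stratum -> {cluster_id: list of individual ids}
frame = {
    "urban": {c: list(range(c * 100, c * 100 + 40)) for c in range(10)},
    "rural": {c: list(range(1000 + c * 100, 1000 + c * 100 + 25)) for c in range(6)},
}

clusters_per_stratum = 3
individuals_per_cluster = 10

sample = []
for stratum, clusters in frame.items():
    chosen_clusters = rng.choice(list(clusters), size=clusters_per_stratum, replace=False)
    for c in chosen_clusters:
        members = clusters[c]
        k = min(individuals_per_cluster, len(members))
        sample.extend(rng.choice(members, size=k, replace=False))

print(len(sample), "individuals sampled")
```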
An Assessment on Awareness and Acceptability of Child Adoption in Edo State
ERIC Educational Resources Information Center
Aluyor, P.; Salami, L. I.
2017-01-01
The study examines the awareness and acceptability of child adoption in Edo State. The design used for the study was survey design. The population for the study is made up of adults male and female in Esan West Local Government Area. One hundred respondents were randomly selected using random sampling techniques. The validity was ascertained by…
Multilattice sampling strategies for region of interest dynamic MRI.
Rilling, Gabriel; Tao, Yuehui; Marshall, Ian; Davies, Mike E
2013-08-01
A multilattice sampling approach is proposed for dynamic MRI with Cartesian trajectories. It relies on the use of sampling patterns composed of several different lattices and exploits an image model where only some parts of the image are dynamic, whereas the rest is assumed static. Given the parameters of such an image model, the methodology followed for the design of a multilattice sampling pattern adapted to the model is described. The multi-lattice approach is compared to single-lattice sampling, as used by traditional acceleration methods such as UNFOLD (UNaliasing by Fourier-Encoding the Overlaps using the temporal Dimension) or k-t BLAST, and random sampling used by modern compressed sensing-based methods. On the considered image model, it allows more flexibility and higher accelerations than lattice sampling and better performance than random sampling. The method is illustrated on a phase-contrast carotid blood velocity mapping MR experiment. Combining the multilattice approach with the KEYHOLE technique allows up to 12× acceleration factors. Simulation and in vivo undersampling results validate the method. Compared to lattice and random sampling, multilattice sampling provides significant gains at high acceleration factors. © 2012 Wiley Periodicals, Inc.
Edward, Joseph; Aziz, Mubarak A; Madhu Usha, Arjun; Narayanan, Jyothi K
2017-12-01
Extractions are routine procedures in dental surgery. Traditional extraction techniques use a combination of severing the periodontal attachment, luxation with an elevator, and removal with forceps. A new technique for extraction of the maxillary third molar, the Joedds technique, is introduced in this study and compared with the conventional technique. One hundred people were included in the study and divided into two groups by means of simple random sampling. In one group the conventional technique of maxillary third molar extraction was used, and in the second the Joedds technique was used. Statistical analysis was carried out with Student's t test. Analysis of the 100 patients showed that the novel Joedds technique caused minimal trauma to surrounding tissues, fewer tuberosity and root fractures, and an extraction time of less than 2 minutes, compared with the other group of patients. This novel technique proved better than the conventional third molar extraction technique, with minimal complications, provided proper case selection and the right technique are used.
An Analysis of Job Satisfaction Among Public, College or University, and Special Librarians.
ERIC Educational Resources Information Center
Miniter, John J.
Usable data relating to six elements of job satisfaction: work, supervision, people, pay, promotion, and total satisfaction, were collected from 190 of a total sample of 310 librarians, chosen by stratified random sampling techniques from library association membership lists. The librarians, both male and female, represented three types of…
Examining Work and Family Conflict among Female Bankers in Accra Metropolis, Ghana
ERIC Educational Resources Information Center
Kissi-Abrokwah, Bernard; Andoh-Robertson, Theophilus; Tutu-Danquah, Cecilia; Agbesi, Catherine Selorm
2015-01-01
This study investigated the effects of and solutions to work and family conflict among female bankers in Accra Metropolis. Using a triangulatory mixed method design, a structured questionnaire was randomly administered to 300 female bankers, and 15 female bankers who were interviewed were sampled using a convenience sampling technique. The…
A Study of Occupational Stress and Organizational Climate of Higher Secondary Teachers
ERIC Educational Resources Information Center
Benedicta, A. Sneha
2014-01-01
This study mainly aims to describe the occupational stress and organizational climate of higher secondary teachers with regard to gender, locality, family type, experience and type of management. Simple random sampling technique was adopted for the selection of sample. The data is collected from 200 higher secondary teachers from government and…
Factors Influencing Mathematic Problem-Solving Ability of Sixth Grade Students
ERIC Educational Resources Information Center
Pimta, Sakorn; Tayraukham, Sombat; Nuangchalerm, Prasart
2009-01-01
Problem statement: This study aims to investigate factors influencing the mathematic problem-solving ability of sixth grade students. One thousand and twenty-eight sixth grade students, studying in the second semester of the 2007 academic year, were sampled by stratified random sampling technique. Approach: The research instruments used in the study…
Development and Validation of Academic Dishonesty Scale (ADS): Presenting a Multidimensional Scale
ERIC Educational Resources Information Center
Bashir, Hilal; Bala, Ranjan
2018-01-01
The purpose of the study was to develop a scale measuring academic dishonesty of undergraduate students. The sample of the study constituted nine hundred undergraduate students selected via a random sampling technique. After receiving experts' opinions for the face and content validity of the scale, the exploratory factor analysis (EFA) and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudson, W.G.
Scapteriscus vicinus is the most important pest of turf and pasture grasses in Florida. This study develops a method of correlating sample results with true population density and provides the first quantitative information on spatial distribution and movement patterns of mole crickets. Three basic techniques for sampling mole crickets were compared: soil flushes, soil corer, and pitfall trapping. No statistical difference was found between the soil corer and soil flushing. Soil flushing was shown to be more sensitive to changes in population density than pitfall trapping. No technique was effective for sampling adults. Regression analysis provided a means of adjusting for the effects of soil moisture and showed soil temperature to be unimportant in predicting efficiency of flush sampling. Cesium-137 was used to label females for subsequent location underground. Comparison of mean distance to nearest neighbor with the distance predicted by a random distribution model showed that the observed distance in the spring was significantly greater than hypothesized (Student's T-test, p < 0.05). Fall adult nearest neighbor distance was not different than predicted by the random distribution hypothesis.
Webb, Christian A.; DeRubeis, Robert J.; Dimidjian, Sona; Hollon, Steven D.; Amsterdam, Jay D.; Shelton, Richard C.
2014-01-01
Objective: Previous research has found that therapist adherence to concrete, problem-focused cognitive therapy (CT) techniques predicts depressive symptom change (e.g., Feeley, DeRubeis, & Gelfand, 1999). More recently, Strunk, DeRubeis, Chui, and Alvarez (2007) demonstrated that in-session evidence of patients' use of CT skills was related to a lower rate of relapse in the year following CT for depression. The current investigation attempts to integrate and extend these findings within 2 separate samples of patients and therapists. Method: Drawing from the CT samples (N = 105, mean age = 40 years, female = 62%, White = 82%) of 2 published randomized clinical trials of depression treatment, we conducted analyses to examine whether therapist adherence to concrete CT techniques (Collaborative Study Psychotherapy Rating Scale) and the quality of the therapeutic alliance (Working Alliance Inventory) predict patients' use of CT skills (Performance of Cognitive Therapy Strategies) and subsequent Beck Depression Inventory symptom change. Results: Results indicated a differential pattern of prediction in the 2 samples. In one, CT techniques exhibited a stronger association with patient CT skills and symptom change than did the alliance, whereas the reverse pattern emerged in the second sample. A baseline symptom severity × CT techniques interaction indicated that between-study differences in intake depression severity might in part explain the process–outcome differences. Conclusions: The present findings suggest that the nature of the therapy sample examined may moderate process–outcome findings in psychotherapy research. The implications of these results and directions for future research are discussed. PMID:22468907
Characterization of Friction Joints Subjected to High Levels of Random Vibration
NASA Technical Reports Server (NTRS)
deSantos, Omar; MacNeal, Paul
2012-01-01
This paper describes the test program in detail including test sample description, test procedures, and vibration test results of multiple test samples. The material pairs used in the experiment were Aluminum-Aluminum, Aluminum- Dicronite coated Aluminum, and Aluminum-Plasmadize coated Aluminum. Levels of vibration for each set of twelve samples of each material pairing were gradually increased until all samples experienced substantial displacement. Data was collected on 1) acceleration in all three axes, 2) relative static displacement between vibration runs utilizing photogrammetry techniques, and 3) surface galling and contaminant generation. This data was used to estimate the values of static friction during random vibratory motion when "stick-slip" occurs and compare these to static friction coefficients measured before and after vibration testing.
Kim, Ho-Joong; Kang, Kyoung-Tak; Park, Sung-Cheol; Kwon, Oh-Hyo; Son, Juhyun; Chang, Bong-Soon; Lee, Choon-Ki; Yeom, Jin S; Lenke, Lawrence G
2017-05-01
There have been conflicting results on the surgical outcome of lumbar fusion surgery using two different techniques: robot-assisted pedicle screw fixation and conventional freehand technique. In addition, there have been no studies about the biomechanical issues between both techniques. This study aimed to investigate the biomechanical properties in terms of stress at adjacent segments using robot-assisted pedicle screw insertion technique (robot-assisted, minimally invasive posterior lumbar interbody fusion, Rom-PLIF) and freehand technique (conventional, freehand, open approach, posterior lumbar interbody fusion, Cop-PLIF) for instrumented lumbar fusion surgery. This is an additional post-hoc analysis for patient-specific finite element (FE) model. The sample is composed of patients with degenerative lumbar disease. Intradiscal pressure and facet contact force are the outcome measures. Patients were randomly assigned to undergo an instrumented PLIF procedure using a Rom-PLIF (37 patients) or a Cop-PLIF (41), respectively. Five patients in each group were selected using a simple random sampling method after operation, and 10 preoperative and postoperative lumbar spines were modeled from preoperative high-resolution computed tomography of 10 patients using the same method for a validated lumbar spine model. Under four pure moments of 7.5 Nm, the changes in intradiscal pressure and facet joint contact force at the proximal adjacent segment following fusion surgery were analyzed and compared with preoperative states. The representativeness of random samples was verified. Both groups showed significant increases in postoperative intradiscal pressure at the proximal adjacent segment under four moments, compared with the preoperative state. The Cop-PLIF models demonstrated significantly higher percent increments of intradiscal pressure at proximal adjacent segments under extension, lateral bending, and torsion moments than the Rom-PLIF models (p=.032, p=.008, and p=.016, respectively). Furthermore, the percent increment of facet contact force was significantly higher in the Cop-PLIF models under extension and torsion moments than in the Rom-PLIF models (p=.016 under both extension and torsion moments). The present study showed the clinical application of subject-specific FE analysis in the spine. Even though there was biomechanical superiority of the robot-assisted insertions in terms of alleviation of stress increments at adjacent segments after fusion, cautious interpretation is needed because of the small sample size. Copyright © 2016 Elsevier Inc. All rights reserved.
Impact of Oriented Clay Particles on X-Ray Spectroscopy Analysis
NASA Astrophysics Data System (ADS)
Lim, A. J. M. S.; Syazwani, R. N.; Wijeyesekera, D. C.
2016-07-01
Understanding the engineering properties of the mineralogy and microfabric of clayey soils is very complex and thus makes soil characterization very difficult. Micromechanics of soils recognizes that the microstructure and mineralogy of clay have a significant influence on its engineering behaviour. To achieve a more reliable quantitative evaluation of clay mineralogy, a proper sample preparation technique for quantitative clay mineral analysis is necessary. This paper presents the quantitative evaluation of elemental analysis and chemical characterization of oriented and randomly oriented clay particles using X-ray spectroscopy. Three different types of clays, namely marine clay, bentonite and kaolin clay, were studied. The oriented samples were prepared by placing the dispersed clay in water and leaving it to settle on porous ceramic tiles by applying a relatively weak suction through a vacuum pump. Images from a Scanning Electron Microscope (SEM) were also used to show the comparison between the orientation patterns of both sample preparation techniques. From the quantitative analysis of the X-ray spectroscopy, the oriented sampling method showed more accuracy in identifying mineral deposits, because it produced better peak intensity on the spectrum and more mineral content could be identified compared to randomly oriented samples.
Carlos Alberto Silva; Carine Klauberg; Andrew Thomas Hudak; Lee Alexander Vierling; Wan Shafrina Wan Mohd Jaafar; Midhun Mohan; Mariano Garcia; Antonio Ferraz; Adrian Cardil; Sassan Saatchi
2017-01-01
Improvements in the management of pine plantations result in multiple industrial and environmental benefits. Remote sensing techniques can dramatically increase the efficiency of plantation management by reducing or replacing time-consuming field sampling. We tested the utility and accuracy of combining field and airborne lidar data with Random Forest, a supervised...
Computer methods for sampling from the gamma distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, M.E.; Tadikamalla, P.R.
1978-01-01
Considerable attention has recently been directed at developing ever faster algorithms for generating gamma random variates on digital computers. This paper surveys the current state of the art including the leading algorithms of Ahrens and Dieter, Atkinson, Cheng, Fishman, Marsaglia, Tadikamalla, and Wallace. General random variate generation techniques are explained with reference to these gamma algorithms. Computer simulation experiments on IBM and CDC computers are reported.
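As an example of the rejection-based approach surveyed above, the sketch below implements a gamma variate generator following the later Marsaglia–Tsang (2000) construction for shape alpha ≥ 1; it is not one of the specific algorithms compared in the paper, only an illustration of acceptance-rejection generation.

```python
# Illustrative acceptance-rejection generator for Gamma(alpha, 1), alpha >= 1,
# following Marsaglia-Tsang (2000); shown only as an example of the approach.
import numpy as np

rng = np.random.default_rng(6)

def gamma_mt(alpha):
    d = alpha - 1.0 / 3.0
    c = 1.0 / np.sqrt(9.0 * d)
    while True:
        x = rng.normal()
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue                       # reject: proposal outside support
        u = rng.random()
        if np.log(u) < 0.5 * x * x + d - d * v + d * np.log(v):
            return d * v                   # accept

draws = np.array([gamma_mt(2.5) for _ in range(10_000)])
print(draws.mean())   # should be close to alpha = 2.5
```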
NASA Technical Reports Server (NTRS)
Tolson, R. H.
1981-01-01
A technique is described for providing a means of evaluating the influence of spatial sampling on the determination of global mean total columnar ozone. A finite number of coefficients in the expansion are determined, and the truncated part of the expansion is shown to contribute an error to the estimate, which depends strongly on the spatial sampling and is relatively insensitive to data noise. First and second order statistics are derived for each term in a spherical harmonic expansion which represents the ozone field, and the statistics are used to estimate systematic and random errors in the estimates of total ozone.
ERIC Educational Resources Information Center
Sekar, J. Master Arul; Lawrence, A.S. Arul
2016-01-01
The present study aims to investigate whether there is any significant relationship between adjustment and academic achievement of higher secondary school students. In this survey study, the investigators used stratified random sampling technique for selecting the sample from the population. The stratification was done on the basis of gender and…
Jackknifing Techniques for Evaluation of Equating Accuracy. Research Report. ETS RR-09-39
ERIC Educational Resources Information Center
Haberman, Shelby J.; Lee, Yi-Hsuan; Qian, Jiahe
2009-01-01
Grouped jackknifing may be used to evaluate the stability of equating procedures with respect to sampling error and with respect to changes in anchor selection. Properties of grouped jackknifing are reviewed for simple-random and stratified sampling, and its use is described for comparisons of anchor sets. Application is made to examples of item…
Strategies for Coping with the Challenges of Incarceration among Nigerian Prison Inmates
ERIC Educational Resources Information Center
Agbakwuru, Chikwe; Awujo, Grace C.
2016-01-01
This paper investigated the strategies for coping with the challenges of incarceration among inmates of Port Harcourt Prison, Nigeria. The population was 2,997 inmates of the prison while the sample was 250 inmates drawn through stratified random sampling technique from the same Port Harcourt prison. Six research questions were posed and data for…
Implementation of Structured Inquiry Based Model Learning toward Students' Understanding of Geometry
ERIC Educational Resources Information Center
Salim, Kalbin; Tiawa, Dayang Hjh
2015-01-01
The purpose of this study is the implementation of a structured inquiry learning model in the instruction of geometry. The study used a quasi-experimental design with two classes of samples selected from a population of ten classes using a cluster random sampling technique. The data collection tool consists of a test item…
ERIC Educational Resources Information Center
Adegoke, Sunday Paul; Osokoya, Modupe M.
2015-01-01
This study investigated access to the internet and socio-economic background as correlates of students' achievement in Agricultural Science among selected Senior Secondary School Two students in Ogbomoso South and North Local Government Areas. The study adopted a multi-stage sampling technique. Simple random sampling was used to select 30 students from…
ERIC Educational Resources Information Center
Kariuki, Patrick; Paulson, Ronda
The purpose of this study was to examine the effectiveness of computer-animated dissection techniques versus the effectiveness of traditional dissection techniques as related to student achievement. The sample used was 104 general biology students from a small, rural high school in Northeast Tennessee. Random selection was used to separate the…
Technical note: Alternatives to reduce adipose tissue sampling bias.
Cruz, G D; Wang, Y; Fadel, J G
2014-10-01
Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects in the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle has been addressed in previous studies, but no attempt to critically investigate these issues has been proposed in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples from 1 to 15 needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied on adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and size and number of adipocytes were determined by a Coulter Counter. These results were then fit in a finite mixture model to obtain distribution parameters of each sample. To evaluate the benefits of increasing number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement on the estimation of the overall adipocyte cellularity parameters was observed using both sampling techniques when sample size number increased from 1 to 15 samples, considering both techniques' acceptance ratio increased from approximately 3 to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameters estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, W. Howard
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling, and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of approximately 3.
Systematic random sampling of the comet assay.
McArt, Darragh G; Wasson, Gillian R; McKerr, George; Saetzler, Kurt; Reed, Matt; Howard, C Vyvyan
2009-07-01
The comet assay is a technique used to quantify DNA damage and repair at a cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away leaving only the DNA trapped in an agarose cavity which can then be electrophoresed. The damaged DNA can enter the agarose and migrate while the undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory 'tail' DNA compared to the total DNA in the cell. The fundamental basis of these arbitrary values is obtained in the comet acquisition phase using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. Current methods deployed in such an acquisition are expected to be both objectively and randomly obtained. In this paper we examine the 'randomness' of the acquisition phase and suggest an alternative method that offers both objective and unbiased comet selection. In order to achieve this, we have adopted a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable as it offers an impartial and reproducible method of comet analysis that can be used both manually or automated. By making use of an unbiased sampling frame and using microscope verniers, we are able to increase the precision of estimates of DNA damage. Results obtained from a multiple-user pooled variation experiment showed that the SRS technique attained a lower variability than that of the traditional approach. The analysis of a single user with repetition experiment showed greater individual variances while not being detrimental to overall averages. This would suggest that the SRS method offers a better reflection of DNA damage for a given slide and also offers better user reproducibility.
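The SRS scheme described above amounts to choosing a sampling interval, picking the first position uniformly at random within that interval, and then stepping through the remaining positions at the fixed spacing. The sketch below is a generic illustration of that rule over a hypothetical list of microscope fields; it does not model the comet-assay specifics.

```python
# Generic sketch of systematic random sampling (SRS): fixed step, random start.
import numpy as np

rng = np.random.default_rng(7)

n_fields = 200                    # e.g., fields of view available on a slide
sample_fraction = 0.1
step = int(1 / sample_fraction)   # sample every 10th field

start = rng.integers(step)        # random start in [0, step)
selected = np.arange(start, n_fields, step)
print(selected[:5], "...", len(selected), "fields selected")
```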
ERIC Educational Resources Information Center
Suleman, Qaiser; Gul, Rizwana
2015-01-01
The current study explores the challenges faced by public secondary schools in successful implementation of total quality management (TQM) in Kohat District. A sample of 25 heads and 75 secondary school teachers selected from 25 public secondary schools through a simple random sampling technique was used. A descriptive research design was used and a…
The Effective Management of Primary Schools in Ekiti State, Nigeria: An Analytical Assessment
ERIC Educational Resources Information Center
Adeyemi, T. O.
2009-01-01
This study investigated the management of education in primary schools in Ekiti State, Nigeria. As a correlational research, the study population comprised all the 694 primary schools in the State. Out of this, a sample of 320 schools was selected through the stratified random sampling technique. Two instruments were used to collect data for the…
ERIC Educational Resources Information Center
Naji Qasem, Mamun Ali; Ahmad Gul, Showkeen Bilal
2014-01-01
The study was conducted to determine the effect of item direction (positive or negative) on the factorial construction and criterion-related validity of a Likert scale. The descriptive survey research method was used for the study, and the sample consisted of 510 undergraduate students selected by a random sampling technique. A scale developed by…
ERIC Educational Resources Information Center
Noor ul Amin, Syed
2017-01-01
The purpose of the present study was to compare the Internet-user and Internet Non-user post-graduate students on their attitude towards research. The sample comprised 600 post graduate students (300 Internet-users and 300 Internet-Non-users) drawn from different faculties of University of Kashmir (J&K), India. Random sampling technique was…
ERIC Educational Resources Information Center
Bowen, Michelle; Laurion, Suzanne
A study documented, using a telephone survey, the incidence rates of sexual harassment of mass communication interns, and compared those rates to student and professional rates. A probability sample of 44 male and 52 female mass communications professionals was generated using several random sampling techniques from among professionals who work in…
ERIC Educational Resources Information Center
Ibrahim, Hatim G.
2017-01-01
The current study aims to identify the utilization of innovations and techniques of educational technology in teaching of educational practicum and its impact on increasing academic achievement among pre-service teachers. The study sample consisted of (60) pre-service teachers (student teachers) randomly selected from public middle and secondary…
NASA Technical Reports Server (NTRS)
Chapman, G. M. (Principal Investigator); Carnes, J. G.
1981-01-01
Several techniques which use clusters generated by a new clustering algorithm, CLASSY, are proposed as alternatives to random sampling to obtain greater precision in crop proportion estimation: (1) Proportional Allocation/Relative Count Estimator (PA/RCE) uses proportional allocation of dots to clusters on the basis of cluster size and a relative count cluster-level estimate; (2) Proportional Allocation/Bayes Estimator (PA/BE) uses proportional allocation of dots to clusters and a Bayesian cluster-level estimate; and (3) Bayes Sequential Allocation/Bayesian Estimator (BSA/BE) uses sequential allocation of dots to clusters and a Bayesian cluster-level estimate. Clustering is an effective method for making proportion estimates. It is estimated that, to obtain the same precision with random sampling as obtained by the proportional sampling of 50 dots with an unbiased estimator, samples of 85 or 166 would need to be taken if dot sets with AI labels (integrated procedure) or ground truth labels, respectively, were input. Dot reallocation provides dot sets that are unbiased. It is recommended that these proportion estimation techniques be maintained, particularly the PA/BE because it provides the greatest precision.
NASA Astrophysics Data System (ADS)
Chin, Fun-Tat; Lin, Yu-Hsien; Yang, Wen-Luh; Liao, Chin-Hsuan; Lin, Li-Min; Hsiao, Yu-Ping; Chao, Tien-Sheng
2015-01-01
A limited copper (Cu)-source Cu:SiO2 switching layer composed of various Cu concentrations was fabricated using a chemical soaking (CS) technique. The switching layer was then studied for developing applications in resistive random access memory (ReRAM) devices. Observing the resistive switching mechanism exhibited by all the samples suggested that Cu conductive filaments formed and ruptured during the set/reset process. The experimental results indicated that the endurance property failure that occurred was related to the joule heating effect. Moreover, the endurance switching cycle increased as the Cu concentration decreased. In high-temperature tests, the samples demonstrated that the operating (set/reset) voltages decreased as the temperature increased, and an Arrhenius plot was used to calculate the activation energy of the set/reset process. In addition, the samples demonstrated stable data retention properties when baked at 85 °C, but the samples with low Cu concentrations exhibited short retention times in the low-resistance state (LRS) during 125 °C tests. Therefore, Cu concentration is a crucial factor in the trade-off between the endurance and retention properties; furthermore, the Cu concentration can be easily modulated using this CS technique.
Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2006-01-01
The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
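The parallel tempering idea mentioned above can be sketched for a one-dimensional double-well density: each replica runs random-walk Metropolis at its own temperature, and neighboring replicas occasionally attempt a state swap accepted with the Metropolis criterion. Temperatures, step sizes, and swap frequency below are arbitrary illustrative choices.

```python
# Minimal parallel tempering sketch for a 1D double-well target
# p(x) ∝ exp(-(x^2 - 1)^2 / T); the cold chain (T = 1) should visit both modes.
import numpy as np

rng = np.random.default_rng(8)

def energy(x):                         # "energy" = negative log of the T=1 target
    return (x**2 - 1.0) ** 2

temps = np.array([1.0, 2.0, 4.0, 8.0])
x = np.zeros(len(temps))
cold_samples = []

for it in range(20_000):
    # Within-temperature random-walk Metropolis updates
    for k, T in enumerate(temps):
        prop = x[k] + rng.normal(scale=0.5)
        if rng.random() < np.exp((energy(x[k]) - energy(prop)) / T):
            x[k] = prop
    # Occasional swap attempt between a random pair of neighboring temperatures
    if it % 10 == 0:
        k = rng.integers(len(temps) - 1)
        delta = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (energy(x[k]) - energy(x[k + 1]))
        if rng.random() < np.exp(delta):
            x[k], x[k + 1] = x[k + 1], x[k]
    cold_samples.append(x[0])

print(np.mean(np.array(cold_samples) > 0))   # roughly 0.5 when both modes are visited
```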
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification, and in particular focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. Regarding summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are experimented with, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
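The three pooling options compared above reduce a matrix of frame-wise feature activations to a single vector per recording. A minimal sketch, with a hypothetical activation matrix standing in for the learned sparse features:

```python
# Sketch of max-, average-, and standard-deviation pooling over frame-wise
# feature activations (frames x features); each yields one vector per clip.
import numpy as np

rng = np.random.default_rng(9)
activations = rng.random((400, 128))   # hypothetical: 400 frames, 128 learned features

max_pool = activations.max(axis=0)
avg_pool = activations.mean(axis=0)
std_pool = activations.std(axis=0)     # the standard deviation pooling studied here

clip_feature = np.concatenate([avg_pool, std_pool])   # one possible aggregation
print(max_pool.shape, avg_pool.shape, std_pool.shape, clip_feature.shape)
```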
Diffusing Wave Spectroscopy Used to Study Foams
NASA Technical Reports Server (NTRS)
Zimmerli, Gregory A.; Durian, Douglas J.
2000-01-01
The white appearance of familiar objects such as clouds, snow, milk, or foam is due to the random scattering of light by the sample. As we all know, pure water is clear and easily passes a beam of light. However, tiny water droplets, such as those in a cloud, scatter light because the air and water droplet have different indexes of refraction. When many droplets, or scattering sites, are present, the incident light is scattered in random directions and the sample takes on a milky white appearance. In a glass of milk, the scattering is due to small colloidal particles. The white appearance of shaving cream, or foam, is due to the scattering of light at the water-bubble interface. Diffusing wave spectroscopy (DWS) is a laser light-scattering technique used to noninvasively probe the particle dynamics in systems that strongly scatter light. The technique takes advantage of the diffuse nature of light, which is reflected or transmitted from samples such as foams, dense colloidal suspensions (such as paint and milk), emulsions, liquid crystals, sandpiles, and even biological tissues.
Improving semi-automated segmentation by integrating learning with active sampling
NASA Astrophysics Data System (ADS)
Huo, Jing; Okada, Kazunori; Brown, Matthew
2012-02-01
Interactive segmentation algorithms such as GrowCut usually require quite a few user interactions to perform well, and have poor repeatability. In this study, we developed a novel technique to boost the performance of the interactive segmentation method GrowCut involving: 1) a novel "focused sampling" approach for supervised learning, as opposed to conventional random sampling; 2) boosting GrowCut using the machine learned results. We applied the proposed technique to the glioblastoma multiforme (GBM) brain tumor segmentation, and evaluated on a dataset of ten cases from a multiple center pharmaceutical drug trial. The results showed that the proposed system has the potential to reduce user interaction while maintaining similar segmentation accuracy.
ERIC Educational Resources Information Center
Sawangsamutchai, Yutthasak; Rattanavich, Saowalak
2016-01-01
The objective of this research is to compare the English reading comprehension and motivation to read of seventh grade Thai students taught with applied instruction through the genre-based approach and teachers' manual. A randomized pre-test post-test control group design was used through the cluster random sampling technique. The data were…
Ear Acupuncture for Acute Sore Throat: A Randomized Controlled Trial
2014-09-26
Final report, September 2014. Auricular acupuncture is a low-risk option for acute pain control. Battlefield acupuncture (BFA) is a specific auricular acupuncture technique. Study strength: prospective randomized controlled trial. Weaknesses: small sample size, no sham acupuncture performed, and patients not blinded to treatment.
Binomial leap methods for simulating stochastic chemical kinetics.
Tian, Tianhai; Burrage, Kevin
2004-12-01
This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsize is used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the numbers of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement on efficiency over existing approaches. (c) 2004 American Institute of Physics.
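The key property of the binomial leap, that firing counts are bounded by the available molecule numbers, can be shown for the simplest case of a single decay channel. The sketch below is only in the spirit of the method: it uses the first-order probability p = min(1, c·tau) for a reaction A → ∅ and does not reproduce the paper's handling of multiple coupled channels or its stepsize conditions.

```python
# Minimal binomial leap sketch for a single decay channel A -> 0 with rate c:
# the number of firings per leap is Binomial(n_A, p), so it can never exceed n_A
# and molecular numbers cannot go negative.
import numpy as np

rng = np.random.default_rng(10)

c, tau = 0.1, 0.5
n_a, t, t_end = 1000, 0.0, 30.0

trajectory = [(t, n_a)]
while t < t_end and n_a > 0:
    p = min(1.0, c * tau)        # per-molecule firing probability in this leap
    k = rng.binomial(n_a, p)     # bounded by the current population
    n_a -= k
    t += tau
    trajectory.append((t, n_a))

print(trajectory[-1])            # roughly exponential decay of A
```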
ESDA®-Lite collection of DNA from latent fingerprints on documents.
Plaza, Dane T; Mealy, Jamia L; Lane, J Nicholas; Parsons, M Neal; Bathrick, Abigail S; Slack, Donia P
2015-05-01
The ability to detect and non-destructively collect biological samples for DNA processing would benefit the forensic community by preserving the physical integrity of evidentiary items for more thorough evaluations by other forensic disciplines. The Electrostatic Detection Apparatus (ESDA®) was systematically evaluated for its ability to non-destructively collect DNA from latent fingerprints deposited on various paper substrates for short tandem repeat (STR) DNA profiling. Fingerprints were deposited on a variety of paper substrates that included resume paper, cotton paper, magazine paper, currency, copy paper, and newspaper. Three DNA collection techniques were performed: ESDA collection, dry swabbing, and substrate cutting. Efficacy of each collection technique was evaluated by the quantity of DNA present in each sample and the percent profile generated by each sample. Both the ESDA and dry swabbing non-destructive sampling techniques outperformed the destructive methodology of substrate cutting. A greater number of full profiles were generated from samples collected with the non-destructive dry swabbing collection technique than were generated from samples collected with the ESDA; however, the ESDA also allowed the user to visualize the area of interest while non-destructively collecting the biological material. The ability to visualize the biological material made sampling straightforward and eliminated the need for numerous, random swabbings/cuttings. Based on these results, the evaluated non-destructive ESDA collection technique has great potential for real-world forensic implementation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!
Vetter, Thomas R; Mascha, Edward J
2017-09-01
Epidemiologists seek to make a valid inference about the causal effect between an exposure and a disease in a specific population, using representative sample data from a specific population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data about the sample variable to make a valid estimate about its corresponding or underlying, but unknown population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due to not only innate human variability but also purely chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument. In the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable is a factor associated with both the exposure of interest and the outcome of interest. A confounding variable (confounding factor or confounder) is a variable that correlates (positively or negatively) with both the exposure and outcome. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and nonobserved. However, confounding can be a major problem with any observational (nonrandomized) study. Ignoring confounding in an observational study will often result in a "distorted" or incorrect estimate of the association or treatment effect. Interaction among variables, also known as effect modification, exists when the effect of 1 explanatory variable on the outcome depends on the particular level or value of another explanatory variable. Bias and confounding are common potential explanations for statistically significant associations between exposure and outcome when the true relationship is noncausal. Understanding interactions is vital to proper interpretation of treatment effects. These complex concepts should be consistently and appropriately considered whenever one is not only designing but also analyzing and interpreting data from a randomized trial or observational study.
Wampler, Peter J; Rediske, Richard R; Molla, Azizur R
2013-01-18
A remote sensing technique was developed which combines a Geographic Information System (GIS); Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground based surveys to collect household location data and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only rarely was local knowledge required to identify and locate households. This method provides an important technique that can be applied to other developing countries where a randomized study design is needed but infrastructure is lacking to implement more traditional participant selection methods.
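The authors performed the randomization in Microsoft Excel; the snippet below is a minimal Python equivalent of that single step (drawing a randomized subset of mapped homes and writing it out as GPS waypoints). The file and column names are assumptions for illustration.

```python
import csv
import random

# Assumed input: a CSV exported from Google Earth / ArcMap with one row per
# mapped home (columns "home_id", "lat", "lon" are illustrative).
with open("mapped_homes.csv", newline="") as f:
    homes = list(csv.DictReader(f))

random.seed(2012)
survey_homes = random.sample(homes, k=96)   # randomized subset for field visits

# Write the subset back out as waypoints to load onto handheld GPS units.
with open("survey_waypoints.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["home_id", "lat", "lon"])
    writer.writeheader()
    for h in survey_homes:
        writer.writerow({k: h[k] for k in ("home_id", "lat", "lon")})
```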
2012-03-01
with each SVM discriminating between a pair of the N total speakers in the data set. The N(N − 1)/2 classifiers then vote on the final … classification of a test sample. The Random Forest classifier is an ensemble classifier that votes amongst decision trees generated with each node using … Forest vote, and the effects of overtraining will be mitigated by the fact that each decision tree is overtrained differently (due to the random
A partially reflecting random walk on spheres algorithm for electrical impedance tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de
2015-12-15
In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
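For readers unfamiliar with the underlying estimator, here is a minimal sketch of the plain (fully absorbing) walk-on-spheres method for Laplace's equation in the unit disk, which the partially reflecting estimator described above extends; the tolerance and boundary data are illustrative, and the paper's interface and mixed-boundary handling is not reproduced.

```python
import numpy as np

def walk_on_spheres(x0, boundary_value, eps=1e-3, rng=None):
    """Estimate u(x0) for Laplace's equation in the unit disk with
    Dirichlet data u = boundary_value on the circle, by one random walk.

    At each step the walker jumps to a uniformly random point on the
    largest circle centred at the current point that stays inside the
    domain; it stops when within eps of the boundary.
    """
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    while True:
        r = 1.0 - np.linalg.norm(x)        # distance to the unit circle
        if r < eps:
            return boundary_value(x / np.linalg.norm(x))
        theta = rng.uniform(0.0, 2.0 * np.pi)
        x = x + r * np.array([np.cos(theta), np.sin(theta)])

g = lambda p: p[0] ** 2 - p[1] ** 2        # harmonic boundary data, so u = g inside
estimates = [walk_on_spheres([0.3, 0.2], g, rng=i) for i in range(2000)]
print(np.mean(estimates))                  # ~ 0.3**2 - 0.2**2 = 0.05
```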
Event-triggered synchronization for reaction-diffusion complex networks via random sampling
NASA Astrophysics Data System (ADS)
Dong, Tao; Wang, Aijuan; Zhu, Huiyun; Liao, Xiaofeng
2018-04-01
In this paper, the synchronization problem of reaction-diffusion complex networks (RDCNs) with Dirichlet boundary conditions is considered, where the data is sampled randomly. An event-triggered controller based on the sampled data is proposed, which can reduce the number of control updates and the communication load. Under this strategy, the synchronization problem of the diffusion complex network is equivalently converted to the stability problem of a reaction-diffusion complex dynamical system with time delay. By using the matrix inequality technique and the Lyapunov method, synchronization conditions for the RDCNs are derived, which are dependent on the diffusion term. Moreover, it is found that the proposed control strategy naturally avoids Zeno behavior. Finally, a numerical example is given to verify the obtained results.
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
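A one-dimensional toy calculation illustrates why closed-form moment propagation can replace scenario sampling: for a single Gaussian pencil beam with a normally distributed setup shift, the expected lateral dose profile is itself Gaussian and can be written down directly. All widths and grids below are illustrative assumptions; APM itself works with the full pencil-beam dose calculation and correlation models.

```python
import numpy as np

sigma_beam  = 5.0    # mm, lateral width of the pencil beam (illustrative)
sigma_setup = 2.0    # mm, standard deviation of the random setup shift
x = np.linspace(-20.0, 20.0, 81)   # evaluation points (mm off-axis)

def dose(x, shift):
    """Lateral dose profile of one Gaussian pencil beam shifted by 'shift'."""
    return np.exp(-(x - shift) ** 2 / (2.0 * sigma_beam ** 2))

# Closed form: the expectation over a Gaussian shift is again a Gaussian
# profile with variance sigma_beam^2 + sigma_setup^2 (Gaussian convolution).
sigma_tot = np.hypot(sigma_beam, sigma_setup)
expected_dose_analytic = (sigma_beam / sigma_tot) * np.exp(
    -x ** 2 / (2.0 * sigma_tot ** 2))

# Reference: brute-force random sampling of setup shifts.
rng = np.random.default_rng(0)
shifts = rng.normal(0.0, sigma_setup, size=5000)
expected_dose_sampled = dose(x[:, None], shifts[None, :]).mean(axis=1)

print(np.max(np.abs(expected_dose_analytic - expected_dose_sampled)))
```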
Incorporating uncertainty and motion in Intensity Modulated Radiation Therapy treatment planning
NASA Astrophysics Data System (ADS)
Martin, Benjamin Charles
In radiation therapy, one seeks to destroy a tumor while minimizing the damage to surrounding healthy tissue. Intensity Modulated Radiation Therapy (IMRT) uses overlapping beams of x-rays that add up to a high dose within the target and a lower dose in the surrounding healthy tissue. IMRT relies on optimization techniques to create high quality treatments. Unfortunately, the possible conformality is limited by the need to ensure coverage even if there is organ movement or deformation. Currently, margins are added around the tumor to ensure coverage based on an assumed motion range. This approach does not ensure high quality treatments. In the standard IMRT optimization problem, an objective function measures the deviation of the dose from the clinical goals. The optimization then finds the beamlet intensities that minimize the objective function. When modeling uncertainty, the dose delivered from a given set of beamlet intensities is a random variable. Thus the objective function is also a random variable. In our stochastic formulation we minimize the expected value of this objective function. We developed a problem formulation that is both flexible and fast enough for use on real clinical cases. While working on accelerating the stochastic optimization, we developed a technique of voxel sampling. Voxel sampling is a randomized algorithms approach to a steepest descent problem based on estimating the gradient by only calculating the dose to a fraction of the voxels within the patient. When combined with an automatic sampling rate adaptation technique, voxel sampling produced an order of magnitude speed up in IMRT optimization. We also develop extensions of our results to Intensity Modulated Proton Therapy (IMPT). Due to the physics of proton beams the stochastic formulation yields visibly different and better plans than normal optimization. The results of our research have been incorporated into a software package OPT4D, which is an IMRT and IMPT optimization tool that we developed.
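A minimal sketch of the voxel-sampling idea, assuming a simple quadratic objective on a dose-influence matrix: the gradient is estimated from a random fraction of the voxels and rescaled so that it remains an unbiased estimate of the full gradient. Matrix sizes, the step size, and the sampling fraction are illustrative, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 5000, 200
D = rng.random((n_voxels, n_beamlets)) * 0.01   # toy dose-influence matrix
d_target = np.ones(n_voxels)                    # prescribed dose per voxel
w = np.ones(n_beamlets)                         # beamlet intensities

def sampled_gradient(w, frac=0.1):
    """Estimate the gradient of f(w) = ||D w - d_target||^2 using only a
    random fraction of the voxels, rescaled to stay unbiased."""
    m = int(frac * n_voxels)
    idx = rng.choice(n_voxels, size=m, replace=False)
    residual = D[idx] @ w - d_target[idx]
    return (n_voxels / m) * 2.0 * D[idx].T @ residual

# A few steepest-descent steps driven by the sampled gradient.
step = 1e-3
for _ in range(50):
    w = np.maximum(w - step * sampled_gradient(w), 0.0)   # keep intensities non-negative
```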
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Jakeman, John; Gittelson, Claude
2015-01-08
In this paper we present a localized polynomial chaos expansion for partial differential equations (PDE) with random inputs. In particular, we focus on time independent linear stochastic problems with high dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low dimensional local problems and can be highly efficient. In our paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.
NASA Astrophysics Data System (ADS)
Schünemann, Adriano Luis; Inácio Fernandes Filho, Elpídio; Rocha Francelino, Marcio; Rodrigues Santos, Gérson; Thomazini, Andre; Batista Pereira, Antônio; Gonçalves Reynaud Schaefer, Carlos Ernesto
2017-04-01
The values of environmental variables at non-sampled sites can be predicted from a minimum data set through interpolation techniques. Kriging and the Random Forest algorithm are examples of predictors used for this purpose. The objective of this work was to compare methods for spatializing soil attributes in a recently deglaciated environment with complex landforms. Prediction of the selected soil attributes (potassium, calcium and magnesium) in ice-free areas was tested using morphometric covariables and using geostatistical models without these covariables. For this, 106 soil samples were collected at 0-10 cm depth in Keller Peninsula, King George Island, Maritime Antarctica. Soil chemical analysis was performed by the gravimetric method, determining values of potassium, calcium and magnesium for each sampled point. Digital terrain models (DTMs) were obtained using a Terrestrial Laser Scanner. DTMs were generated from a point cloud at spatial resolutions of 1, 5, 10, 20 and 30 m, from which 40 morphometric covariates were derived. Simple Kriging was performed in the R software environment. The same data set, coupled with the morphometric covariates, was used to predict values of the studied attributes at non-sampled sites with the Random Forest interpolator. Little difference was observed between the predictions generated by the Simple Kriging and Random Forest interpolators, and DTMs with finer spatial resolution did not improve the quality of soil attribute prediction. Results revealed that Simple Kriging can be used as an interpolator when morphometric covariates are not available, with little impact on quality. Further work is needed on techniques for predicting soil chemical attributes, especially in periglacial areas with complex landforms.
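A rough Python sketch of the Random Forest interpolation step (the study itself used R): the attribute measured at the 106 sampled points is regressed on coordinates and morphometric covariates, and the fitted forest then predicts the attribute at unsampled sites. The synthetic data, covariate names, and forest settings are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy stand-in for the 106 sampled points: coordinates, two morphometric
# covariates (e.g. elevation, slope) and a measured attribute (e.g. potassium).
n = 106
X_obs = np.column_stack([
    rng.uniform(0, 1000, n),   # easting  (m)
    rng.uniform(0, 1000, n),   # northing (m)
    rng.uniform(0, 150, n),    # elevation covariate
    rng.uniform(0, 30, n),     # slope covariate
])
y_obs = 0.02 * X_obs[:, 2] + rng.normal(0, 0.5, n)   # synthetic potassium values

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_obs, y_obs)

# Predict the attribute at a set of unsampled sites.
X_new = np.column_stack([
    rng.uniform(0, 1000, 400),
    rng.uniform(0, 1000, 400),
    rng.uniform(0, 150, 400),
    rng.uniform(0, 30, 400),
])
k_predicted = rf.predict(X_new)
```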
Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.
Blutke, Andreas; Wanke, Rüdiger
2018-03-06
In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimens for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.
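As a small illustration of the systematic random sampling principle used for parenchymal organs (a uniformly random start followed by a fixed sampling interval, so every location has the same probability of being sampled), here is a sketch for slab positions along an organ; the interval and organ length are illustrative, not the published protocol's values.

```python
import random

def systematic_random_slabs(organ_length_mm, slab_interval_mm, rng=random):
    """Return slab positions for systematic random sampling:
    a single uniformly random start within the first interval,
    then every slab_interval_mm thereafter, so every part of the
    organ has the same chance of being sampled."""
    start = rng.uniform(0.0, slab_interval_mm)
    positions = []
    pos = start
    while pos < organ_length_mm:
        positions.append(round(pos, 1))
        pos += slab_interval_mm
    return positions

random.seed(42)
print(systematic_random_slabs(organ_length_mm=180.0, slab_interval_mm=20.0))
# e.g. [12.8, 32.8, ..., 172.8]: equally spaced slabs with a random offset
```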
Quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices
NASA Astrophysics Data System (ADS)
Chakhmakhchyan, L.; Cerf, N. J.; Garcia-Patron, R.
2017-08-01
We construct a quantum-inspired classical algorithm for computing the permanent of Hermitian positive semidefinite matrices by exploiting a connection between these mathematical structures and the boson sampling model. Specifically, the permanent of a Hermitian positive semidefinite matrix can be expressed in terms of the expected value of a random variable, which stands for a specific photon-counting probability when measuring a linear-optically evolved random multimode coherent state. Our algorithm then approximates the matrix permanent from the corresponding sample mean and is shown to run in polynomial time for various sets of Hermitian positive semidefinite matrices, achieving a precision that improves over known techniques. This work illustrates how quantum optics may benefit algorithm development.
ERIC Educational Resources Information Center
Chidi, Christopher O.; Shadare, Oluseyi A.
2011-01-01
This study investigated the influence of host community on industrial relations practices and policies using Agbara community and Power Holding Company of Nigeria PLC as a case. The study adopted both the qualitative and quantitative methods. A total of 120 samples were drawn from the population using the simple random sampling technique in which…
2009-08-01
assess the performance of remedial efforts. These techniques are expensive and, by themselves, are effectively random samples guided by the training … technology should be further explored and developed for use in pre-amendment tracer tests and quantitative remedial assessments. … and flow of injectate. Site assessment following groundwater remediation efforts typically involves discrete point sampling using wells or
ERIC Educational Resources Information Center
Ibezim, Don O.; McCracken, J. David
A study examined the extent to which international agricultural dimensions were taught in secondary agricultural programs and factors associated with the extent of integration. A systematic sampling technique was used to select a random sample of 332 of the 2,612 secondary agricultural teachers in 12 states of the North Central United States. Of…
Evaluating cost-efficiency and accuracy of hunter harvest survey designs
Lukacs, P.M.; Gude, J.A.; Russell, R.E.; Ackerman, B.B.
2011-01-01
Effective management of harvested wildlife often requires accurate estimates of the number of animals harvested annually by hunters. A variety of techniques exist to obtain harvest data, such as hunter surveys, check stations, mandatory reporting requirements, and voluntary reporting of harvest. Agencies responsible for managing harvested wildlife such as deer (Odocoileus spp.), elk (Cervus elaphus), and pronghorn (Antilocapra americana) are challenged with balancing the cost of data collection versus the value of the information obtained. We compared precision, bias, and relative cost of several common strategies, including hunter self-reporting and random sampling, for estimating hunter harvest using a realistic set of simulations. Self-reporting with a follow-up survey of hunters who did not report produces the best estimate of harvest in terms of precision and bias, but it is also, by far, the most expensive technique. Self-reporting with no follow-up survey risks very large bias in harvest estimates, and the cost increases with increased response rate. Probability-based sampling provides a substantial cost savings, though accuracy can be affected by nonresponse bias. We recommend stratified random sampling with a calibration estimator used to reweight the sample based on the proportions of hunters responding in each covariate category as the best option for balancing cost and accuracy. © 2011 The Wildlife Society.
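A minimal sketch of the recommended design, under simplifying assumptions: hunters are sampled at random within strata, and the stratum means are weighted by known stratum sizes (a basic calibration/post-stratification step) to estimate total harvest. Stratum sizes, sample sizes, and harvest rates below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known license counts per stratum (e.g. resident/non-resident x weapon type).
N_h = np.array([40000, 12000, 8000])          # hunters per stratum
true_rate = np.array([0.25, 0.40, 0.15])      # unknown per-hunter harvest rate

# Stratified random sample: survey a subset of hunters in each stratum.
n_h = np.array([800, 400, 200])
harvest_means = np.array([
    rng.binomial(1, true_rate[h], n_h[h]).mean() for h in range(3)
])

# Calibration/post-stratification: weight each stratum mean by its known size.
estimated_total_harvest = np.sum(N_h * harvest_means)
print(int(estimated_total_harvest))           # close to sum(N_h * true_rate) = 16000
```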
General statistical considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eberhardt, L L; Gilbert, R O
From NAEG plutonium environmental studies program meeting; Las Vegas, Nevada, USA (2 Oct 1973). The high sampling variability encountered in environmental plutonium studies along with high analytical costs makes it very important that efficient soil sampling plans be used. However, efficient sampling depends on explicit and simple statements of the objectives of the study. When there are multiple objectives it may be difficult to devise a wholly suitable sampling scheme. Sampling for long-term changes in plutonium concentration in soils may also be complex and expensive. Further attention to problems associated with compositing samples is recommended, as is the consistent use of random sampling as a basic technique. (auth)
NASA Astrophysics Data System (ADS)
Munahefi, D. N.; Waluya, S. B.; Rochmad
2018-03-01
The purpose of this research was to identify the effectiveness of the Problem Based Learning (PBL) model based on Self Regulated Learning (SRL) on mathematical creative thinking ability and to analyze the mathematical creative thinking of high school students in solving mathematical problems. The population of this study was grade X students of SMA N 3 Klaten. The research method used in this research was sequential explanatory. In the quantitative stage, a simple random sampling technique was used: two classes were selected randomly, with the experimental class taught with the SRL-based PBL model and the control class taught with an expository model. Sample selection in the qualitative stage used a non-probability sampling technique in which three students each were selected from the high, medium, and low academic levels. The PBL model with the SRL approach was effective for students' mathematical creative thinking ability. Students of low academic level taught with the SRL-based PBL model achieved the aspects of fluency and flexibility. Students of medium academic level achieved the fluency and flexibility aspects well, but their originality was not yet well developed. Students of high academic level could reach the aspect of originality.
A seamless acquisition digital storage oscilloscope with three-dimensional waveform display
NASA Astrophysics Data System (ADS)
Yang, Kuojun; Tian, Shulin; Zeng, Hao; Qiu, Lei; Guo, Lianping
2014-04-01
In traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and oscilloscope is blind to the input signal. Thus, this duration is called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in shorter time, dead time in traditional DSO that causes the loss of measured signal needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, three-dimensional waveform mapping (TWM) technique, which converts sampled data to displayed waveform, is proposed. With this technique, not only the process speed is improved, but also the probability information of waveform is displayed with different brightness. Thus, a three-dimensional waveform is shown to the user. To reduce processing time further, parallel TWM which processes several sampled points simultaneously, and dual-port random access memory based pipelining technique which can process one sampling point in one clock period are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) are used for storing sampled data alternately, thus the acquisition can continue during data processing. Therefore, the dead time of DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope and a combined pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experiment results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest value in all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing the seamless acquisition.
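The paper's TWM is a hardware technique, but its core idea can be mimicked in a few lines of Python: accumulate many acquisitions into a (time, voltage) hit-count map so that display brightness reflects how often each trajectory occurs, letting rare events remain visible. The toy signal and bin counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_acquisitions = 1000, 500
display = np.zeros((256, n_points), dtype=np.uint32)   # rows = voltage bins

t = np.arange(n_points)
for _ in range(n_acquisitions):
    # Toy signal: a sine wave plus noise and an occasional glitch.
    wave = 100 * np.sin(2 * np.pi * t / 200) + rng.normal(0, 5, n_points)
    if rng.random() < 0.01:
        wave[400:420] += 80                            # infrequent event
    volt_bins = np.clip((wave + 128).astype(int), 0, 255)
    display[volt_bins, t] += 1                         # brightness accumulation

# 'display' now holds hit counts: frequent trajectories are bright,
# rare events such as the glitch appear dim but are not lost.
```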
NASA Astrophysics Data System (ADS)
Makahinda, T.
2018-02-01
The purpose of this research is to determine the effect of a technology-based learning model and assessment technique on thermodynamics achievement, controlling for students' intelligence. This research is an experimental study. The sample was taken through cluster random sampling with a total of 80 student respondents. The results show that the thermodynamics learning outcomes of students taught with the environmental-utilization learning model are higher than those of students taught with animated simulation, after controlling for student intelligence. There is an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics learning outcomes, after controlling for student intelligence. Based on these findings, thermodynamics lectures should use the environmental-utilization learning model together with the project assessment technique.
On-field measurement trial of 4×128 Gbps PDM-QPSK signals by linear optical sampling
NASA Astrophysics Data System (ADS)
Bin Liu; Wu, Zhichao; Fu, Songnian; Feng, Yonghua; Liu, Deming
2017-02-01
Linear optical sampling is a promising characterization technique for advanced modulation formats when combined with digital signal processing (DSP) and a software-synchronized algorithm. We theoretically investigate the acquisition of optical samples when the high-speed signal under test is either periodic or random. In particular, when the profile of the optical sampling pulse is asymmetrical, the repetition frequency of the sampling pulse needs careful adjustment in order to obtain the correct waveform. We then demonstrate an on-field measurement trial of commercial four-channel 128 Gbps polarization division multiplexing quadrature phase shift keying (PDM-QPSK) signals with truly random characteristics using self-developed equipment. A passively mode-locked fiber laser (PMFL) with a repetition frequency of 95.984 MHz is used as the optical sampling source, while four balanced photodetectors (BPDs) with 400 MHz bandwidth and a four-channel analog-to-digital converter (ADC) with a 1.25 GS/s sampling rate are used for data acquisition. A performance comparison with a conventional optical modulation analyzer (OMA) verifies that the self-developed equipment has the advantages of low cost, easy implementation, and fast response.
Kindergarten Teachers' Experience with Reporting Child Abuse in Taiwan
ERIC Educational Resources Information Center
Feng, Jui-Ying; Huang, Tzu-Yi; Wang, Chi-Jen
2010-01-01
Objective: The objectives were to examine factors associated with reporting child abuse among kindergarten teachers in Taiwan based on the Theory of Planned Behavior (TPB). Method: A stratified quota sampling technique was used to randomly select kindergarten teachers in Taiwan. The Child Abuse Intention Report Scale, which includes demographics,…
Artistic Tasks Outperform Nonartistic Tasks for Stress Reduction
ERIC Educational Resources Information Center
Abbott, Kayleigh A.; Shanahan, Matthew J.; Neufeld, Richard W. J.
2013-01-01
Art making has been documented as an effective stress reduction technique. In this between-subjects experimental study, possible mechanisms of stress reduction were examined in a sample of 52 university students randomly assigned to one of four conditions generated by factorially crossing Activity Type (artistic or nonartistic) with Coping…
Mathematical Intelligence and Mathematical Creativity: A Causal Relationship
ERIC Educational Resources Information Center
Tyagi, Tarun Kumar
2017-01-01
This study investigated the causal relationship between mathematical creativity and mathematical intelligence. Four hundred thirty-nine 8th-grade students, age ranged from 11 to 14 years, were included in the sample of this study by random cluster technique on which mathematical creativity and Hindi adaptation of mathematical intelligence test…
Women in University Management: The Nigerian Experience
ERIC Educational Resources Information Center
Abiodun-Oyebanji, Olayemi; Olaleye, F.
2011-01-01
This study examined women in university management in Nigeria. It was a descriptive research of the survey type. The population of the study comprised all the public universities in southwest Nigeria, out of which three were selected through the stratified random sampling technique. Three hundred respondents who were in management positions were…
Assessing Principals' Quality Assurance Strategies in Osun State Secondary Schools, Nigeria
ERIC Educational Resources Information Center
Fasasi, Yunus Adebunmi; Oyeniran, Saheed
2014-01-01
This paper examined principals' quality assurance strategies in secondary schools in Osun State, Nigeria. The study adopted a descriptive survey research design. Stratified random sampling technique was used to select 10 male and 10 female principals, and 190 male and190 female teachers. "Secondary School Principal Quality Assurance…
Continuing Dental Education Needs Assessment: A Regional Survey.
ERIC Educational Resources Information Center
Young, Lynda J.; Rudney, Joel D.
1991-01-01
From a random sample of 650 dentists in 6 states, 357 responses indicated preference for 1-day, Friday or Saturday programs, and lecture more than participation or demonstration. Most would like to learn more specialty and new techniques. Dentists averaged 50 hours of continuing education per year. (SK)
The 15-Second Television Commercial: A Study of Executive Perception.
ERIC Educational Resources Information Center
Asahina, Roberta R.
An exploratory study examined the perceptions of creative directors and broadcast production managers in advertising agencies regarding the perceived effects of the 15-second commercial upon creative formats and production techniques. A sample of 600 randomly selected advertising executives and managers were surveyed using a 55-item mailed…
[A comparison of convenience sampling and purposive sampling].
Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien
2014-06-01
Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation not by statistical power analysis.
Random On-Board Pixel Sampling (ROPS) X-Ray Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui; Iaroshenko, O.; Li, S.
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
Scott, J.C.
1990-01-01
Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
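A simplified sketch of the selection logic described (ignoring the GIS layer handling): candidate sites are grouped by the category of the areal subset they fall in, and a specified number of sites is drawn at random from each category in turn. Site identifiers, categories, and counts are illustrative.

```python
import random
from collections import defaultdict

# Assumed input: candidate wells, each tagged with the category (land use,
# hydrogeologic setting, ...) of the areal subset it falls in.
candidate_sites = [
    {"site_id": f"W{i:03d}", "category": random.choice(["urban", "cropland", "forest"])}
    for i in range(300)
]

sites_per_category = {"urban": 10, "cropland": 15, "forest": 5}

by_category = defaultdict(list)
for site in candidate_sites:
    by_category[site["category"]].append(site)

random.seed(1990)
network = []
for cat, n in sites_per_category.items():
    network.extend(random.sample(by_category[cat], k=n))   # one category at a time

print(len(network), "sites selected")
```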
Tee, G H; Moody, A H; Cooke, A H; Chiodini, P L
1993-01-01
AIM--To compare the use of commercial monoclonal antibody test systems--the Giardia CEL IF test and the Crypto CEL IF test--for the detection of Giardia lamblia and Cryptosporidium parvum antigens in faeces with conventional techniques. METHODS--Sensitivity and specificity were evaluated using preparations of cysts of G lamblia and purified oocysts of C parvum. Evaluation of 59 random faecal samples passing through the Department of Clinical Parasitology, Hospital for Tropical Diseases, London, was carried out for both organisms. RESULTS--The fluorescence staining techniques proved more sensitive than other tests routinely used for diagnosis. PMID:8331181
Probability of coincidental similarity among the orbits of small bodies - I. Pairing
NASA Astrophysics Data System (ADS)
Jopek, Tadeusz Jan; Bronikowska, Małgorzata
2017-09-01
The probability of coincidental clustering among orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters or the size of the identified group; it is different for groups of 2, 3, 4, … members. The probability of coincidental clustering is assessed by numerical simulation; therefore, it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample, we have assessed the probability of random pairing among several orbital populations of different sizes. We have found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can be significantly different for orbital samples obtained by different observation techniques. Also, for the user's convenience, we have obtained several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small value of the probability of coincidental similarity between two orbits.
Hayes, Timothy; Usami, Satoshi; Jacobucci, Ross; McArdle, John J
2015-12-01
In this article, we describe a recent development in the analysis of attrition: using classification and regression trees (CART) and random forest methods to generate inverse sampling weights. These flexible machine learning techniques have the potential to capture complex nonlinear, interactive selection models, yet to our knowledge, their performance in the missing data analysis context has never been evaluated. To assess the potential benefits of these methods, we compare their performance with commonly employed multiple imputation and complete case techniques in 2 simulations. These initial results suggest that weights computed from pruned CART analyses performed well in terms of both bias and efficiency when compared with other methods. We discuss the implications of these findings for applied researchers. (c) 2015 APA, all rights reserved).
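A minimal sketch of the weighting step, assuming scikit-learn: a shallow decision tree (standing in for pruned CART) is fit to predict response versus attrition from baseline covariates, and respondents receive inverse probability-of-response weights. The selection model, tree depth, and clipping bounds are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000

# Baseline covariates and a nonlinear, interactive selection model for attrition.
age = rng.uniform(20, 80, n)
score = rng.normal(0, 1, n)
p_respond = 1 / (1 + np.exp(-(0.03 * (age - 50) + 1.2 * score * (age > 60))))
responded = rng.random(n) < p_respond

X = np.column_stack([age, score])

# A shallow ("pruned") tree stands in for pruned CART; the depth is an assumption.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50, random_state=0)
tree.fit(X, responded)
p_hat = tree.predict_proba(X)[:, 1]

# Inverse sampling weights for the respondents only.
weights = np.where(responded, 1.0 / np.clip(p_hat, 0.05, 1.0), 0.0)
```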
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghoos, K., E-mail: kristel.ghoos@kuleuven.be; Dekeyser, W.; Samaey, G.
2016-10-01
The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins Monro. Also, practical procedures to estimate the errors in complex codes are proposed. Moreover, first results with more complex models show that an order of magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
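The Robbins-Monro coupling can be illustrated with a toy fixed-point iteration: the noisy Monte Carlo response is blended into the running state with a decreasing relaxation factor, so the statistical noise is averaged out over iterations. The contraction map and noise level below are assumptions, not the plasma-edge model.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_mc_response(x):
    """Stand-in for the Monte Carlo neutral model: a contraction map
    G(x) = 0.5*x + 1 evaluated with statistical noise."""
    return 0.5 * x + 1.0 + rng.normal(0.0, 0.2)

x = 0.0
for n in range(1, 201):
    g = noisy_mc_response(x)
    alpha = 1.0 / n                 # Robbins-Monro relaxation factor
    x = (1.0 - alpha) * x + alpha * g

print(x)   # approaches the fixed point x* = 2 as the noise is averaged out
```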
Hsu, Jia-Lien; Hung, Ping-Cheng; Lin, Hung-Yen; Hsieh, Chung-Ho
2015-04-01
Breast cancer is one of the most common causes of cancer mortality. Early detection through mammography screening could significantly reduce mortality from breast cancer. However, most screening methods may consume large amounts of resources. We propose a computational model, based solely on personal health information, for breast cancer risk assessment. Our model can serve as a pre-screening program in a low-cost setting. In our study, the data set, consisting of 3976 records, was collected from Taipei City Hospital between 2008.1.1 and 2008.12.31. Based on the dataset, we first apply sampling techniques and a dimension reduction method to preprocess the testing data. Then, we construct various kinds of classifiers (including basic classifiers, ensemble methods, and cost-sensitive methods) to predict the risk. The cost-sensitive method with a random forest classifier is able to achieve a recall (or sensitivity) of 100%. At a recall of 100%, the precision (positive predictive value, PPV) and specificity of the cost-sensitive method with the random forest classifier were 2.9% and 14.87%, respectively. In our study, we build a breast cancer risk assessment model using data mining techniques. Our model has the potential to serve as an assisting tool in breast cancer screening.
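A hedged sketch of one way to build such a cost-sensitive random forest with scikit-learn: class weights penalize missed cancers more heavily, and the decision threshold is lowered to push recall toward 100%. The synthetic features, weights, and threshold are illustrative and do not reproduce the paper's data, sampling, or dimension-reduction steps.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3976                                   # record count taken from the abstract
X = rng.normal(size=(n, 8))                # toy stand-in for personal-health features
y = (rng.random(n) < 0.05).astype(int)     # rare positive class (illustrative)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)

# Penalize missing a cancer case much more than raising a false alarm.
clf = RandomForestClassifier(n_estimators=300,
                             class_weight={0: 1, 1: 50},
                             random_state=0)
clf.fit(X_tr, y_tr)

# Lowering the decision threshold trades precision for recall (sensitivity).
proba = clf.predict_proba(X_te)[:, 1]
pred = (proba >= 0.1).astype(int)
recall = (pred[y_te == 1] == 1).mean()
print(recall)
```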
Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab
2014-08-25
We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that the reconstruction accuracy of our method is significantly better than state-of-the-art techniques; and we achieve this while saving sensing, processing and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS based techniques.
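As a rough illustration of recovering a randomly under-sampled multichannel block by matrix completion, the sketch below runs a basic singular value thresholding (SVT) iteration on a synthetic low-rank matrix; this is a generic stand-in, not the authors' algorithm, and the threshold, step size, and iteration count are assumptions.

```python
import numpy as np

def svt_complete(M_obs, mask, tau=None, delta=1.2, n_iter=300):
    """Singular value thresholding for matrix completion.
    M_obs: observed entries (zeros elsewhere); mask: True where observed."""
    if tau is None:
        tau = 5.0 * np.sqrt(M_obs.size)
    Y = np.zeros_like(M_obs)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt       # soft-threshold singular values
        Y = Y + delta * mask * (M_obs - X)            # feed back observed residuals
    return X

# Toy low-rank "multichannel EEG block": 32 channels x 256 samples, rank 4.
rng = np.random.default_rng(0)
A = rng.normal(size=(32, 4)) @ rng.normal(size=(4, 256))
mask = rng.random(A.shape) < 0.4            # keep 40% of the samples at random
X_hat = svt_complete(A * mask, mask)
print(np.linalg.norm(X_hat - A) / np.linalg.norm(A))
```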
Unbiased feature selection in learning random forests for high-dimensional data.
Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi
2015-01-01
Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting. This makes RFs have poor accuracy when working with high-dimensional data. Besides that, RFs have bias in the feature selection process where multivalued features are favored. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees, while allowing one to reduce dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets including image datasets. The experimental results have shown that RFs with the proposed approach outperformed the existing random forests in increasing the accuracy and the AUC measures.
ERIC Educational Resources Information Center
Afam, Clifford C.
2012-01-01
Using a correlational, cross-sectional study design with self-administered questionnaires, this study explored the extent to which leadership practices of deans and department heads influence faculty job satisfaction in baccalaureate degree nursing programs. Using a simple random sampling technique, the study survey was sent to 400 faculty…
Improving EFL Learners' Pronunciation of English through Quiz-Demonstration-Practice-Revision (QDPR)
ERIC Educational Resources Information Center
Moedjito
2018-01-01
This study investigates the effectiveness of Quiz-Demonstration-Practice-Revision (QDPR) in improving EFL learners' pronunciation of English. To achieve the goal, the present researcher conducted a one-group pretest-posttest design. The experimental group was selected using a random sampling technique with consideration of the inclusion criteria.…
Science Teachers' Information Processing Behaviours in Nepal: A Reflective Comparative Study
ERIC Educational Resources Information Center
Acharya, Kamal Prasad
2017-01-01
This study examines the information processing behaviours of secondary level science teachers. It is based on data collected from 50 secondary level school science teachers working in the Kathmandu valley. Simple random sampling and the Cognitive Style Inventory have been used, respectively, as the technique and tool to…
Behavioural Problems of Juvenile Street Hawkers in Uyo Metropolis, Nigeria
ERIC Educational Resources Information Center
Udoh, Nsisong A.; Joseph, Eme U.
2012-01-01
The study sought the opinions of Faculty of Education Students of University of Uyo on the behavioural problems of juvenile street hawkers in Uyo metropolis. Five research hypotheses were formulated to guide the study. This cross-sectional survey employed multi-stage random sampling technique in selecting 200 regular undergraduate students in the…
Relationship between Study Habits and Test Anxiety of Higher Secondary Students
ERIC Educational Resources Information Center
Lawrence, Arul A. S.
2014-01-01
The present study aims to probe the relationship between study habits and test anxiety of higher secondary students. In this normative study survey method was employed. The population for the present study consisted of higher secondary students studying in Tirunelveli district. The investigator used the simple random sampling technique. The sample…
ERIC Educational Resources Information Center
Apaak, Daniel; Sarpong, Emmanuel Osei
2015-01-01
This paper examined internal challenges affecting academic performance of student-athletes in Ghanaian public universities, using a descriptive survey research design. Proportionate random sampling technique was employed to select Three Hundred and Thirty-Two (332) respondents for the study. The instrument used in gathering data for the study was…
Primary Teacher Trainees Preparedness to Teach Science: A Gender Perspective
ERIC Educational Resources Information Center
Mutisya, Sammy M.
2015-01-01
The purpose of this study was to determine Primary Teacher Education (PTE) Trainees' perceptions regarding their preparedness to teach science in primary schools. A descriptive survey research design was used and stratified proportionate random sampling techniques used to select 177 males and 172 females. The study found out that more male trainee…
An Investigation on Secondary School Students' Attitude towards Science in Ogun State, Nigeria
ERIC Educational Resources Information Center
Sakariyau, A. O.; Taiwo, Michael O.; Ajagbe, Olalere W.
2016-01-01
The study investigated the attitudes of secondary school students towards science in Odeda Local Government Area of Ogun State, Nigeria. Two hundred senior secondary school students consisting of 84 males and 116 females were selected from five secondary schools using stratified random sampling techniques. A 20-item Attitude to Science…
ERIC Educational Resources Information Center
Okoza, Jolly; Aluede, Oyaziwo; Owens-Sogolo, Osasere
2013-01-01
This study examined metacognitive awareness of learning strategies among Secondary School Students in Edo State, Nigeria. The study was an exploratory one, which utilized descriptive statistics. A total number of 1200 students drawn through multistage proportionate random sampling technique participated in the study. The study found that secondary…
Conflict Resolution Strategies in Non-Government Secondary Schools in Benue State, Nigeria
ERIC Educational Resources Information Center
Oboegbulem, Angie; Alfa, Idoko Alphonusu
2013-01-01
This study investigated perceived CRSs (conflict resolution strategies) for the resolution of conflicts in non-government secondary schools in Benue State, Nigeria. Three research questions and three hypotheses guided this study. A proportionate stratified random sampling technique was used to draw 15% of the population, which gave a total of 500…
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III; Weinberg, David H.; Melott, Adrian L.
1987-01-01
A quantitative measure of the topology of large-scale structure, the genus of density contours in a smoothed density distribution, is described and applied. For random phase (Gaussian) density fields, the mean genus per unit volume exhibits a universal dependence on threshold density, with a normalizing factor that can be calculated from the power spectrum. If large-scale structure formed from the gravitational instability of small-amplitude density fluctuations, the topology observed today on suitable scales should follow the topology in the initial conditions. The technique is illustrated by applying it to simulations of galaxy clustering in a flat universe dominated by cold dark matter. The technique is also applied to a volume-limited sample of the CfA redshift survey and to a model in which galaxies reside on the surfaces of polyhedral 'bubbles'. The topology of the evolved mass distribution and 'biased' galaxy distribution in the cold dark matter models closely matches the topology of the density fluctuations in the initial conditions. The topology of the observational sample is consistent with the random phase, cold dark matter model.
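For reference, the "universal dependence on threshold density" for a Gaussian field has the closed form $g_s(\nu) = A\,(1-\nu^{2})\,e^{-\nu^{2}/2}$, where $\nu$ is the threshold density expressed in units of the standard deviation of the smoothed field and the amplitude $A$ is the power-spectrum-dependent normalizing factor mentioned in the abstract.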
Williams, M S; Ebel, E D; Cao, Y
2013-01-01
The fitting of statistical distributions to microbial sampling data is a common application in quantitative microbiology and risk assessment. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications.
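As a hedged illustration of the weighting idea (not the authors' code), the sketch below fits a lognormal concentration distribution by maximizing a weighted log-likelihood, with weights equal to the inverse of each sample's (here invented) inclusion probability; an unweighted fit on the same biased sample would overestimate the mean.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

# Hedged sketch: weighted maximum likelihood for a lognormal concentration
# distribution when samples enter with unequal inclusion probabilities pi_i.
# Weights are 1/pi_i; the selection mechanism below is invented purely to
# create a biased sample.
rng = np.random.default_rng(0)
true_mu, true_sigma = 1.0, 0.8
x = rng.lognormal(true_mu, true_sigma, size=2000)
pi = np.clip(x / x.max(), 0.05, 1.0)              # high concentrations more likely to be sampled
keep = rng.random(x.size) < pi
x_s, w = x[keep], 1.0 / pi[keep]

def neg_weighted_loglik(theta):
    mu, log_sigma = theta
    logpdf = lognorm.logpdf(x_s, s=np.exp(log_sigma), scale=np.exp(mu))
    return -np.sum(w * logpdf)

fit = minimize(neg_weighted_loglik, x0=np.array([0.0, 0.0]))
print("weighted fit:", fit.x[0], np.exp(fit.x[1]))   # close to (1.0, 0.8)
# Setting w = 1 (unweighted MLE) on the same data gives a visibly biased fit.
```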
Improvements in sub-grid, microphysics averages using quadrature based approaches
NASA Astrophysics Data System (ADS)
Chowdhary, K.; Debusschere, B.; Larson, V. E.
2013-12-01
Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. autoconversion rate, essentially interpreting the cloud microphysics quantities as a random variable in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, on the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
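A hedged sketch of the quadrature idea, using Gauss-Hermite points to average a Kessler-type autoconversion rate over an assumed Gaussian sub-grid PDF of cloud water, compared against plain random sampling of the same PDF; the rate constant, threshold, and PDF parameters are illustrative, not values from the paper.

```python
import numpy as np

# Grid-box average of a Kessler-type autoconversion rate A(q) = k * max(q - q_crit, 0)
# under an assumed Gaussian sub-grid PDF q ~ N(mu, sigma^2).  Constants are invented.
k, q_crit = 1.0e-3, 5.0e-4          # s^-1, kg/kg (assumed values)
mu, sigma = 6.0e-4, 2.0e-4

def autoconv(q):
    return k * np.maximum(q - q_crit, 0.0)

# Gauss-Hermite quadrature: E[f(q)] ~ (1/sqrt(pi)) * sum_i w_i f(mu + sqrt(2)*sigma*x_i)
x, w = np.polynomial.hermite.hermgauss(16)
quad_avg = np.sum(w * autoconv(mu + np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)

# Random (Monte Carlo) sampling of the same PDF for comparison
rng = np.random.default_rng(1)
mc_avg = autoconv(rng.normal(mu, sigma, size=100)).mean()

print(quad_avg, mc_avg)   # 16 deterministic points vs 100 random samples
```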
NASA Astrophysics Data System (ADS)
Ortega-Quijano, Noé; Fade, Julien; Roche, Muriel; Parnet, François; Alouini, Mehdi
2016-04-01
Polarimetric sensing by orthogonality breaking has been recently proposed as an alternative technique for performing direct and fast polarimetric measurements using a specific dual-frequency dual-polarization (DFDP) source. Based on the instantaneous Stokes-Mueller formalism to describe the high-frequency evolution of the DFDP beam intensity, we thoroughly analyze the interaction of such a beam with birefringent, dichroic and depolarizing samples. This allows us to confirm that orthogonality breaking is produced by the sample diattenuation, whereas this technique is immune to both birefringence and diagonal depolarization. We further analyze the robustness of this technique when polarimetric sensing is performed through a birefringent waveguide, and the optimal DFDP source configuration for fiber-based endoscopic measurements is subsequently identified. Finally, we consider a stochastic depolarization model based on an ensemble of random linear diattenuators, which makes it possible to understand the progressive vanishing of the detected orthogonality breaking signal as the spatial heterogeneity of the sample increases, thus confirming the insensitivity of this method to diagonal depolarization. The fact that the orthogonality breaking signal is exclusively due to the sample dichroism is an advantageous feature for the precise decoupled characterization of such an anisotropic parameter in samples showing several simultaneous effects.
Aumeran, C.; Thibert, E.; Chapelle, F. A.; Hennequin, C.; Lesens, O.
2012-01-01
Opinions differ on the value of microbiological testing of endoscopes, which varies according to the technique used. We compared the efficacy on bacterial biofilms of sampling solutions used for the surveillance of the contamination of endoscope channels. To compare efficacy, we used an experimental model of a 48-h Pseudomonas biofilm grown on endoscope internal tubing. Sampling of this experimental biofilm was performed with a Tween 80-lecithin-based solution, saline, and sterile water. We also performed a randomized prospective study during routine clinical practice in our hospital, randomly sampling endoscopes after reprocessing with two different solutions. Biofilm recovery, expressed as a logarithmic ratio of bacteria recovered to bacteria initially present in the biofilm, was significantly more effective with the Tween 80-lecithin-based solution than with saline solution (P = 0.002) and sterile water (P = 0.002). There was no significant difference between saline and sterile water. In the randomized clinical study, the rates of endoscopes that were contaminated with the Tween 80-lecithin-based sampling solution and the saline were 8/25 and 1/25, respectively (P = 0.02), and the mean numbers of bacteria recovered were 281 and 19 CFU/100 ml (P = 0.001), respectively. In conclusion, the efficiency and therefore the value of the monitoring of endoscope reprocessing by microbiological cultures are dependent on the sampling solution used. A sampling solution with a tensioactive action is more efficient than saline in detecting biofilm contamination of endoscopes. PMID:22170930
Xu, Jiao; Zhang, Juan; Wang, Xue-Qiang; Wang, Xuan-Lin; Wu, Ya; Chen, Chan-Cheng; Zhang, Han-Yu; Zhang, Zhi-Wan; Fan, Kai-Yi; Zhu, Qiang; Deng, Zhi-Wei
2017-12-01
Total knee arthroplasty (TKA) has become the procedure most preferred by patients for the relief of pain caused by knee osteoarthritis, and TKA patients aim for a speedy recovery after surgery. Joint mobilization techniques have been widely used in rehabilitation to relieve pain and improve joint mobility; however, randomized controlled trials demonstrating the curative effect of these techniques remain lacking to date. Accordingly, this study aims to investigate whether joint mobilization techniques are effective for primary TKA. We will conduct a single-blind, prospective, randomized, controlled trial of 120 patients with unilateral TKA. Patients will be randomized into an intervention group, a physical modality therapy group, and a usual care group. The intervention group will undergo joint mobilization manipulation treatment once a day and regular training twice a day for a month. The physical modality therapy group will undergo physical therapy once a day and regular training twice a day for a month. The usual care group will perform regular training twice a day for a month. Primary outcome measures will be based on the visual analog scale, the knee joint Hospital for Special Surgery score, range of motion, surrounded degree, and adverse effects. Secondary indicators will include manual muscle testing, the 36-Item Short Form Health Survey, Berg Balance Scale function evaluation, the Pittsburgh Sleep Quality Index, proprioception, and muscle morphology. We will perform intention-to-treat analysis if a subject withdraws from the trial. The important features of this trial are its randomization procedures, single-blind design, large sample size, and standardized protocol. The result of this study may serve as a guide for TKA patients, medical personnel, and healthcare decision makers. The trial was registered on 11 September 2016 at http://www.chictr.org.cn/showproj.aspx?proj=15262 (identifier: ChiCTR-IOR-16009192) and is also listed in the WHO trial registry at http://apps.who.int/trialsearch/Trial2.aspx?TrialID=ChiCTR-IOR-16009192.
NASA Astrophysics Data System (ADS)
Trigila, Alessandro; Iadanza, Carla; Esposito, Carlo; Scarascia-Mugnozza, Gabriele
2015-11-01
The aim of this work is to define reliable susceptibility models for shallow landslides using Logistic Regression and Random Forests multivariate statistical techniques. The study area, located in North-East Sicily, was hit on October 1st 2009 by a severe rainstorm (225 mm of cumulative rainfall in 7 h) which caused flash floods and more than 1000 landslides. Several small villages, such as Giampilieri, were hit with 31 fatalities, 6 missing persons and damage to buildings and transportation infrastructures. Landslides, mainly types such as earth and debris translational slides evolving into debris flows, were triggered on steep slopes and involved colluvium and regolith materials which cover the underlying metamorphic bedrock. The work has been carried out with the following steps: i) realization of a detailed event landslide inventory map through field surveys coupled with observation of high resolution aerial colour orthophoto; ii) identification of landslide source areas; iii) data preparation of landslide controlling factors and descriptive statistics based on a bivariate method (Frequency Ratio) to get an initial overview on existing relationships between causative factors and shallow landslide source areas; iv) choice of criteria for the selection and sizing of the mapping unit; v) implementation of 5 multivariate statistical susceptibility models based on Logistic Regression and Random Forests techniques and focused on landslide source areas; vi) evaluation of the influence of sample size and type of sampling on results and performance of the models; vii) evaluation of the predictive capabilities of the models using ROC curve, AUC and contingency tables; viii) comparison of model results and obtained susceptibility maps; and ix) analysis of temporal variation of landslide susceptibility related to input parameter changes. Models based on Logistic Regression and Random Forests have demonstrated excellent predictive capabilities. Land use and wildfire variables were found to have a strong control on the occurrence of very rapid shallow landslides.
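As a hedged illustration of step (v) for the Random Forests branch only, the sketch below trains a scikit-learn forest on a synthetic stand-in for the mapping-unit table (slope, land-use code, wildfire index) and scores it with the AUC, as in steps (vi)-(vii); none of the variables or values come from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the landslide-source-area dataset described above.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 45, n),        # slope angle (deg)
    rng.integers(0, 5, n),        # land-use class code
    rng.uniform(0, 1, n),         # wildfire index
])
logit = 0.08 * X[:, 0] + 1.2 * X[:, 2] - 4.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # 1 = landslide source area

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
print("feature importances:", rf.feature_importances_)
```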
ANALYSIS OF SAMPLING TECHNIQUES FOR IMBALANCED DATA: AN N=648 ADNI STUDY
Dubey, Rashmi; Zhou, Jiayu; Wang, Yalin; Thompson, Paul M.; Ye, Jieping
2013-01-01
Many neuroimaging applications deal with imbalanced imaging data. For example, in the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly two times the Alzheimer’s disease (AD) patients for the structural magnetic resonance imaging (MRI) modality and six times the control cases for the proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize the overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and a combination of over- and undersampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two different classifiers, Random Forest and Support Vector Machines, based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity measures. Our extensive experimental results show that for various problem settings in ADNI, (1) a balanced training set obtained with K-Medoids-based undersampling gives the best overall performance among the different data sampling techniques and the no-sampling approach; and (2) sparse logistic regression with stability selection achieves competitive performance among the various feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. PMID:24176869
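A hedged sketch of the K-Medoids-style undersampling that the paper found best: the majority class is clustered and one representative per cluster is kept so the training set becomes balanced. To keep the sketch dependency-light, each medoid is approximated by the majority-class sample nearest a KMeans centroid (a true K-Medoids solver would be used in practice); the data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import pairwise_distances_argmin

# Cluster-based undersampling in the spirit of the K-Medoids approach above.
rng = np.random.default_rng(0)
X_maj = rng.normal(0.0, 1.0, size=(600, 10))      # e.g. MCI cases (majority class)
X_min = rng.normal(0.8, 1.0, size=(100, 10))      # e.g. AD cases (minority class)

k = len(X_min)                                    # balance the classes 1:1
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_maj)
medoid_idx = pairwise_distances_argmin(km.cluster_centers_, X_maj)  # nearest sample per centroid
X_bal = np.vstack([X_maj[medoid_idx], X_min])
y_bal = np.r_[np.zeros(k), np.ones(k)]

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
```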
Failure rate of inferior alveolar nerve block among dental students and interns
AlHindi, Maryam; Rashed, Bayan; AlOtaibi, Noura
2016-01-01
Objectives: To report the failure rate of inferior alveolar nerve block (IANB) among dental students and interns and the causes of failure, to investigate awareness of different IANB techniques, and to report IANB-associated complications. Methods: A 3-page questionnaire containing 13 questions was distributed to a random sample of 350 third- to fifth-year students and interns at the College of Dentistry, King Saud University, Riyadh, Saudi Arabia in January 2011. It included demographic questions (age, gender, and academic level) and questions on IANB failure frequency and reasons, actions taken to overcome the failure, and awareness of different anesthetic techniques, supplementary techniques, and complications. Results: Of the 250 distributed questionnaires, 238 were returned (68% response rate). Most (85.7%) of the surveyed sample had experienced IANB failure once or twice. The participants attributed the failures most commonly (66.45%) to anatomical variations. The most common alternative technique used was intraligamentary injection (57.1%), although 42.8% of the sample never attempted any alternatives. A large portion of the sample stated that they either lacked both knowledge of and training in other techniques (44.9%), or that they had knowledge of them but not enough training to perform them (45.8%). Conclusion: To decrease IANB failure rates among dental students and interns, knowledge of landmarks and anatomical variation, and training in alternatives to IANB, such as the Gow-Gates and Akinosi techniques, should be enhanced both theoretically and clinically in the dental curriculum. PMID:26739980
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2014-01-01
This report describes a modeling and simulation approach for disturbance patterns representative of the environment experienced by a digital system in an electromagnetic reverberation chamber. The disturbance is modeled by a multi-variate statistical distribution based on empirical observations. Extended versions of the Rejection Sampling and Inverse Transform Sampling techniques are developed to generate multi-variate random samples of the disturbance. The results show that Inverse Transform Sampling returns samples with higher fidelity relative to the empirical distribution. This work is part of an ongoing effort to develop a resilience assessment methodology for complex safety-critical distributed systems.
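For orientation, a minimal one-dimensional sketch of the two samplers named above, drawing from an empirical distribution; the report's multi-variate extensions and reverberation-chamber data are not reproduced.

```python
import numpy as np

# Stand-in "empirical disturbance data"; both samplers target its distribution.
rng = np.random.default_rng(0)
observed = rng.gamma(shape=2.0, scale=1.5, size=5000)

# Inverse Transform Sampling from the empirical CDF
xs = np.sort(observed)
cdf = np.arange(1, xs.size + 1) / xs.size
its_samples = np.interp(rng.random(1000), cdf, xs)

# Rejection Sampling against a histogram estimate of the density
hist, edges = np.histogram(observed, bins=100, density=True)
f_max = hist.max()
rej_samples = []
while len(rej_samples) < 1000:
    x = rng.uniform(edges[0], edges[-1])
    idx = min(np.searchsorted(edges, x, side="right") - 1, hist.size - 1)
    if rng.random() * f_max <= hist[idx]:
        rej_samples.append(x)
rej_samples = np.asarray(rej_samples)
print(its_samples.mean(), rej_samples.mean(), observed.mean())   # all near the gamma mean of 3.0
```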
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wild, M.; Rouhani, S.
1995-02-01
A typical site investigation entails extensive sampling and monitoring. In the past, sampling plans have been designed on purely ad hoc bases, leading to significant expenditures and, in some cases, collection of redundant information. In many instances, sampling costs exceed the true worth of the collected data. The US Environmental Protection Agency (EPA) therefore has advocated the use of geostatistics to provide a logical framework for sampling and analysis of environmental data. Geostatistical methodology uses statistical techniques for the spatial analysis of a variety of earth-related data. The use of geostatistics was developed by the mining industry to estimate ore concentrations. The same procedure is effective in quantifying environmental contaminants in soils for risk assessments. Unlike classical statistical techniques, geostatistics offers procedures to incorporate the underlying spatial structure of the investigated field. Sample points spaced close together tend to be more similar than samples spaced further apart. This can guide sampling strategies and determine complex contaminant distributions. Geostatistical techniques can be used to evaluate site conditions on the basis of regular, irregular, random and even spatially biased samples. In most environmental investigations, it is desirable to concentrate sampling in areas of known or suspected contamination. The rigorous mathematical procedures of geostatistics allow for accurate estimates at unsampled locations, potentially reducing sampling requirements. The use of geostatistics serves as a decision-aiding and planning tool and can significantly reduce short-term site assessment costs, long-term sampling and monitoring needs, as well as lead to more accurate and realistic remedial design criteria.
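A minimal sketch of the empirical semivariogram, the basic geostatistical summary behind the statement above that nearby samples tend to be more similar; coordinates and concentrations are synthetic, and a real study would continue with variogram modelling and kriging in a dedicated package (e.g. PyKrige).

```python
import numpy as np

# Empirical semivariogram from synthetic site data: semivariance of pairs of
# samples, binned by separation distance, typically rises with lag.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(300, 2))                      # sample locations (m)
conc = 5 + 0.05 * xy[:, 0] + rng.normal(0, 1, 300)           # contaminant level (mg/kg)

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
g = 0.5 * (conc[:, None] - conc[None, :]) ** 2               # semivariance of each pair
iu = np.triu_indices(len(conc), k=1)                         # unique pairs only

bins = np.linspace(0, 50, 11)
lag_centers = 0.5 * (bins[:-1] + bins[1:])
gamma = [g[iu][(d[iu] >= lo) & (d[iu] < hi)].mean() for lo, hi in zip(bins[:-1], bins[1:])]
print(list(zip(lag_centers.round(1), np.round(gamma, 2))))
```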
A k-Vector Approach to Sampling, Interpolation, and Approximation
NASA Astrophysics Data System (ADS)
Mortari, Daniele; Rogers, Jonathan
2013-12-01
The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
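A compact sketch of the range-searching case: a precomputed integer index (the k-vector) against a straight line through the sorted data lets later range queries run in essentially constant time plus a small local trim, independent of the database size. The 1-based index convention follows the usual presentation; the data are random.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.sort(rng.uniform(0.0, 1000.0, size=100_000))
n = data.size

# Reference line through (1, min - eps) and (n, max + eps), and the k-vector:
# kvec[i-1] = number of data values below the line at index i (built once).
eps = 1e-9
m = (data[-1] - data[0] + 2 * eps) / (n - 1)
q = data[0] - eps - m
line = m * np.arange(1, n + 1) + q
kvec = np.searchsorted(data, line, side="right")

def kvector_range(ya, yb):
    """Indices of data values lying in [ya, yb], via the k-vector."""
    ja = int(np.clip(np.floor((ya - q) / m), 1, n))
    jb = int(np.clip(np.ceil((yb - q) / m), 1, n))
    lo, hi = kvec[ja - 1], kvec[jb - 1]          # candidate block, may overshoot slightly
    idx = np.arange(lo, hi)
    return idx[(data[idx] >= ya) & (data[idx] <= yb)]

hits = kvector_range(250.0, 250.5)
print(hits.size, data[hits].min(), data[hits].max())
```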
ERIC Educational Resources Information Center
Khanehkeshi, Ali; Basavarajappa
2011-01-01
This paper investigates the relationship of academic stress with aggression, depression and academic performance of college students. Using a random sampling technique, 60 students, consisting of boys and girls, were selected as students having academic stress. The scale for assessing academic stress (Sinha, Sharma and Mahendra, 2001); the Buss-Perry…
Marital and Procreative Projections of Rural Louisiana Youth: A Historical Comparison.
ERIC Educational Resources Information Center
Smith, Kevin B.; Ohlendorf, George W.
Changes in marital and procreative projections among rural Louisiana high school youth between 1968 and 1972 were examined. In 1968 a proportionate, stratified, random cluster sampling technique was employed to secure data on seniors from 13 white and 7 black high schools. In 1972 public school integration and the establishment of private schools…
Causal Factors Influencing Adversity Quotient of Twelfth Grade and Third-Year Vocational Students
ERIC Educational Resources Information Center
Pangma, Rachapoom; Tayraukham, Sombat; Nuangchalerm, Prasart
2009-01-01
Problem statement: The aim of this research was to study the causal factors influencing the adversity quotient of twelfth-grade and third-year vocational students in Sisaket province, Thailand. Six hundred and seventy-two twelfth-grade students and 376 third-year vocational students were selected by multi-stage random sampling techniques. Approach:…
Prevalence, Causes and Effects of Bullying in Tertiary Institutions in Cross River State, Nigeria
ERIC Educational Resources Information Center
Ada, Mary Juliana; Okoli, Georgina; Obeten, Okoi Okorn; Akeke, M. N. G.
2016-01-01
This research evaluates the causes, consequences and effects of bullying in academic settings on students' academic performance in tertiary institutions in Cross River State, Nigeria. The research used purposive and random sampling techniques to select a sample of 302 students. A questionnaire served as the data collection instrument.…
ERIC Educational Resources Information Center
Igun, Sylvester Nosakhare
2008-01-01
The study examined extrinsic motivation as a correlate of work attitude in the Nigeria Police Force and its implications for counselling. 300 police personnel were selected by a random sampling technique from the six departments that make up the Police Force Headquarters, Abuja. The personnel were selected from each department using simple sampling…
The Effect of Cluster-Based Instruction on Mathematic Achievement in Inclusive Schools
ERIC Educational Resources Information Center
Gunarhadi, Sunardi; Anwar, Mohammad; Andayani, Tri Rejeki; Shaari, Abdull Sukor
2016-01-01
The research aimed to investigate the effect of Cluster-Based Instruction (CBI) on the academic achievement of Mathematics in inclusive schools. The sample was 68 students in two intact classes, including those with learning disabilities, selected using a cluster random technique among 17 inclusive schools in the regency of Surakarta. The two…
ERIC Educational Resources Information Center
Fabunmi, Martins; Erwat, Eseza Akiror
2008-01-01
This study investigated through empirical methods the extent to which information acquisition and information management capacity of administrators in tertiary institutions in South-Western Nigeria contributed to their decision-making effectiveness. It adopted the ex post facto survey research design, using the random sampling technique to select…
Coping with Resource Management Challenges in Mumias Sub-County, Kakamega County, Kenya
ERIC Educational Resources Information Center
Anyango, Onginjo Rose; Orodho, John Aluko
2016-01-01
The aim of the study was to examine the main coping strategies used to manage resources in public secondary schools in Mumias Sub-County, Kakamega County, Kenya. The study was premised on Hunt's (2007) theory of project management. A descriptive survey design was adopted. A combination of purposive and simple random sampling techniques was used…
ERIC Educational Resources Information Center
Alimi, Olatunji Sabitu; Ehinola, Gabriel Babatunde; Alabi, Festus Oluwole
2012-01-01
The study investigated the influence of school types and facilities on students' academic performance in Ondo State. It was designed to find out whether facilities and students' academic performance are related in private and public secondary schools respectively. Descriptive survey design was used. Proportionate random sampling technique was used…
Students' and Human Rights Awareness in Secondary Schools' Environment in Delta State
ERIC Educational Resources Information Center
Akiri, Agharuwhe A.
2013-01-01
The paper reviewed the concept of human rights, possible origin and relevance to human society in general and the school system in particular. It evaluated people's level of awareness of these rights amongst students and teachers of secondary schools in Delta Central Senatorial District. The stratified random sampling technique was adopted to…
Determinants of Differing Teacher Attitudes towards Inclusive Education Practice
ERIC Educational Resources Information Center
Gyimah, Emmanuel K.; Ackah, Francis R., Jr.; Yarquah, John A.
2010-01-01
An examination of literature reveals that teacher attitude is fundamental to the practice of inclusive education. In order to verify the extent to which the assertion is applicable in Ghana, 132 teachers were selected from 16 regular schools in the Cape Coast Metropolis using purposive and simple random sampling techniques to respond to a four…
A preclustering-based ensemble learning technique for acute appendicitis diagnoses.
Lee, Yen-Hsien; Hu, Paul Jen-Hwa; Cheng, Tsang-Hsiang; Huang, Te-Chia; Chuang, Wei-Yao
2013-06-01
Acute appendicitis is a common medical condition, whose effective, timely diagnosis can be difficult. A missed diagnosis not only puts the patient in danger but also requires additional resources for corrective treatments. An acute appendicitis diagnosis constitutes a classification problem, for which a further fundamental challenge pertains to the skewed outcome class distribution of instances in the training sample. A preclustering-based ensemble learning (PEL) technique aims to address the associated imbalanced sample learning problems and thereby support the timely, accurate diagnosis of acute appendicitis. The proposed PEL technique employs undersampling to reduce the number of majority-class instances in a training sample, uses preclustering to group similar majority-class instances into multiple groups, and selects from each group representative instances to create more balanced samples. The PEL technique thereby reduces potential information loss from random undersampling. It also takes advantage of ensemble learning to improve performance. We empirically evaluate this proposed technique with 574 clinical cases obtained from a comprehensive tertiary hospital in southern Taiwan, using several prevalent techniques and a salient scoring system as benchmarks. The comparative results show that PEL is more effective and less biased than any benchmarks. The proposed PEL technique seems more sensitive to identifying positive acute appendicitis than the commonly used Alvarado scoring system and exhibits higher specificity in identifying negative acute appendicitis. In addition, the sensitivity and specificity values of PEL appear higher than those of the investigated benchmarks that follow the resampling approach. Our analysis suggests PEL benefits from the more representative majority-class instances in the training sample. According to our overall evaluation results, PEL records the best overall performance, and its area under the curve measure reaches 0.619. The PEL technique is capable of addressing imbalanced sample learning associated with acute appendicitis diagnosis. Our evaluation results suggest PEL is less biased toward a positive or negative class than the investigated benchmark techniques. In addition, our results indicate the overall effectiveness of the proposed technique, compared with prevalent scoring systems or salient classification techniques that follow the resampling approach.
Discriminant forest classification method and system
Chen, Barry Y.; Hanley, William G.; Lemmond, Tracy D.; Hiller, Lawrence J.; Knapp, David A.; Mugge, Marshall J.
2012-11-06
A hybrid machine learning methodology and system for classification that combines classical random forest (RF) methodology with discriminant analysis (DA) techniques to provide enhanced classification capability. A DA technique which uses feature measurements of an object to predict its class membership, such as linear discriminant analysis (LDA) or Andersen-Bahadur linear discriminant technique (AB), is used to split the data at each node in each of its classification trees to train and grow the trees and the forest. When training is finished, a set of n DA-based decision trees of a discriminant forest is produced for use in predicting the classification of new samples of unknown class.
As-built design specification for proportion estimate software subsystem
NASA Technical Reports Server (NTRS)
Obrien, S. (Principal Investigator)
1980-01-01
The Proportion Estimate Processor evaluates four estimation techniques in order to get an improved estimate of the proportion of a scene that is planted in a selected crop. The four techniques to be evaluated were provided by the techniques development section and are: (1) random sampling; (2) proportional allocation, relative count estimate; (3) proportional allocation, Bayesian estimate; and (4) sequential Bayesian allocation. The user is given two options for computation of the estimated mean square error. These are referred to as the cluster calculation option and the segment calculation option. The software for the Proportion Estimate Processor is operational on the IBM 3031 computer.
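A toy sketch of two of the four estimators named above, simple random sampling and proportional allocation, applied to a synthetic 0/1 crop map split into invented segment strata; the relative count, Bayesian, and sequential Bayesian variants are not reproduced.

```python
import numpy as np

# Synthetic "scene": pixels are 1 if planted in the selected crop, grouped into strata.
rng = np.random.default_rng(0)
strata_sizes = np.array([4000, 3000, 2000, 1000])       # pixels per stratum (invented)
strata_p = np.array([0.10, 0.35, 0.60, 0.85])           # true crop proportion per stratum
pixels = np.concatenate([rng.random(n) < p for n, p in zip(strata_sizes, strata_p)])
labels = np.repeat(np.arange(4), strata_sizes)
true_p = pixels.mean()

n_sample = 400
# (1) simple random sampling over the whole scene
srs = rng.choice(pixels, size=n_sample, replace=False).mean()
# (2) proportional allocation: each stratum sampled in proportion to its size
alloc = np.round(n_sample * strata_sizes / strata_sizes.sum()).astype(int)
strat = sum(rng.choice(pixels[labels == h], size=nh, replace=False).mean() * strata_sizes[h]
            for h, nh in enumerate(alloc)) / strata_sizes.sum()
print(true_p, srs, strat)
```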
Thoma, Nathan C; Cecero, John J
2009-12-01
This study sought to investigate the extent to which therapists endorse techniques outside of their self-identified orientation and which techniques are endorsed across orientations. A survey consisting of 127 techniques from 8 major theories of psychotherapy was administered via U.S. mail to a national random sample of doctoral-level psychotherapy practitioners. The 201 participants endorsed substantial numbers of techniques from outside their respective orientations. Many of these techniques were quite different from those of the core theories of the respective orientations. Further examining when and why experienced practitioners switch to techniques outside their primary orientation may help reveal where certain techniques fall short and where others excel, indicating a need for further research that taps the collective experience of practitioners.
NASA Astrophysics Data System (ADS)
Min, M.
2017-10-01
Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line lists for these molecules. The line lists available today contain, for many species, up to several billion lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, for all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focuses the computation time on the strongest lines, while still maintaining the continuum contribution of the large number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (~3.5 × 10^5 lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
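A hedged sketch of the two ingredients of such a scheme: a Voigt profile evaluated via the Faddeeva function, and a per-line sample budget that grows with line strength. The strength-to-sample-count rule below is an assumption for illustration, not the paper's prescription.

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Area-normalised Voigt profile with Gaussian width sigma and Lorentzian HWHM gamma."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
line_strengths = rng.pareto(1.5, size=10_000)            # heavy-tailed, like real line lists
# Assumed rule: sample count proportional to strength, with a small floor for weak lines.
n_samples = np.clip((1000 * line_strengths / line_strengths.max()).astype(int), 4, None)
print("total profile evaluations:", n_samples.sum())      # strong lines get most of the work

x = np.linspace(-5, 5, 201)
profile = voigt(x, sigma=1.0, gamma=0.5)
print("profile integral ~ 1:", profile.sum() * (x[1] - x[0]))
```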
Improved sampling and analysis of images in corneal confocal microscopy.
Schaldemose, E L; Fontain, F I; Karlsson, P; Nyengaard, J R
2017-10-01
Corneal confocal microscopy (CCM) is a noninvasive clinical method to analyse and quantify corneal nerve fibres in vivo. Although the CCM technique is in constant progress, there are methodological limitations in terms of sampling of images and objectivity of the nerve quantification. The aim of this study was to present a randomized sampling method for the CCM images and to develop an adjusted area-dependent image analysis. Furthermore, a manual nerve fibre analysis method was compared to a fully automated method. 23 idiopathic small-fibre neuropathy patients were investigated using CCM. Corneal nerve fibre length density (CNFL) and corneal nerve fibre branch density (CNBD) were determined in both a manual and an automatic manner. Differences in CNFL and CNBD between (1) the randomized and the most common sampling method, (2) the adjusted and the unadjusted area and (3) the manual and automated quantification method were investigated. The CNFL values were significantly lower when using the randomized sampling method compared to the most common method (p = 0.01). There was no statistically significant difference in the CNBD values between the randomized and the most common sampling method (p = 0.85). CNFL and CNBD values were increased when using the adjusted area compared to the standard area. Additionally, the study found a significant increase in the CNFL and CNBD values when using the manual method compared to the automatic method (p ≤ 0.001). The study demonstrated a significant difference in the CNFL values between the randomized and common sampling method, indicating the importance of clear guidelines for the image sampling. The increase in CNFL and CNBD values when using the adjusted cornea area is not surprising. The observed increases in both CNFL and CNBD values when using the manual method of nerve quantification compared to the automatic method are consistent with earlier findings. This study underlines the importance of improving the analysis of the CCM images in order to obtain more objective corneal nerve fibre measurements.
Compressive sensing based wireless sensor for structural health monitoring
NASA Astrophysics Data System (ADS)
Bao, Yuequan; Zou, Zilong; Li, Hui
2014-03-01
Data loss is a common problem for monitoring systems based on wireless sensors. Reliable communication protocols, which enhance communication reliability by repetitively transmitting unreceived packets, are one approach to tackle the problem of data loss. An alternative approach allows data loss to some extent and seeks to recover the lost data from an algorithmic point of view. Compressive sensing (CS) provides such a data loss recovery technique. This technique can be embedded into smart wireless sensors and effectively increases wireless communication reliability without retransmitting the data. The basic idea of the CS-based approach is that, instead of transmitting the raw signal acquired by the sensor, a transformed signal that is generated by projecting the raw signal onto a random matrix is transmitted. Some data loss may occur during the transmission of this transformed signal. However, according to the theory of CS, the raw signal can be effectively reconstructed from the received incomplete transformed signal, given that the raw signal is compressible in some basis and the data loss ratio is low. This CS-based technique is implemented on the Imote2 smart sensor platform using the foundation of the Illinois Structural Health Monitoring Project (ISHMP) Service Tool-suite. To overcome the constraints of the limited onboard resources of wireless sensor nodes, a method called random demodulator (RD) is employed to provide memory- and power-efficient construction of the random sampling matrix. The RD sampling matrix is adapted to accommodate data loss in wireless transmission and meet the objectives of the data recovery. The embedded program is tested in a series of sensing and communication experiments. Examples and a parametric study are presented to demonstrate the applicability of the embedded program as well as to show the efficacy of CS-based data loss recovery for real wireless SHM systems.
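A hedged numerical sketch of the CS idea described above: random projections of a signal that is sparse in a DCT basis are "transmitted", a fraction of them is lost, and the signal is reconstructed from the surviving measurements. Orthogonal Matching Pursuit is used here as a simple stand-in for whatever recovery solver the deployed system uses.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, m = 256, 128
Psi = idct(np.eye(n), norm="ortho", axis=0)        # DCT synthesis basis (columns are atoms)
c_true = np.zeros(n)
c_true[[5, 20, 37]] = [1.0, -0.7, 0.4]             # sparse coefficients
signal = Psi @ c_true                              # "raw" sensor record

Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random projection (sensing) matrix
y = Phi @ signal                                   # transmitted measurements
keep = rng.random(m) > 0.2                         # roughly 20% packet loss
A, y_r = Phi[keep] @ Psi, y[keep]

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse solution of y ~ A c."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    c = np.zeros(A.shape[1])
    c[support] = sol
    return c

signal_hat = Psi @ omp(A, y_r, k=5)
print("relative error:", np.linalg.norm(signal_hat - signal) / np.linalg.norm(signal))
```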
NASA Astrophysics Data System (ADS)
Alyassin, Abdal M.
2002-05-01
3D digital mammography (3DDM) is a new technology that provides high resolution X-ray breast tomographic data. Like any other tomographic medical imaging modality, viewing a stack of tomographic images may require time, especially if the images are of large matrix size, and it may be difficult to mentally assemble 3D breast structures from the stack. Therefore, there is a need to readily visualize the data in 3D. However, one of the issues that hinder the usage of volume rendering (VR) is finding an automatic way to generate transfer functions that efficiently map the important diagnostic information in the data. We have developed a method that randomly samples the volume. Based on the mean and the standard deviation of these samples, the technique determines the lower limit and upper limit of a piecewise linear ramp transfer function. We have volume rendered several 3DDM datasets using this technique and visually compared the outcome with the result from a conventional automatic technique. The transfer function generated through the proposed technique provided superior VR images compared with the conventional technique. Furthermore, the improvement in the reproducibility of the transfer function correlated with the number of samples taken from the volume, at the expense of the processing time.
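A minimal sketch of the sampling step described above: random voxels are drawn from the volume, and the lower and upper limits of a piecewise linear ramp transfer function are set from their mean and standard deviation. The ±2σ window is an assumed choice, not the paper's exact rule, and the volume is a synthetic stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.normal(1000, 200, size=(128, 128, 64)).astype(np.float32)  # stand-in 3DDM data

samples = rng.choice(volume.ravel(), size=10_000, replace=False)        # random voxel samples
mu, sd = samples.mean(), samples.std()
lower, upper = mu - 2 * sd, mu + 2 * sd                                 # assumed +/- 2 sigma window

def opacity(v):
    """Piecewise-linear ramp: 0 below `lower`, rising to 1 at `upper`."""
    return np.clip((v - lower) / (upper - lower), 0.0, 1.0)

print(lower, upper, opacity(np.array([lower - 1, (lower + upper) / 2, upper + 1])))
```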
Bang, Ji Young; Navaneethan, Udayakumar; Hasan, Muhammad K; Hawes, Robert; Varadarajulu, Shyam
2018-03-11
Outcomes of endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) evaluation vary with technique, needles, and methods of specimen evaluation. We performed a direct comparison of diagnostic yields of EUS-FNA samples collected using different gauge needles (22- vs 25-gauge), with or without suction. We performed a randomized controlled study of 352 patients with suspected pancreatic masses, referred for EUS-FNA at a tertiary referral center. Patients were randomly assigned to 22-gauge needles with or without suction or 25-gauge needles with or without suction. Specimens were evaluated offsite by cell block and rapid onsite cytologic evaluation (ROSE). Final diagnoses were made based on histologic analyses or 12-month follow-up evaluations. The primary outcome was diagnostic adequacy of cell blocks. Secondary outcomes were operating characteristics of ROSE and EUS-FNA, number of passes required for accurate onsite diagnosis, and amount of blood in specimens. The final diagnoses were malignancy (81.5% of patients) and benign disease (17.0% of patients); 1.4% of patients were lost during follow up. Cell block, ROSE, and EUS-FNA led to diagnostic accuracies of 71.9%, 95.5%, and 96.6%, respectively. A 22-gauge needle with suction was associated with more passes for adequate onsite diagnosis (P = .003) and specimens contained more blood (P = .01). Diagnostic accuracy of specimens collected by transduodenal EUS-FNA was lower with 22-gauge needles with suction compared to other techniques (P = .004). In a randomized trial of patients undergoing EUS-FNA for pancreatic masses, samples collected with 22-gauge vs 25-gauge needles performed equally well for offsite specimen evaluation. Use of suction appears to increase number of passes needed and specimen bloodiness. Specimen collection techniques should be individualized based on method of evaluation. ClinicalTrials.gov no: NCT02424838.
Novel application of the MSSCP method in biodiversity studies.
Tomczyk-Żak, Karolina; Kaczanowski, Szymon; Górecka, Magdalena; Zielenkiewicz, Urszula
2012-02-01
Analysis of 16S rRNA sequence diversity is widely performed for characterizing the biodiversity of microbial samples. The number of determined sequences has a considerable impact on complete results. Although the cost of mass sequencing is decreasing, it is often still too high for individual projects. We applied the multi-temperature single-strand conformational polymorphism (MSSCP) method to decrease the number of analysed sequences. This was a novel application of this method. As a control, the same sample was analysed using random sequencing. In this paper, we adapted the MSSCP technique for screening of unique sequences of the 16S rRNA gene library and bacterial strains isolated from biofilms growing on the walls of an ancient gold mine in Poland and determined whether the results obtained by both methods differed and whether random sequencing could be replaced by MSSCP. Although it was biased towards the detection of rare sequences in the samples, the qualitative results of MSSCP were not different than those of random sequencing. Unambiguous discrimination of unique clones and strains creates an opportunity to effectively estimate the biodiversity of natural communities, especially in populations which are numerous but species poor.
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
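A hedged sketch of the "pre-sampling plus simulation" idea described above: synthetic per-tree gall counts are drawn from a clumped (negative binomial) distribution standing in for a fit to pre-sampling data, and the error of fixed-size random sampling plans is checked for candidate sample sizes. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_galls, k = 4.0, 0.8                        # per-tree mean and clumping parameter (assumed)
p = k / (k + mean_galls)                        # numpy's negative_binomial parameterisation

def plan_error(n_trees, n_reps=2000):
    """Mean absolute error of the sample mean for a random sample of n_trees trees."""
    counts = rng.negative_binomial(k, p, size=(n_reps, n_trees))
    return np.abs(counts.mean(axis=1) - mean_galls).mean()

for n in (10, 25, 40, 100):
    print(n, round(plan_error(n), 3))           # error shrinks as the per-site sample grows
```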
Grzech-Leśniak, Kinga; Matys, Jacek; Jurczyszyn, Kamil; Ziółkowski, Piotr; Dominiak, Marzena; Brugnera Junior, Aldo; Romeo, Umberto
2018-06-01
The purpose of this study was the histological and thermometric examination of soft tissue de-epithelialization using a digitally controlled laser handpiece (DCLH), the X-Runner. Commonly used techniques for de-epithelialization include the scalpel, abrasion with a diamond bur, or a combination of the two. Despite being simple, inexpensive and effective, these techniques are invasive and may produce unwanted side effects, so it is important to look for alternative techniques using novel tools that are minimally invasive and effective. 114 porcine samples sized 6 × 6 mm were collected from the attached gingiva (AG) of the alveolar process of the mandible using a 15C scalpel blade. The samples were irradiated by means of an Er:YAG laser (LightWalker, Fotona, Slovenia), using X-Runner and HO 2 handpieces at different parameters: 80, 100, and 140 mJ at 20 Hz, for 6 or 16 sec, respectively. The temperature was measured with a K-type thermocouple. For the histopathological analysis of the efficiency of epithelium removal and thermal injury, 3 random samples were de-epithelialized with an HO 2 handpiece, and 9 random samples with an X-Runner handpiece at different parameters. For the samples irradiated with the DCLH, we used three different settings, which resulted in removing 1 to 3 layers of the soft tissue. The efficiency of epithelium removal and the rise in temperature were analyzed. The DCLH induced a significantly lower temperature increase compared with HO 2 at each energy-to-frequency ratio. The histological examination revealed total epithelium removal when the HO 2 handpiece was used at 100 and 140 mJ/20 Hz and when the DCLH was used for two- and threefold lasing at 80, 100, and 140 mJ/20 Hz. The Er:YAG laser with the DCLH handpiece may be an efficient tool for epithelium removal without excessive thermal damage.
Data in support of the detection of genetically modified organisms (GMOs) in food and feed samples.
Alasaad, Noor; Alzubi, Hussein; Kader, Ahmad Abdul
2016-06-01
Food and feed samples were randomly collected from different sources, including local and imported materials from the Syrian local market. These included maize, barley, soybean, fresh food samples and raw material. GMO detection was conducted by PCR and nested PCR-based techniques using specific primers for the foreign DNA elements most commonly used in genetic transformation procedures, i.e., the 35S promoter, T-nos, and the epsps, cryIA(b) and nptII genes. The results revealed, for the first time in Syria, the presence of GM foods and feeds carrying the glyphosate-resistance trait, with the P35S promoter and NOS terminator detected in the imported soybean samples at high frequency (5 out of the 6 imported soybean samples), while tests showed negative results for the local samples. Tests also revealed the existence of GMOs in two imported maize samples, detecting the presence of the 35S promoter and nos terminator. Nested PCR results using two sets of primers confirmed our data. The methods applied in this data brief are based on DNA analysis by Polymerase Chain Reaction (PCR). This technique is specific, practical, reproducible and sensitive enough to detect as little as 0.1% GMO in food and/or feedstuffs. Furthermore, all of the techniques mentioned are economical and can be applied in Syria and other developing countries. For all these reasons, the DNA-based analysis methods were chosen and preferred over protein-based analysis.
NASA Astrophysics Data System (ADS)
Sudarmin, S.; Selia, E.; Taufiq, M.
2018-03-01
The purpose of this research is to determine the influence of an inquiry learning model on an additives theme with ethnoscience content on students' cultural awareness, and to assess students' responses to the learning. The method applied in this research is quasi-experimental with a non-equivalent control group design. The sampling technique applied in this research is random sampling. The sample consisted of eighth-grade students of one of the junior high schools in Semarang. The results of this research were: (1) the cultural awareness of students in the experimental class was better than that of students in the control class; (2) the inquiry learning model with ethnoscience content strongly influenced students' cultural awareness (by 78%); and (3) students gave positive responses to the inquiry learning model with ethnoscience content. The conclusion of this research is that an inquiry learning model with ethnoscience content has a positive influence on students' cultural awareness.
Use of randomized sampling for analysis of metabolic networks.
Schellenberger, Jan; Palsson, Bernhard Ø
2009-02-27
Genome-scale metabolic network reconstructions in microorganisms have been formulated and studied for about 8 years. The constraint-based approach has shown great promise in analyzing the systemic properties of these network reconstructions. Notably, constraint-based models have been used successfully to predict the phenotypic effects of knock-outs and for metabolic engineering. The inherent uncertainty in both parameters and variables of large-scale models is significant and is well suited to study by Monte Carlo sampling of the solution space. These techniques have been applied extensively to the reaction rate (flux) space of networks, with more recent work focusing on dynamic/kinetic properties. Monte Carlo sampling as an analysis tool has many advantages, including the ability to work with missing data, the ability to apply post-processing techniques, and the ability to quantify uncertainty and to optimize experiments to reduce uncertainty. We present an overview of this emerging area of research in systems biology.
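The sketch below shows the core of one such Monte Carlo sampler, hit-and-run, on a deliberately tiny two-reaction "flux space" {v : Av <= b}; genome-scale work would use the samplers shipped with constraint-based modelling toolboxes, and the constraints here are invented.

```python
import numpy as np

# Hit-and-run sampling of a toy flux polytope: 0 <= v1 <= 10, 0 <= v2 <= 8, v1 + v2 <= 12.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([10.0, 0.0, 8.0, 0.0, 12.0])

def hit_and_run(x0, n_samples, burn=100):
    x, out = x0.copy(), []
    for i in range(n_samples + burn):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)                     # random direction
        with np.errstate(divide="ignore"):
            t = (b - A @ x) / (A @ d)              # step lengths to each constraint
        t_max = t[(A @ d) > 0].min()
        t_min = t[(A @ d) < 0].max()
        x = x + rng.uniform(t_min, t_max) * d      # uniform step along the chord
        if i >= burn:
            out.append(x.copy())
    return np.array(out)

samples = hit_and_run(np.array([1.0, 1.0]), 2000)
print(samples.mean(axis=0), samples.min(axis=0), samples.max(axis=0))
```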
Effect of substrate temperature in the synthesis of BN nanostructures
NASA Astrophysics Data System (ADS)
Sajjad, M.; Zhang, H. X.; Peng, X. Y.; Feng, P. X.
2011-06-01
Boron nitride (BN) nanostructures were grown on molybdenum discs at different substrate temperatures using the short-pulse laser plasma deposition technique. Large numbers of randomly oriented nanorods of fiber-like structures were obtained. The variation in the length and diameter of the nanorods as a function of the substrate temperature was systematically studied. The surface morphologies of the samples were studied using scanning electron microscopy. Energy dispersive x-ray spectroscopy confirmed that both the elements boron and nitrogen are dominant in the nanostructure. The x-ray diffraction (XRD) technique was used to analyse BN phases. The XRD peak that appeared at 26° showed the presence of hexagonal BN phase, whereas the peak at 44° was related to cubic BN content in the samples. Raman spectroscopic analysis showed vibrational modes of sp2- and sp3-type bonding in the sample. The Raman spectra agreed well with XRD results.
A Numerical Simulation of Scattering from One-Dimensional Inhomogeneous Dielectric Random Surfaces
NASA Technical Reports Server (NTRS)
Sarabandi, Kamal; Oh, Yisok; Ulaby, Fawwaz T.
1996-01-01
In this paper, an efficient numerical solution for the scattering problem of inhomogeneous dielectric rough surfaces is presented. The inhomogeneous dielectric random surface represents a bare soil surface and is considered to be comprised of a large number of randomly positioned dielectric humps of different sizes, shapes, and dielectric constants above an impedance surface. Clods with nonuniform moisture content and rocks are modeled by inhomogeneous dielectric humps and the underlying smooth wet soil surface is modeled by an impedance surface. In this technique, an efficient numerical solution for the constituent dielectric humps over an impedance surface is obtained using Green's function derived by the exact image theory in conjunction with the method of moments. The scattered field from a sample of the rough surface is obtained by summing the scattered fields from all the individual humps of the surface coherently ignoring the effect of multiple scattering between the humps. The statistical behavior of the scattering coefficient sigma(sup 0) is obtained from the calculation of scattered fields of many different realizations of the surface. Numerical results are presented for several different roughnesses and dielectric constants of the random surfaces. The numerical technique is verified by comparing the numerical solution with the solution based on the small perturbation method and the physical optics model for homogeneous rough surfaces. This technique can be used to study the behavior of scattering coefficient and phase difference statistics of rough soil surfaces for which no analytical solution exists.
Pseudo-Random Sequence Modifications for Ion Mobility Orthogonal Time of Flight Mass Spectrometry
Clowers, Brian H.; Belov, Mikhail E.; Prior, David C.; Danielson, William F.; Ibrahim, Yehia; Smith, Richard D.
2008-01-01
Due to the inherently low duty cycle of ion mobility spectrometry (IMS) experiments that sample from continuous ion sources, a range of experimental advances have been developed to maximize ion utilization efficiency. The use of ion trapping mechanisms prior to the ion mobility drift tube has demonstrated significant gains over discrete sampling from continuous sources; however, these technologies have traditionally relied upon signal averaging to attain analytically relevant signal-to-noise ratios (SNR). Multiplexed (MP) techniques based upon the Hadamard transform offer an alternative experimental approach by which ion utilization efficiency can be elevated to ∼50%. Recently, our research group demonstrated a unique multiplexed ion mobility time-of-flight (MP-IMS-TOF) approach that incorporates ion trapping and can extend ion utilization efficiency beyond 50%. However, the spectral reconstruction of the multiplexed signal using this experimental approach requires the use of sample-specific weighing designs. Though general weighing designs have been shown to significantly enhance ion utilization efficiency using this MP technique, such weighing designs cannot be applied to all samples. By modifying both the ion funnel trap and the pseudo-random sequence (PRS) used for the MP experiment, we have eliminated the need for complex weighing matrices. For both simple and complex mixtures, SNR enhancements of up to 13 were routinely observed as compared to the SA-IMS-TOF experiment. In addition, this new class of PRS provides a twofold enhancement in ion throughput compared to the traditional HT-IMS experiment. PMID:18311942
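A toy illustration of the underlying Hadamard/S-matrix multiplexing idea (not the paper's modified PRS or weighing designs): a sparse arrival-time spectrum is encoded by an S-matrix derived from a Hadamard matrix, demultiplexed by inversion, and compared against a conventional single-gate measurement with the same detector noise. The matrix order, noise level, and spectrum are invented.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
n = 63                                    # S-matrix order (2^k - 1)
H = hadamard(n + 1)                       # +1/-1 Sylvester Hadamard matrix of order 64
S = (1 - H[1:, 1:]) // 2                  # 0/1 simplex (S) matrix: the gate open/closed pattern

x = np.zeros(n)
x[[7, 20, 41]] = [100.0, 60.0, 30.0]      # three ion mobility peaks (arbitrary units)
noise = rng.normal(0.0, 5.0, size=n)      # detector noise, same level in both experiments

y_mp = S @ x + noise                      # multiplexed measurement (many gates open per scan)
x_mp = np.linalg.solve(S, y_mp)           # demultiplex by inverting the encoding

y_single = x + noise                      # conventional single-gate measurement
print("multiplexed RMS error  :", np.sqrt(np.mean((x_mp - x) ** 2)))
print("single-gate RMS error  :", np.sqrt(np.mean((y_single - x) ** 2)))
```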
Network Sampling with Memory: A proposal for more efficient sampling from social networks
Mouw, Ted; Verdery, Ashton M.
2013-01-01
Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246
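To make the design-effect figures above concrete: DE is the variance of an estimator under the actual sampling design divided by the variance a simple random sample of the same size would achieve. A minimal sketch, assuming the design variance is approximated from replicate draws in a simulation; the prevalence, sample size, and replicate estimates are invented for illustration, not Add Health or Facebook values.

```python
import numpy as np

def design_effect(replicate_estimates, true_prevalence, n_per_sample):
    """DE = variance of the design's estimator / variance of SRS of the same size."""
    var_design = np.var(replicate_estimates, ddof=1)               # across repeated draws of the design
    var_srs = true_prevalence * (1 - true_prevalence) / n_per_sample
    return var_design / var_srs

rng = np.random.default_rng(1)
# Stand-in for prevalence estimates from 200 repeated network samples of size 500
estimates = rng.normal(0.30, 0.03, size=200)
print(round(design_effect(estimates, true_prevalence=0.30, n_per_sample=500), 2))
```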
NASA Astrophysics Data System (ADS)
Bhosale, Parag; Staring, Marius; Al-Ars, Zaid; Berendsen, Floris F.
2018-03-01
Currently, non-rigid image registration algorithms are too computationally intensive to use in time-critical applications. Existing implementations that focus on speed typically address this either by parallelization on GPU hardware or by introducing methodically novel techniques into CPU-oriented algorithms. Stochastic gradient descent (SGD) optimization and variations thereof have proven to drastically reduce the computational burden for CPU-based image registration, but have not been successfully applied on GPU hardware due to their stochastic nature. This paper proposes 1) NiftyRegSGD, an SGD optimization for the GPU-based image registration tool NiftyReg, and 2) the random chunk sampler, a new random sampling strategy that better utilizes the memory bandwidth of GPU hardware. Experiments were performed on 3D lung CT data of 19 patients, comparing NiftyRegSGD (with and without the random chunk sampler) with CPU-based elastix Fast Adaptive SGD (FASGD) and NiftyReg. The registration runtime was 21.5s, 4.4s and 2.8s for elastix-FASGD, NiftyRegSGD without, and NiftyRegSGD with random chunk sampling, respectively, while similar accuracy was obtained. Our method is publicly available at https://github.com/SuperElastix/NiftyRegSGD.
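The core idea behind a chunk-based random sampler can be sketched in a few lines: instead of drawing isolated voxels (scattered memory reads), draw contiguous blocks of voxel indices so that reads coalesce in GPU memory. This NumPy sketch only illustrates the sampling pattern under assumed sizes; it is not the NiftyRegSGD implementation.

```python
import numpy as np

def random_chunk_sample(n_voxels, n_samples, chunk_size, rng):
    """Draw ~n_samples voxel indices as contiguous chunks rather than scattered points.

    Contiguous chunks map to coalesced memory reads on GPU hardware, which is the
    motivation for a chunk-based sampler over a fully random voxel sample.
    """
    n_chunks = max(1, n_samples // chunk_size)
    starts = rng.integers(0, n_voxels - chunk_size, size=n_chunks)    # random chunk origins
    idx = (starts[:, None] + np.arange(chunk_size)[None, :]).ravel()  # expand each chunk
    return np.unique(idx)                                             # drop overlaps

rng = np.random.default_rng(0)
idx = random_chunk_sample(n_voxels=256 ** 3, n_samples=20000, chunk_size=128, rng=rng)
print(idx.size, idx[:4])
```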
NASA Astrophysics Data System (ADS)
Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.
2006-12-01
Characterization of uncertainty associated with groundwater quality models is often of critical importance, as for example in cases where environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving different natures of parametric variability and uncertainty. General MCS, or variants of MCS such as Latin Hypercube Sampling (LHS), treats variability and uncertainty as a single random entity, and the generated samples are treated as crisp, with vagueness assumed to be randomness. Also, when the models are used as purely predictive tools, uncertainty and variability lead to the need for assessment of the plausible range of model outputs. An improved, systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates, and can aid in assessing how various possible model estimates should be weighed. The present study aims to introduce Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness and statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness and by confidence intervals via α-cuts. An important property of this theory is its ability to merge the inexactly generated data of the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results produces indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is validated by assessing uncertainty propagation of parameter values for estimation of the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
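For reference, classical (crisp) LHS, the procedure FLHS extends, stratifies each variable's range into n equal-probability intervals, draws one point per interval, and pairs the intervals across variables at random. A minimal sketch on the unit hypercube (the fuzzy extension with membership functions and α-cuts is not shown):

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """Classical Latin Hypercube Sample on [0, 1)^n_vars.

    Each column stratifies [0, 1) into n_samples equal intervals, draws one uniform
    point inside every interval, and shuffles the interval order independently.
    """
    u = rng.random((n_samples, n_vars))                           # position within each stratum
    strata = np.argsort(rng.random((n_samples, n_vars)), axis=0)  # random permutation per column
    return (strata + u) / n_samples

rng = np.random.default_rng(42)
design = latin_hypercube(n_samples=10, n_vars=2, rng=rng)
print(design)   # each column has exactly one point in every tenth of [0, 1)
```

To sample a non-uniform input, each column would then be pushed through the inverse CDF of that parameter's distribution.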
A tale of two "forests": random forest machine learning AIDS tropical forest carbon mapping.
Mascaro, Joseph; Asner, Gregory P; Knapp, David E; Kennedy-Bowdoin, Ty; Martin, Roberta E; Anderson, Christopher; Higgins, Mark; Chadwick, K Dana
2014-01-01
Accurate and spatially-explicit maps of tropical forest carbon stocks are needed to implement carbon offset mechanisms such as REDD+ (Reduced Deforestation and Degradation Plus). The Random Forest machine learning algorithm may aid carbon mapping applications using remotely-sensed data. However, Random Forest has never been compared to traditional and potentially more reliable techniques such as regionally stratified sampling and upscaling, and it has rarely been employed with spatial data. Here, we evaluated the performance of Random Forest in upscaling airborne LiDAR (Light Detection and Ranging)-based carbon estimates compared to the stratification approach over a 16-million hectare focal area of the Western Amazon. We considered two runs of Random Forest, both with and without spatial contextual modeling by including--in the latter case--x, and y position directly in the model. In each case, we set aside 8 million hectares (i.e., half of the focal area) for validation; this rigorous test of Random Forest went above and beyond the internal validation normally compiled by the algorithm (i.e., called "out-of-bag"), which proved insufficient for this spatial application. In this heterogeneous region of Northern Peru, the model with spatial context was the best preforming run of Random Forest, and explained 59% of LiDAR-based carbon estimates within the validation area, compared to 37% for stratification or 43% by Random Forest without spatial context. With the 60% improvement in explained variation, RMSE against validation LiDAR samples improved from 33 to 26 Mg C ha(-1) when using Random Forest with spatial context. Our results suggest that spatial context should be considered when using Random Forest, and that doing so may result in substantially improved carbon stock modeling for purposes of climate change mitigation.
A Tale of Two “Forests”: Random Forest Machine Learning Aids Tropical Forest Carbon Mapping
Mascaro, Joseph; Asner, Gregory P.; Knapp, David E.; Kennedy-Bowdoin, Ty; Martin, Roberta E.; Anderson, Christopher; Higgins, Mark; Chadwick, K. Dana
2014-01-01
Accurate and spatially-explicit maps of tropical forest carbon stocks are needed to implement carbon offset mechanisms such as REDD+ (Reduced Deforestation and Degradation Plus). The Random Forest machine learning algorithm may aid carbon mapping applications using remotely-sensed data. However, Random Forest has never been compared to traditional and potentially more reliable techniques such as regionally stratified sampling and upscaling, and it has rarely been employed with spatial data. Here, we evaluated the performance of Random Forest in upscaling airborne LiDAR (Light Detection and Ranging)-based carbon estimates compared to the stratification approach over a 16-million hectare focal area of the Western Amazon. We considered two runs of Random Forest, both with and without spatial contextual modeling by including, in the latter case, x and y position directly in the model. In each case, we set aside 8 million hectares (i.e., half of the focal area) for validation; this rigorous test of Random Forest went above and beyond the internal validation normally compiled by the algorithm (i.e., called "out-of-bag"), which proved insufficient for this spatial application. In this heterogeneous region of Northern Peru, the model with spatial context was the best performing run of Random Forest, and explained 59% of LiDAR-based carbon estimates within the validation area, compared to 37% for stratification or 43% by Random Forest without spatial context. With the 60% improvement in explained variation, RMSE against validation LiDAR samples improved from 33 to 26 Mg C ha−1 when using Random Forest with spatial context. Our results suggest that spatial context should be considered when using Random Forest, and that doing so may result in substantially improved carbon stock modeling for purposes of climate change mitigation. PMID:24489686
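The "spatial context" run amounts to appending the plot coordinates to the predictor matrix and validating on held-out plots. A hedged scikit-learn sketch of that idea with synthetic stand-in data; the covariates, coordinates, and carbon values are invented, not the Amazon LiDAR data, and the hold-out here is a random half rather than the study's spatially contiguous half.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
xy = rng.uniform(0, 100, size=(n, 2))                    # plot coordinates (km), synthetic
covariates = rng.normal(size=(n, 4))                     # stand-ins for remote-sensing layers
carbon = 80 + 0.3 * xy[:, 0] + 5 * covariates[:, 0] + rng.normal(0, 10, n)  # toy "LiDAR carbon"

X_spatial = np.hstack([covariates, xy])                  # "with spatial context": x, y as features
train = rng.random(n) < 0.5                              # hold out half of the plots for validation

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X_spatial[train], carbon[train])
print(round(rf.score(X_spatial[~train], carbon[~train]), 2))   # R^2 on the held-out plots
```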
Yarmus, Lonny B; Semaan, Roy W; Arias, Sixto A; Feller-Kopman, David; Ortiz, Ricardo; Bösmüller, Hans; Illei, Peter B; Frimpong, Bernice O; Oakjones-Burgess, Karen; Lee, Hans J
2016-08-01
Transbronchial forceps biopsy (FBx) has been the preferred method for obtaining bronchoscopic lung biopsy specimens. Cryoprobe biopsy (CBx) has been shown to obtain larger and higher quality samples, but is limited by its inability to retrieve the sample through the working channel of the bronchoscope, requiring the bronchoscope to leave the airway for sample retrieval. We evaluated a novel device using a sheath cryobiopsy (SCBx). This method allows for specimen retrieval through the working channel of the bronchoscope, with the scope remaining inside the airway. This prospective, randomized controlled, single-blinded porcine study compared a 1.1-mm SCBx probe, a 1.9-mm CBx probe, and 2.0-mm FBx forceps. Histologic accessibility, sample quantity and quality, number of attempts to acquire and retrieve samples, cryoprobe activation time, fluoroscopy activation time, technical feasibility, and complications were compared. Samples adequate for standard pathologic processing were retrieved with 82.1% of the SCBx specimens, 82.9% of the CBx specimens, and 30% of the FBx specimens. The histologic accessibility of both SCBx (P = .0002) and CBx (P = .0003) was superior to FBx. Procedure time for FBx was faster than for both SCBx and CBx, but SCBx was significantly faster than CBx (P < .0001). Fluoroscopy time was lower for both SCBx and CBx compared with FBx. There were no significant bleeding events. SCBx is a feasible technique providing a higher quality lung biopsy specimen compared with FBx and can successfully be retrieved through the working channel. Human studies are needed to further assess this technique with additional safety data. Copyright © 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
Ali, Abid; Shakil-Ur-Rehman, Syed; Sibtain, Fozia
2014-07-01
The aim was to determine the efficacy of Sustained Natural Apophyseal Glides (SNAGs) with and without an Isometric Exercise Training Program (IETP) in Non-specific Neck Pain (NSNP). This randomized controlled trial of one year's duration was conducted at the out-patient department of Physiotherapy and Rehabilitation, Khyber Teaching Hospital (KTH), Peshawar, Pakistan, from July 2012 to June 2013. A sample of 102 patients with NSNP was selected through a simple random sampling technique and placed into two groups. The SNAGs manual physical therapy technique with IETP was applied to 51 patients in group A, and the SNAGs technique alone was applied to 51 patients in group B. The duration of intervention was 6 weeks, at 4 sessions per week. The Neck Disability Index (NDI) and Visual Analog Scale (VAS) for neck pain were the assessment tools used for all patients before intervention and at the completion of the 6-week program. The data of all 102 patients were analyzed with SPSS-20, and statistical tests were applied at the 95% level of significance to determine the efficacy of both treatment interventions and to compare them with each other. The patients in group A, treated with SNAGs followed by IETP for 6 weeks, demonstrated more improvement in pain and physical activity as assessed by VAS (p=0.013) and NDI (p=0.003), compared with the patients treated with SNAGs alone, for whom pain and function were assessed by VAS (p=0.047) and NDI (p=0.164). In group A the NDI score improved from 40 to 15 and VAS from 7 to 4, while in group B the NDI score improved from 42 to 30 and VAS from 7 to 4. In patients with non-specific neck pain, SNAGs manual physical therapy followed by IETP was more effective in reducing pain and enhancing function than SNAGs manual physical therapy alone.
Dolch, Michael E; Janitza, Silke; Boulesteix, Anne-Laure; Graßmann-Lichtenauer, Carola; Praun, Siegfried; Denzer, Wolfgang; Schelling, Gustav; Schubert, Sören
2016-12-01
Identification of microorganisms in positive blood cultures still relies on standard techniques such as Gram staining followed by culturing with definite microorganism identification. Alternatively, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry or the analysis of headspace volatile compound (VC) composition produced by cultures can help to differentiate between microorganisms under experimental conditions. This study assessed the efficacy of volatile compound based microorganism differentiation into Gram-negatives and -positives in unselected positive blood culture samples from patients. Headspace gas samples of positive blood culture samples were transferred to sterilized, sealed, and evacuated 20 ml glass vials and stored at -30 °C until batch analysis. Headspace gas VC content analysis was carried out via an auto sampler connected to an ion-molecule reaction mass spectrometer (IMR-MS). Measurements covered a mass range from 16 to 135 u including CO2, H2, N2, and O2. Prediction rules for microorganism identification based on VC composition were derived using a training data set and evaluated using a validation data set within a random split validation procedure. One-hundred-fifty-two aerobic samples growing 27 Gram-negatives, 106 Gram-positives, and 19 fungi and 130 anaerobic samples growing 37 Gram-negatives, 91 Gram-positives, and two fungi were analysed. In anaerobic samples, ten discriminators were identified by the random forest method allowing for bacteria differentiation into Gram-negative and -positive (error rate: 16.7 % in validation data set). For aerobic samples the error rate was not better than random. In anaerobic blood culture samples of patients IMR-MS based headspace VC composition analysis facilitates bacteria differentiation into Gram-negative and -positive.
Analysis of sampling techniques for imbalanced data: An n = 648 ADNI study.
Dubey, Rashmi; Zhou, Jiayu; Wang, Yalin; Thompson, Paul M; Ye, Jieping
2014-02-15
Many neuroimaging applications deal with imbalanced imaging data. For example, in Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly two times the Alzheimer's disease (AD) patients for structural magnetic resonance imaging (MRI) modality and six times the control cases for proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize the overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and a combination of over and undersampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two different classifiers including Random Forest and Support Vector Machines based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity measures. Our extensive experimental results show that for various problem settings in ADNI, (1) a balanced training set obtained with K-Medoids technique based undersampling gives the best overall performance among different data sampling techniques and no sampling approach; and (2) sparse logistic regression with stability selection achieves competitive performance among various feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. © 2013 Elsevier Inc. All rights reserved.
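As one concrete instance of the sampling strategies compared above, the sketch below balances a synthetic imbalanced training set by random undersampling of the majority class and scores a Random Forest by AUC; the K-Medoids undersampling and stability-selection steps of the full ensemble are not shown, and all data here are simulated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def random_undersample(X, y, rng):
    """Drop majority-class (label 0) rows at random until both classes are equal in size."""
    idx_min = np.flatnonzero(y == 1)
    idx_maj = np.flatnonzero(y == 0)
    keep = np.concatenate([idx_min, rng.choice(idx_maj, size=idx_min.size, replace=False)])
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + rng.normal(0, 2, 2000) > 1.8).astype(int)     # ~20% minority class, synthetic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = random_undersample(X_tr, y_tr, rng)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 2))
```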
Jiang, Qian; Qiu, Yating; Yang, Chi; Yang, Jingyun; Chen, Minjie; Zhang, Zhiyuan
2015-10-01
Impacted third molars are frequently encountered in clinical work. Surgical removal of impacted third molars is often required to prevent clinical symptoms. Traditional rotary cutting instruments are potentially injurious, and piezosurgery, as a new osteotomy technique, has been introduced in oral and maxillofacial surgery. No consistent conclusion has been reached regarding whether this new technique is associated with fewer or less severe postoperative sequelae after third molar extraction. The aim of this study was to compare piezosurgery with rotary osteotomy techniques, with regard to surgery time and the severity of postoperative sequelae, including pain, swelling, and trismus. We conducted a systematic literature search in the Cochrane Library, PubMed, Embase, and Google Scholar. The eligibility criteria of this study included the following: the patients were clearly diagnosed as having impacted mandibular third molars; the patients underwent piezosurgery osteotomy, and in the control group rotary osteotomy techniques, for removing impacted third molars; the outcomes of interest include surgery time, trismus, swelling or pain; the studies are randomized controlled trials. We used random-effects models to calculate the difference in the outcomes, and the corresponding 95% confidence interval. We calculated the weighted mean difference if the trials used the same measurement, and a standardized mean difference if otherwise. A total of seven studies met the eligibility criteria and were included in our analysis. Compared with rotary osteotomy, patients undergoing piezosurgery experienced longer surgery time (mean difference 4.13 minutes, 95% confidence interval 2.75-5.52, P < 0.0001). Patients receiving the piezoelectric technique had less swelling at postoperative days 1, 3, 5, and 7 (all Ps ≤0.023). Additionally, there was a trend of less postoperative pain and trismus in the piezosurgery groups. The number of included randomized controlled trials and the sample size of each trial were relatively small, double blinding was not possible, and cost analysis was unavailable due to a lack of data. Our meta-analysis indicates that although patients undergoing piezosurgery experienced longer surgery time, they had less postoperative swelling, indicating that piezosurgery is a promising alternative technique for extraction of impacted third molars.
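The random-effects pooling used for such a weighted mean difference can be sketched with the DerSimonian-Laird estimator: fixed-effect weights give Cochran's Q, from which the between-study variance tau² is estimated and folded back into the weights. The per-study numbers below are invented for illustration; they are not the surgery-time data from this review.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate with a 95% confidence interval."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                                   # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)                # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)         # between-study variance
    w_star = 1.0 / (variances + tau2)                     # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study mean differences in surgery time (minutes) and their variances
print(random_effects_pool([3.5, 4.8, 4.0, 4.6], [0.8, 1.1, 0.6, 1.5]))
```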
Hydration Free Energy from Orthogonal Space Random Walk and Polarizable Force Field.
Abella, Jayvee R; Cheng, Sara Y; Wang, Qiantao; Yang, Wei; Ren, Pengyu
2014-07-08
The orthogonal space random walk (OSRW) method has shown enhanced sampling efficiency in free energy calculations from previous studies. In this study, the implementation of OSRW in accordance with the polarizable AMOEBA force field in TINKER molecular modeling software package is discussed and subsequently applied to the hydration free energy calculation of 20 small organic molecules, among which 15 are positively charged and five are neutral. The calculated hydration free energies of these molecules are compared with the results obtained from the Bennett acceptance ratio method using the same force field, and overall an excellent agreement is obtained. The convergence and the efficiency of the OSRW are also discussed and compared with BAR. Combining enhanced sampling techniques such as OSRW with polarizable force fields is very promising for achieving both accuracy and efficiency in general free energy calculations.
Principals Leadership Styles and Gender Influence on Teachers Morale in Public Secondary Schools
ERIC Educational Resources Information Center
Eboka, Obiajulu Chinyelum
2016-01-01
The study investigated the perception of teachers on the influence of principals' leadership styles and gender on teacher morale. Four research questions and four research hypotheses guided the study. An ex-post facto research design was adopted in the study. Through the simple random sampling technique a total of 72 principals and 2,506 in 72…
Outcomes of Parental Use of Psychological Aggression on Children: A Structural Model from Sri Lanka
ERIC Educational Resources Information Center
de Zoysa, Piyanjali; Newcombe, Peter A.; Rajapakse, Lalini
2010-01-01
The objective of this study was to explore the existence and, if so, the nature of the association between parental use of psychological aggression and psychological maladjustment in a 12-year-old Sri Lankan school population. A stratified random sampling technique was used to select 1,226 children from Colombo district schools. Three instruments,…
ERIC Educational Resources Information Center
Hartono, Edy; Wahyudi, Sugeng; Harahap, Pahlawansjah; Yuniawan, Ahyar
2017-01-01
This study aims to analyze the relationship between lecturers' performance and their teaching competence, measured by antecedent variables of organizational learning and need for achievement. It used the Structure Equation Model as data analysis technique, and the random sampling method to collect data from 207 lecturers of private universities in…
ERIC Educational Resources Information Center
Ndirika, Maryann C.; Njoku, U. J.
2012-01-01
This study was conducted to investigate the home influences on the academic performance of agricultural science secondary school students in Ikwuano Local Government Area of Abia State. The instrument used in data collection was a validated questionnaire structured on a two point rating scale. Simple random sampling technique was used to select…
ERIC Educational Resources Information Center
Guttmacher, Mary Johnson
A case study was conducted using a sample of 271 women selected from a state college by a stratified random cluster technique that approximates proportional representation of women in all four classes and all college majors. The data source was an extensive questionnaire designed to measure the attitudes and behavior of interest. The major…
ERIC Educational Resources Information Center
Ogunyemi, Ajibola O.; Mabekoje, Sesan Ola
2007-01-01
Introduction: This study sought to determine the combined and relative efficacy of self-efficacy, risk-taking behaviour and mental health on personal growth initiative of university undergraduates. Method: The ex-post-facto research design was used to conduct the study. Stratified random sampling technique was used to select 425 participants from 6…
ERIC Educational Resources Information Center
Momoh, U.; Osagiobare, Emmanuel Osamiro
2015-01-01
The study investigated principals' implementation of quality assurance standards and administrative effectiveness in public secondary schools in Edo and Delta States. To guide the study, four research questions and hypotheses were raised. Descriptive research design was adopted for the study and the simple random sampling technique was used to…
ERIC Educational Resources Information Center
Ahmed, Mulkah Adebisi; Moradeyo, Ismail; Abimbola, Isaac Olakanmi
2016-01-01
The study investigated the assessment of perceived academic and incentive needs of senior secondary school biology teachers in Kwara State, Nigeria. A stratified random sampling technique was used to select two hundred and fifty (250) biology teachers from the three senatorial districts of Kwara State. A questionnaire was prepared, validated and used…
Preliminary report on the ecology of Armillaria in the East Cascades of Oregon
Geral I. McDonald; John W. Hanna; Aaron L. Smith; Helen M. Maffei; Mee-Sook Kim; Amy L. Ross-Davis; Ned B. Klopfenstein
2011-01-01
As part of a larger effort to assess the distribution and ecology of Armillaria species throughout western North America, we present preliminary survey results for the East Cascades of Oregon. Surveys and sampling were conducted on 260 0.04-ha plots, which were randomly located across diverse environments and geographic locations. Using DNA-based techniques for the...
Internet Access and Usage by Secondary School Students in Morogoro Municipality, Tanzania
ERIC Educational Resources Information Center
Tarimo, Ronald; Kavishe, George
2017-01-01
The purpose of this paper was to report results of a study on the investigation of the Internet access and usage by secondary school students in Morogoro municipality in Tanzania. A simple random sampling technique was used to select 120 students from six schools. The data was collected through a questionnaire. A quantitative approach using the…
ERIC Educational Resources Information Center
Tanglang, Nebath; Ibrahim, Aminu Kazeem
2015-01-01
The study adopted an ex-post facto research design. Randomization sampling technique was used to select 346 undergraduate distance learners and the learners were grouped into four, High and Low Goal setter learners and High and Low Decision-making skills learners. The instruments for data collection were Undergraduate Academic Goal Setting Scale…
ERIC Educational Resources Information Center
Jekayinfa, Alice Arinlade; Yusuf, Abdul Raheem
2008-01-01
This paper presents the report of research carried out in Kwara State of Nigeria to seek the opinions of teachers on the incorporation of Environmental Education (EE) in the Nigerian Primary School Curriculum. The descriptive survey method was employed for the study. 200 teachers were selected through a stratified random sampling technique to cater…
ERIC Educational Resources Information Center
Nwafor, Chika E.; Obodo, Abigail Chikaodinaka; Okafor, Gabriel
2015-01-01
This study explored the effect of a self-regulated learning approach on junior secondary school students' achievement in basic science. A quasi-experimental design was used for the study. Two co-educational schools were drawn for the study through a simple random sampling technique. One school was assigned to the treatment group while the other was…
ERIC Educational Resources Information Center
Lawal-Adebowale, O. A.; Oyekunle, O.
2014-01-01
With the integration of an information technology tool for academic course registration at the Federal University of Agriculture, Abeokuta, the study assessed the agro-students' appraisal of the online tool for course registration. A simple random sampling technique was used to select 325 agro-students, and a validated and reliable questionnaire was used…
ERIC Educational Resources Information Center
Raines, Roy H.
A random sample (n=25) of full-time faculty at Manatee Junior College (Florida) was surveyed by open-ended questionnaire to determine what instructional techniques were being used and to ascertain if the faculty had acquired minimal training in teaching methods and learning theories. A total of 16 different teaching strategies were identified. Of…
ERIC Educational Resources Information Center
Suleiman, Habiba
2016-01-01
The aim of this research paper is to assess the implementation of the Universal Basic Education Programme in Nigeria from 1999-2009 on availability of personnel resources. A descriptive survey method was adopted for the investigation; through a random sampling technique, two (2) States each were selected from the six geopolitical zones of Nigeria…
Humphreys, Keith; Blodgett, Janet C; Wagner, Todd H
2014-11-01
Observational studies of Alcoholics Anonymous' (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study, therefore, employed an innovative statistical technique to derive a selection bias-free estimate of AA's impact. Six data sets from 5 National Institutes of Health-funded randomized trials (1 with 2 independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol-dependent individuals in one of the data sets (n = 774) were analyzed separately from the rest of sample (n = 1,582 individuals pooled from 5 data sets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In 5 of the 6 data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = 0.38, p = 0.001) and 15-month (B = 0.42, p = 0.04) follow-up. However, in the remaining data set, in which preexisting AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. For most individuals seeking help for alcohol problems, increasing AA attendance leads to short- and long-term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high preexisting AA involvement, further increases in AA attendance may have little impact. Copyright © 2014 by the Research Society on Alcoholism.
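Using randomization as an instrument amounts to two-stage least squares: the first stage predicts AA attendance from treatment assignment (stripping out self-selection), and the second stage regresses the drinking outcome on that predicted attendance. A hedged sketch with simulated data; the variable names, effect sizes, and confounder are invented, not the trial datasets.

```python
import numpy as np

def two_stage_least_squares(z, x, y):
    """2SLS with one instrument z, one endogenous regressor x, and outcome y."""
    Z = np.column_stack([np.ones_like(z, dtype=float), z])
    # Stage 1: predict attendance from randomization (removes self-selection)
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    # Stage 2: regress the outcome on the predicted (exogenous) attendance
    return np.linalg.lstsq(X_hat, y, rcond=None)[0][1]     # coefficient on attendance

rng = np.random.default_rng(0)
n = 1500
z = rng.integers(0, 2, n)                                  # randomized to AA facilitation or control
motivation = rng.normal(0, 1, n)                           # unobserved self-selection factor
attendance = 5 + 6 * z + 3 * motivation + rng.normal(0, 2, n)
abstinent_days = 20 + 0.4 * attendance + 5 * motivation + rng.normal(0, 5, n)

print(round(two_stage_least_squares(z, attendance, abstinent_days), 2))  # recovers ~0.4 despite confounding
```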
NASA Astrophysics Data System (ADS)
Bouaynaya, N.; Schonfeld, Dan
2005-03-01
Many real-world applications in computer vision and multimedia, such as augmented reality and environmental imaging, require an elastic, accurate contour around a tracked object. In the first part of the paper we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian importance sampling framework. We use Adaptive Block Matching (ABM) as the motion estimation technique. We construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking. The tracking is adaptive to different categories of motion even with poor a priori knowledge of the system dynamics. In particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object. We use a 1D active contour model based on a dynamic programming scheme to refine the output of the tracker. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments are applied to head tracking. We report promising tracking results in complex environments.
Enhancing students’ mathematical problem posing skill through writing in performance tasks strategy
NASA Astrophysics Data System (ADS)
Kadir; Adelina, R.; Fatma, M.
2018-01-01
Many researchers have studied the Writing in Performance Task (WiPT) strategy in learning, but only a few have paid attention to its relation to the problem-posing skill in mathematics. The problem-posing skill in mathematics covers problem reformulation, reconstruction, and imitation. The purpose of the present study was to examine the effect of the WiPT strategy on students' mathematical problem-posing skill. The research was conducted at a public junior secondary school in Tangerang Selatan. It used a quasi-experimental method with a randomized control group post-test design. The sample consisted of 64 students: 32 students in the experiment group and 32 students in the control group. A cluster random sampling technique was used for sampling. The research data were obtained by testing. The research shows that the problem-posing skill of students taught by the WiPT strategy is higher than that of students taught by a conventional strategy. The research concludes that the WiPT strategy is more effective in enhancing the students' mathematical problem-posing skill compared to the conventional strategy.
Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.
Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří
2017-11-10
We propose and demonstrate a spectrally-resolved photoluminescence imaging setup based on the so-called single pixel camera - a technique of compressive sensing, which enables imaging by using a single-pixel photodetector. The method relies on encoding an image by a series of random patterns. In our approach, the image encoding was maintained via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector we attained a realization of a spectrally-resolved photoluminescence camera with unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of speckle patterns, pattern fineness, and number of datapoints. Finally, we compare the presented technique to hyperspectral imaging using sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral areas.
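The measurement model behind a single-pixel (here single-spectrometer) camera is simply one inner product of the scene with each random pattern. The toy sketch below uses random intensity patterns and plain least-squares recovery with more patterns than pixels; a real compressive reconstruction would use far fewer measurements plus a sparsity prior, and nothing here reproduces the authors' optical setup.

```python
import numpy as np

rng = np.random.default_rng(0)
h = w = 16
scene = np.zeros((h, w))
scene[5:9, 6:12] = 1.0                         # toy luminescent patch
x = scene.ravel()

m = 400                                        # number of random patterns (> number of pixels here)
A = rng.random((m, h * w))                     # each row: one random intensity ("speckle") pattern
y = A @ x                                      # one detector reading per projected pattern

x_hat = np.linalg.lstsq(A, y, rcond=None)[0]   # plain least-squares image recovery
print(np.abs(x_hat - x).max() < 1e-8)          # exact in this noise-free, overdetermined toy case
```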
A new simple technique for improving the random properties of chaos-based cryptosystems
NASA Astrophysics Data System (ADS)
Garcia-Bosque, M.; Pérez-Resa, A.; Sánchez-Azqueta, C.; Celma, S.
2018-03-01
A new technique for improving the security of chaos-based stream ciphers has been proposed and tested experimentally. This technique manages to improve the randomness properties of the generated keystream by preventing the system from falling into short-period cycles due to digitization. In order to test this technique, a stream cipher based on a Skew Tent Map algorithm has been implemented on a Virtex 7 FPGA. The randomness of the keystream generated by this system has been compared to the randomness of the keystream generated by the same system with the proposed randomness-enhancement technique. By subjecting both keystreams to the National Institute of Standards and Technology (NIST) tests, we have proved that our method can considerably improve the randomness of the generated keystreams. In order to incorporate our randomness-enhancement technique, only 41 extra slices have been needed, proving that, apart from being effective, this method is also efficient in terms of area and hardware resources.
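For orientation, a plain skew tent map keystream generator is only a few lines; the small per-step perturbation added below merely illustrates the general idea of nudging a finite-precision orbit off short cycles. The map parameter, seed, threshold, and perturbation rule are arbitrary assumptions, not the circuit or the enhancement described in the paper.

```python
import numpy as np

def skew_tent_keystream(n_bits, x0=0.37, p=0.499, perturb=1e-12):
    """Skew tent map keystream with a tiny counter-based perturbation each step.

    Without some perturbation, a digitized (finite-precision) orbit can collapse
    into a short cycle; the added term keeps nudging it away from such cycles.
    """
    x = x0
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x = x / p if x < p else (1.0 - x) / (1.0 - p)   # skew tent map on [0, 1]
        x = (x + perturb * ((i % 7) + 1)) % 1.0         # illustrative cycle-breaking nudge
        bits[i] = 1 if x >= 0.5 else 0                  # threshold the state into a key bit
    return bits

print("".join(map(str, skew_tent_keystream(64))))
```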
NASA Astrophysics Data System (ADS)
Sri Purnami, Agustina; Adi Widodo, Sri; Charitas Indra Prahmana, Rully
2018-01-01
This study aimed to determine the improvement in achievement and motivation in learning mathematics from using Team Accelerated Instruction. The research method was an experiment with a descriptive pre-test post-test design. The population in this study was all class VIII students of a junior high school in Jogjakarta. The sample was taken using a cluster random sampling technique. The instruments used in this research were a questionnaire and a test. The data analysis technique used was the Wilcoxon test. It was concluded that there was an increase in motivation and student achievement of class VIII on linear equation system material when using the Team Accelerated Instruction learning model. Based on the results, the Team Accelerated Instruction learning model can be used as a variation model in learning mathematics.
NASA Technical Reports Server (NTRS)
Poole, L. R.
1974-01-01
A study was conducted of an alternate method for storage and use of bathymetry data in the Langley Research Center and Virginia Institute of Marine Science mid-Atlantic continental-shelf wave-refraction computer program. The regional bathymetry array was divided into 105 indexed modules which can be read individually into memory in a nonsequential manner from a peripheral file using special random-access subroutines. In running a sample refraction case, a 75-percent decrease in program field length was achieved by using the random-access storage method in comparison with the conventional method of total regional array storage. This field-length decrease was accompanied by a comparative 5-percent increase in central processing time and a 477-percent increase in the number of operating-system calls. A comparative Langley Research Center computer system cost savings of 68 percent was achieved by using the random-access storage method.
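The storage scheme amounts to keeping each indexed bathymetry module at a known offset in a file and seeking directly to the one needed, rather than holding the whole regional array in memory. A hedged sketch of that pattern; the tile shape, dtype, file name, and flat-offset layout are assumptions, not the original random-access subroutines.

```python
import numpy as np

TILE_SHAPE = (64, 64)                            # assumed grid points per bathymetry module
TILE_BYTES = int(np.prod(TILE_SHAPE)) * 4        # float32 depths

def read_module(path, module_index):
    """Read a single indexed module by seeking to its offset (non-sequential access)."""
    with open(path, "rb") as f:
        f.seek(module_index * TILE_BYTES)        # jump straight to the requested module
        tile = np.fromfile(f, dtype=np.float32, count=int(np.prod(TILE_SHAPE)))
    return tile.reshape(TILE_SHAPE)

# Usage idea: only the module covering the current ray position is ever held in memory, e.g.
# depth_tile = read_module("bathymetry_modules.bin", module_index=42)
```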
Kawafha, Mariam M; Tawalbeh, Loai Issa
2015-04-01
The purpose of this study was to examine the effect of an asthma education program on schoolteachers' knowledge. Pre-test-post-test experimental randomized controlled design was used. A multistage-cluster sampling technique was used to randomly select governorate, primary schools, and schoolteachers. Schoolteachers were randomly assigned either to the experimental group (n = 36) and attended three educational sessions or to the control group (n = 38) who did not receive any intervention. Knowledge about asthma was measured using the Asthma General Knowledge Questionnaire for Adults (AGKQA). The results indicated that teachers in the experimental group showed significantly (p < .001) higher knowledge of asthma in the first post-test and the second post-test compared with those in the control group. Implementing asthma education enhanced schoolteachers' knowledge of asthma. The asthma education program should target schoolteachers to improve knowledge about asthma. © The Author(s) 2014.
Sworen, John C; Smith, Jason A; Wagener, Kenneth B; Baugh, Lisa S; Rucker, Steven P
2003-02-26
The structure of random ethylene/propylene (EP) copolymers has been modeled using step polymerization chemistry. Six ethylene/propylene model copolymers have been prepared via acyclic diene metathesis (ADMET) polymerization and characterized for primary and higher level structure using in-depth NMR, IR, DSC, WAXD, and GPC analysis. These copolymers possess 1.5, 7.1, 13.6, 25.0, 43.3, and 55.6 methyl branches per 1000 carbons. Examination of these macromolecules by IR and WAXD analysis has demonstrated the first hexagonal phase in EP copolymers containing high ethylene content (90%) without the influence of sample manipulation (temperature, pressure, or radiation). Thermal behavior studies have shown that the melting point and heat of fusion decrease as the branch content increases. Further, comparisons have been made between these random ADMET EP copolymers, random EP copolymers made by typical chain addition techniques, and precisely branched ADMET EP copolymers.
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2016-12-09
Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of the potential drugs. In this study, we used a dataset which consists of four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features for reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and Synthetic Minority Oversampling Technique are used to solve the problem of imbalanced datasets. The ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlapping that is produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results proved that the proposed model performed well in classifying the unknown samples according to all toxic effects in the imbalanced datasets.
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
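As a concrete baseline for the first estimator discussed above, here is a plain Gaussian kernel density estimator in which the kernel scaling factor (bandwidth) is chosen by Silverman's rule of thumb from the random sample alone; the interactive and penalized-likelihood procedures of the report are not reproduced.

```python
import numpy as np

def gaussian_kde(sample, grid, h=None):
    """Gaussian kernel density estimate on `grid` from a 1-D random sample.

    If no bandwidth h is supplied, use Silverman's rule of thumb as a simple
    data-based choice of the kernel scaling factor.
    """
    sample = np.asarray(sample, float)
    n = sample.size
    if h is None:
        h = 1.06 * sample.std(ddof=1) * n ** (-1 / 5)
    u = (np.asarray(grid, float)[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
data = rng.normal(0, 1, 400)
grid = np.linspace(-4, 4, 81)
density = gaussian_kde(data, grid)
print(round(np.trapz(density, grid), 3))   # integrates to ~1 over the grid
```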
NASA Technical Reports Server (NTRS)
Bedewi, Nabih E.; Yang, Jackson C. S.
1987-01-01
Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The mathematics of the technique is presented in addition to the results of computer simulations conducted to demonstrate the prediction of the response of the system and the random forcing function initially introduced to excite the system.
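The Random Decrement signature itself is easy to sketch: collect every segment of the measured response that starts where the signal up-crosses a trigger level and average them, so the randomly forced part cancels and the free-decay (homogeneous) component remains. The trigger rule, segment length, and the AR(2) toy "structure" below are illustrative assumptions, not the paper's simulations.

```python
import numpy as np

def random_decrement(x, trigger, seg_len):
    """Average all segments of x that begin where x up-crosses the trigger level."""
    starts = np.flatnonzero((x[:-1] < trigger) & (x[1:] >= trigger))
    starts = starts[starts + seg_len < x.size]
    segments = np.stack([x[s:s + seg_len] for s in starts])
    return segments.mean(axis=0)                    # forced (random) part averages out

# Toy randomly excited single-mode "structure": a lightly damped AR(2) response to white noise
rng = np.random.default_rng(0)
n = 50000
noise = rng.normal(0, 1, n)
x = np.zeros(n)
for i in range(2, n):
    x[i] = 1.95 * x[i - 1] - 0.96 * x[i - 2] + noise[i]

signature = random_decrement(x, trigger=x.std(), seg_len=400)
print(signature[:5])                                # decaying free-vibration-like signature
```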
Lentz, Robert J; Argento, A Christine; Colby, Thomas V; Rickman, Otis B; Maldonado, Fabien
2017-07-01
Transbronchial lung biopsy with a cryoprobe, or cryobiopsy, is a promising new bronchoscopic biopsy technique capable of obtaining larger and better-preserved samples than previously possible using traditional biopsy forceps. Over two dozen case series and several small randomized trials are now available describing experiences with this technique, largely for the diagnosis of diffuse parenchymal lung disease (DPLD), in which the reported diagnostic yield is typically 70% to 80%. Cryobiopsy technique varies widely between centers and this predominantly single center-based retrospective literature heterogeneously defines diagnostic yield and complications, limiting the degree to which this technique can be compared between centers or to surgical lung biopsy (SLB). This review explores the broad range of cryobiopsy techniques currently in use, their rationale, the current state of the literature, and suggestions for the direction of future study into this promising but unproven procedure.
Geng, Guo-Zhu; Gao, Ge; Ruan, Yu-Hua; Yu, Ming-Run; Zhou, Yun-Hua
2016-03-05
Human immunodeficiency virus (HIV) is spreading rapidly among men who have sex with men (MSM) in China. Anonymous questionnaires or direct interviews have been frequently used to study their behavior. The aim of the study was to describe the behavioral risk profile of the MSM in Beijing using the randomized response techniques (RRTs). A cross-sectional survey of sexual behavior among a sample of MSM was conducted in two HIV counseling and testing clinics in Beijing. The survey was carried out with an anonymous questionnaire containing sensitive questions on sexual behavior. To obtain the honest responses to the sensitive questions, three distinctive RRTs were used in the questionnaire: (1) Additive randomized response model for quantitative questions, (2) randomized response model for multiple choice questions, and (3) Simmons randomized response model for binomial questions. Formulae for the point estimate, variance, and confidence interval (CI) were provided for each specific model. Using RRTs in a sample of 659 participants, the mean age at first homosexual encounter was estimated to be 21.7 years (95% CI: 21.2-22.2), and each had sex with about three (2.9, 95% CI: 2.4-3.4) male partners on average in the past month. The estimated rate for consistent condom use was 56.4% (95% CI: 50.1-62.8%). In addition, condom was estimated to be used among 80.0% (95% CI: 74.1-85.9%) of the population during last anal sex with a male partner. Our study employed RRTs in a survey containing questions on sexual behavior among MSM, and the results showed that RRT might be a useful tool to obtain truthful feedback on sensitive information such as sexual behavior from the respondents, especially in traditional Chinese cultural settings.
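For the binomial items, the Simmons (unrelated-question) model referenced above has a closed-form estimator: if each respondent answers the sensitive question with probability p and otherwise an innocuous question with known prevalence, the sensitive-trait prevalence and its confidence interval follow directly from the overall "yes" rate. The sketch below uses the standard formulas with invented numbers, not the survey's design parameters or data.

```python
import numpy as np

def simmons_estimate(n_yes, n, p_sensitive, pi_innocuous):
    """Simmons unrelated-question randomized response estimator with a 95% CI.

    Each respondent answers the sensitive item with probability p_sensitive and
    otherwise an innocuous item whose prevalence pi_innocuous is known.
    """
    lam = n_yes / n                                                  # observed overall "yes" rate
    pi_a = (lam - (1 - p_sensitive) * pi_innocuous) / p_sensitive    # sensitive-trait prevalence
    var = lam * (1 - lam) / (n * p_sensitive ** 2)
    half = 1.96 * np.sqrt(var)
    return pi_a, (pi_a - half, pi_a + half)

# Invented illustration: 659 respondents, 70% chance of receiving the sensitive item,
# innocuous item ("born in the first half of the year") with known prevalence 0.5
print(simmons_estimate(n_yes=430, n=659, p_sensitive=0.7, pi_innocuous=0.5))
```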
Transcranial direct current stimulation in psychiatric disorders
Tortella, Gabriel; Casati, Roberta; Aparicio, Luana V M; Mantovani, Antonio; Senço, Natasha; D’Urso, Giordano; Brunelin, Jerome; Guarienti, Fabiana; Selingardi, Priscila Mara Lorencini; Muszkat, Débora; Junior, Bernardo de Sampaio Pereira; Valiengo, Leandro; Moffa, Adriano H; Simis, Marcel; Borrione, Lucas; Brunoni, André R
2015-01-01
The interest in non-invasive brain stimulation techniques has increased in recent years. Among these techniques, transcranial direct current stimulation (tDCS) has been the subject of great interest among researchers because of its ease of use, low cost, benign side-effect profile and encouraging results of research in the field. This interest has generated several studies and randomized clinical trials, particularly in psychiatry. In this review, we provide a summary of the development of the technique and its mechanism of action, as well as a review of the methodological aspects of randomized clinical trials in psychiatry, including studies in affective disorders, schizophrenia, obsessive compulsive disorder, child psychiatry and substance use disorder. Finally, we provide an overview of tDCS use in cognitive enhancement as well as a discussion regarding its clinical use and regulatory and ethical issues. Although many promising results regarding tDCS efficacy have been described, the total number of studies is still low, highlighting the need for further studies aiming to replicate these findings in larger samples so as to provide a definitive picture regarding tDCS efficacy in psychiatry. PMID:25815258
Wilmoth, Siri K.; Irvine, Kathryn M.; Larson, Chad
2015-01-01
Various GIS-generated land-use predictor variables, physical habitat metrics, and water chemistry variables from 75 reference streams and 351 randomly sampled sites throughout Washington State were evaluated for effectiveness at discriminating reference from random sites within level III ecoregions. A combination of multivariate clustering and ordination techniques were used. We describe average observed conditions for a subset of predictor variables as well as proposing statistical criteria for establishing reference conditions for stream habitat in Washington. Using these criteria, we determined whether any of the random sites met expectations for reference condition and whether any of the established reference sites failed to meet expectations for reference condition. Establishing these criteria will set a benchmark from which future data will be compared.
Arango-Sabogal, Juan C; Côté, Geneviève; Paré, Julie; Labrecque, Olivia; Roy, Jean-Philippe; Buczinski, Sébastien; Doré, Elizabeth; Fairbrother, Julie H; Bissonnette, Nathalie; Wellemans, Vincent; Fecteau, Gilles
2016-07-01
Mycobacterium avium ssp. paratuberculosis (MAP) is the etiologic agent of Johne's disease, a chronic contagious enteritis of ruminants that causes major economic losses. Several studies, most involving large free-stall herds, have found environmental sampling to be a suitable method for detecting MAP-infected herds. In eastern Canada, where small tie-stall herds are predominant, certain conditions and management practices may influence the survival and transmission of MAP and recovery (isolation). Our objective was to estimate the performance of a standardized environmental and targeted pooled sampling technique for the detection of MAP-infected tie-stall dairy herds. Twenty-four farms (19 MAP-infected and 5 non-infected) were enrolled, but only 20 were visited twice in the same year, to collect 7 environmental samples and 2 pooled samples (sick cows and cows with poor body condition). Concurrent individual sampling of all adult cows in the herds was also carried out. Isolation of MAP was achieved using the MGIT Para TB culture media and the BACTEC 960 detection system. Overall, MAP was isolated in 7% of the environmental cultures. The sensitivity of the environmental culture was 44% [95% confidence interval (CI): 20% to 70%] when combining results from 2 different herd visits and 32% (95% CI: 13% to 57%) when results from only 1 random herd visit were used. The best sampling strategy was to combine samples from the manure pit, gutter, sick cows, and cows with poor body condition. The standardized environmental sampling technique and the targeted pooled samples presented in this study is an alternative sampling strategy to costly individual cultures for detecting MAP-infected tie-stall dairies. Repeated samplings may improve the detection of MAP-infected herds.
Calibrationless parallel magnetic resonance imaging: a joint sparsity model.
Majumdar, Angshul; Chaudhury, Kunal Narayan; Ward, Rabab
2013-12-05
State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity map for SENSE and SMASH, and interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method required solving non-convex analysis and synthesis prior joint-sparsity problems. This work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: eight-channel brain and eight-channel Shepp-Logan phantom. Two sampling methods were used: Variable Density Random sampling and non-Cartesian Radial sampling. For the brain data an acceleration factor of 4 was used, and for the other an acceleration factor of 6 was used. The reconstruction results were quantitatively evaluated based on the Normalised Mean Squared Error between the reconstructed image and the originals. The qualitative evaluation was based on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods, CS SENSE and l1SPIRiT, and two calibration-free techniques, Distributed CS and SAKE. Our method yields better reconstruction results than all of them.
Study Design Rigor in Animal-Experimental Research Published in Anesthesia Journals.
Hoerauf, Janine M; Moss, Angela F; Fernandez-Bustamante, Ana; Bartels, Karsten
2018-01-01
Lack of reproducibility of preclinical studies has been identified as an impediment for translation of basic mechanistic research into effective clinical therapies. Indeed, the National Institutes of Health has revised its grant application process to require more rigorous study design, including sample size calculations, blinding procedures, and randomization steps. We hypothesized that the reporting of such metrics of study design rigor has increased over time for animal-experimental research published in anesthesia journals. PubMed was searched for animal-experimental studies published in 2005, 2010, and 2015 in primarily English-language anesthesia journals. A total of 1466 publications were graded on the performance of sample size estimation, randomization, and blinding. Cochran-Armitage test was used to assess linear trends over time for the primary outcome of whether or not a metric was reported. Interrater agreement for each of the 3 metrics (power, randomization, and blinding) was assessed using the weighted κ coefficient in a 10% random sample of articles rerated by a second investigator blinded to the ratings of the first investigator. A total of 1466 manuscripts were analyzed. Reporting for all 3 metrics of experimental design rigor increased over time (2005 to 2010 to 2015): for power analysis, from 5% (27/516), to 12% (59/485), to 17% (77/465); for randomization, from 41% (213/516), to 50% (243/485), to 54% (253/465); and for blinding, from 26% (135/516), to 38% (186/485), to 47% (217/465). The weighted κ coefficients and 98.3% confidence interval indicate almost perfect agreement between the 2 raters beyond that which occurs by chance alone (power, 0.93 [0.85, 1.0], randomization, 0.91 [0.85, 0.98], and blinding, 0.90 [0.84, 0.96]). Our hypothesis that reported metrics of rigor in animal-experimental studies in anesthesia journals have increased during the past decade was confirmed. More consistent reporting, or explicit justification for absence, of sample size calculations, blinding techniques, and randomization procedures could better enable readers to evaluate potential sources of bias in animal-experimental research manuscripts. Future studies should assess whether such steps lead to improved translation of animal-experimental anesthesia research into successful clinical trials.
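The interrater check described above is a weighted Cohen's kappa on the re-rated subsample, which scikit-learn provides directly; the ratings below are invented placeholders, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Two blinded raters scoring the same re-rated articles (1 = metric reported, 0 = not reported);
# the values are invented for illustration only
rater_1 = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0]
rater_2 = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0]
print(cohen_kappa_score(rater_1, rater_2, weights="linear"))
```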
ERIC Educational Resources Information Center
Alika, Henrietta Ijeoma; Ohanaka, Blessing Ijeoma
2013-01-01
This paper examined the role of counselling, and parental encouragement on re-entry of adolescents into secondary school in Abia State, Nigeria. A total of 353 adolescents who re-entered school were selected from six secondary schools in the State through a simple random sampling technique. A validated questionnaire was used for data analysis.…
William Salas; Steve Hagen
2013-01-01
This presentation will provide an overview of an approach for quantifying uncertainty in spatial estimates of carbon emission from land use change. We generate uncertainty bounds around our final emissions estimate using a randomized, Monte Carlo (MC)-style sampling technique. This approach allows us to combine uncertainty from different sources without making...
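The abstract describes combining uncertainty from different sources via Monte Carlo-style sampling. A minimal sketch of that style of propagation is shown below; the input distributions (deforested area and an emission factor) are made up for illustration and are not the presentation's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

# Hypothetical uncertain inputs for a land-use-change emission estimate:
# deforested area (ha) and carbon emission factor (t C per ha).
area = rng.normal(loc=1_000.0, scale=100.0, size=n_draws)
emission_factor = rng.lognormal(mean=np.log(120.0), sigma=0.15, size=n_draws)

# Propagate each random draw through the emissions calculation.
emissions = area * emission_factor  # t C per draw

low, median, high = np.percentile(emissions, [2.5, 50, 97.5])
print(f"Median emissions: {median:,.0f} t C")
print(f"95% uncertainty bounds: {low:,.0f} - {high:,.0f} t C")
```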
ERIC Educational Resources Information Center
Adelodun, Gboyega Adelowo; Asiru, Abdulahi Babatunde
2015-01-01
This study examined the role played by instructional resources in enhancing performance of students, especially that of high-achievers, in English Language. The study is descriptive in nature and it adopted a survey design. Simple random sampling technique was used for the selection of fifty (50) SSI-SSIII students from five schools in Ibadan…
Gerald E. Rehfeldt; Nicholas L. Crookston; Cuauhtemoc Saenz-Romero; Elizabeth M. Campbell
2012-01-01
Data points intensively sampling 46 North American biomes were used to predict the geographic distribution of biomes from climate variables using the Random Forests classification tree. Techniques were incorporated to accommodate a large number of classes and to predict the future occurrence of climates beyond the contemporary climatic range of the biomes. Errors of...
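A minimal sketch of a Random Forests classification of biome classes from climate predictors is given below, using synthetic data in place of the North American sample points; the variable names and class structure are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_points, n_climate_vars, n_biomes = 5_000, 8, 6

# Synthetic stand-ins for climate predictors and biome labels.
X = rng.standard_normal((n_points, n_climate_vars))
y = (X[:, 0] * 2 + X[:, 1] + rng.standard_normal(n_points)).astype(int) % n_biomes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Classification-tree ensemble mapping climate variables to biome classes.
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print(f"Hold-out accuracy: {model.score(X_test, y_test):.2f}")
```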
Developing Confidence Limits For Reliability Of Software
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.
1991-01-01
Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.
ERIC Educational Resources Information Center
Belay, Sintayehu
2016-01-01
This study examined the contribution of teachers' Continuous Professional Development (CPD) to the quality of education and its teacher-related challenging factors. For this purpose, the study employed a descriptive survey method. Seventy-six participant teachers (40.86%) were selected using a simple random sampling technique. A close-ended questionnaire was…
ERIC Educational Resources Information Center
Lawal, B. O.; Viatonu, Olumuyiwa
2017-01-01
The study investigated students' access to and utilization of some learning resources in selected public and private universities in southwest Nigeria. Stratified random sampling technique was used to select 585 (295 public and 290 private) students from 12 (six public and six private) universities in southwest Nigeria. Two instruments--Cost and…
Methods and analysis of realizing randomized grouping.
Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi
2011-07-01
Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: completely, stratified and dynamically randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization and the realization of random sampling and grouping by SAS software.
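The abstract refers to the steps of complete randomization realized in SAS. A language-agnostic sketch of the same idea, written here in Python rather than SAS, is shown below: randomly permute the subjects and deal them into equally sized groups. The group count and subject identifiers are illustrative.

```python
import numpy as np

def completely_randomize(subject_ids, n_groups, seed=None):
    """Completely randomized grouping: shuffle the subjects, then deal them
    into n_groups groups of (nearly) equal size."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(subject_ids)
    return [list(group) for group in np.array_split(shuffled, n_groups)]

# Example: 12 subjects allocated at random to 3 groups.
groups = completely_randomize(subject_ids=list(range(1, 13)), n_groups=3, seed=7)
for i, group in enumerate(groups, start=1):
    print(f"Group {i}: {sorted(group)}")
```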
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
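Random Projections reduce the feature dimension with a data-independent random matrix before the policy-iteration step. A minimal, generic sketch of that dimensionality-reduction step (not the CKRL algorithm itself) using scikit-learn is shown below; the feature dimensions are arbitrary placeholders.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)
features = rng.standard_normal((1_000, 2_000))   # high-dimensional state features

# Project onto a much lower-dimensional random subspace; pairwise geometry is
# approximately preserved (Johnson-Lindenstrauss), which is what makes the
# projected features usable by a downstream learner such as LSPI.
projector = GaussianRandomProjection(n_components=100, random_state=0)
low_dim_features = projector.fit_transform(features)

print(features.shape, "->", low_dim_features.shape)
```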
Teoh, Shao Thing; Kitamura, Miki; Nakayama, Yasumune; Putri, Sastia; Mukai, Yukio; Fukusaki, Eiichiro
2016-08-01
In recent years, the advent of high-throughput omics technology has made possible a new class of strain engineering approaches, based on identification of possible gene targets for phenotype improvement from omic-level comparison of different strains or growth conditions. Metabolomics, with its focus on the omic level closest to the phenotype, lends itself naturally to this semi-rational methodology. When a quantitative phenotype such as growth rate under stress is considered, regression modeling using multivariate techniques such as partial least squares (PLS) is often used to identify metabolites correlated with the target phenotype. However, linear modeling techniques such as PLS require a consistent metabolite-phenotype trend across the samples, which may not be the case when outliers or multiple conflicting trends are present in the data. To address this, we proposed a data-mining strategy that utilizes random sample consensus (RANSAC) to select subsets of samples with consistent trends for construction of better regression models. By applying a combination of RANSAC and PLS (RANSAC-PLS) to a dataset from a previous study (gas chromatography/mass spectrometry metabolomics data and 1-butanol tolerance of 19 yeast mutant strains), new metabolites were indicated to be correlated with tolerance within certain subsets of the samples. The relevance of these metabolites to 1-butanol tolerance was then validated using single-deletion strains of the corresponding metabolic genes. The results showed that RANSAC-PLS is a promising strategy to identify unique metabolites that provide additional hints for phenotype improvement, which could not be detected by traditional PLS modeling using the entire dataset. Copyright © 2016 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
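A minimal sketch of the RANSAC-around-PLS idea described above: repeatedly fit a PLS regression on a random subset of samples, keep the samples the model explains well as a consensus set, and retain the largest consensus model. The subset size, tolerance, component count, and toy data are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def ransac_pls(X, y, n_iter=200, subset_size=8, inlier_tol=1.0,
               n_components=2, seed=0):
    """Fit PLS models on random sample subsets and return the model refit on
    the largest consensus set (samples with residual below inlier_tol)."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(X), size=subset_size, replace=False)
        model = PLSRegression(n_components=n_components)
        model.fit(X[idx], y[idx])
        residuals = np.abs(model.predict(X).ravel() - y)
        inliers = np.where(residuals < inlier_tol)[0]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit on the full consensus set for the final model.
    final = PLSRegression(n_components=n_components).fit(X[best_inliers], y[best_inliers])
    return final, best_inliers

# Toy data: 19 "strains" x 50 "metabolites", with a phenotype driven by 2 of them.
rng = np.random.default_rng(1)
X = rng.standard_normal((19, 50))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.3 * rng.standard_normal(19)
model, consensus = ransac_pls(X, y)
print(f"Consensus set size: {len(consensus)} of {len(y)} samples")
```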
Valenza, Marie Carmen; Cabrera-Martos, Irene; Torres-Sánchez, Irene; Garcés-García, Aurelio; Mateos-Toset, Sara; Valenza-Demet, Gerald
2015-11-01
Taking into account the complex structure of the diaphragm and its important role in the postural chain, the authors were prompted to check the effects of a diaphragm technique on hamstring flexibility. To evaluate the effects of the doming-of-the-diaphragm (DD) technique on hamstrings flexibility and spine mobility. Randomized placebo-controlled trial. University laboratory. Sixty young adults with short-hamstring syndrome were included in this randomized clinical trial using a between-groups design. The sample was randomly allocated to a placebo group (n = 30) or an intervention group (n = 30). Duration, position, and therapist were the same for both treatments. Hamstring flexibility was assessed using the forward-flexion-distance (FFD) and popliteal-angle test (PAT). Spinal motion was evaluated using the modified Schober test and cervical range of movement. Two-way ANOVA afforded pre- to postintervention statistically significant differences (P < .001) in the intervention group compared with the placebo group for hamstring flexibility measured by the FFD (mean change 4.59 ± 5.66 intervention group vs 0.71 ± 2.41 placebo group) and the PAT (mean change intervention group 6.81 ± 8.52 vs placebo group 0.57 ± 4.41). Significant differences (P < .05) were also found in the modified Schober test (mean change intervention group -1.34 ± 3.95 vs placebo group 1.02 ± 3.05) and cervical range of movement. Significant between-groups differences (P < .05) were also found in all the variables measured. The DD technique provides sustained improvement in hamstring flexibility and spine mobility.
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Pan, Zilan; Liang, Dong; Ma, Xiuhua; Zhang, Dawei
2015-12-01
An optical encryption method based on compressive ghost imaging (CGI) with double random-phase encoding (DRPE), named DRPE-CGI, is proposed. The information is first encrypted by the sender with DRPE, and the DRPE-coded image is then encrypted by the computational ghost-imaging system with a secret key. The key of N random-phase vectors is generated by the sender and shared with the receiver, who is the authorized user. The receiver decrypts the DRPE-coded image with the key, with the aid of CGI and a compressive sensing technique, and then reconstructs the original information by DRPE decoding. The experiments suggest that cryptanalysts cannot obtain any useful information about the original image even if they eavesdrop on 60% of the key at a given time, so the security of DRPE-CGI is higher than that of conventional ghost imaging. Furthermore, this method can reduce the information quantity by 40% compared with ghost imaging while the quality of the reconstructed information remains the same. It can also improve the quality of the reconstructed plaintext information compared with DRPE-GI using the same number of samplings. This technique can be immediately applied to encryption and data storage, with the advantages of high security, fast transmission, and high quality of reconstructed information.
Assessment of computer-related health problems among post-graduate nursing students.
Khan, Shaheen Akhtar; Sharma, Veena
2013-01-01
The study was conducted to assess computer-related health problems among post-graduate nursing students and to develop a Self-Instructional Module for the prevention of computer-related health problems in a selected university situated in Delhi. A descriptive survey with a correlational design was adopted. A total of 97 subjects were selected from different faculties of Jamia Hamdard by multi-stage sampling with a systematic random sampling technique. Among post-graduate students, the majority of subjects had average compliance with computer-related ergonomics principles. As regards computer-related health problems, the majority of post-graduate students had moderate problems. The Self-Instructional Module developed for the prevention of computer-related health problems was found acceptable by the post-graduate students.
[Kriging estimation and its simulated sampling of Chilo suppressalis population density].
Yuan, Zheming; Bai, Lianyang; Wang, Kuiwu; Hu, Xiangyue
2004-07-01
In order to draw up a rational sampling plan for the larval population of Chilo suppressalis, an original population and its two derivative populations, a random population and a sequence population, were sampled and compared using random sampling, gap-range-random sampling, and a new systematic sampling that integrated Kriging interpolation with a random original position. For the original population, whose distribution was aggregated with a dependence range of 115 cm (6.9 units) in the line direction, gap-range-random sampling in the line direction was more precise than random sampling. Distinguishing the population pattern correctly is the key to obtaining better precision. Gap-range-random sampling and random sampling are suited to aggregated and random populations, respectively, but both are difficult to apply in practice. Therefore, a new systematic sampling scheme, the Kriging sample (n = 441), was developed to estimate the density of a partial sample (partial estimation, n = 441) and of the population (overall estimation, N = 1500). For the original population, the estimation precision of the Kriging sample for both the partial sample and the population was better than that of the investigation sample. As the aggregation intensity of the population increased, the Kriging sample was more effective than the investigation sample in both partial and overall estimation when the sampling gap was chosen appropriately according to the dependence range.
NASA Technical Reports Server (NTRS)
Bedewi, Nabih E.; Yang, Jackson C. S.
1987-01-01
Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The results of an experiment conducted on an offshore platform scale model to verify the validity of the technique and to demonstrate its application in damage detection are presented.
Calon, Tim G A; van Hoof, Marc; van den Berge, Herbert; de Bruijn, Arthur J G; van Tongeren, Joost; Hof, Janny R; Brunings, Jan Wouter; Jonhede, Sofia; Anteunis, Lucien J C; Janssen, Miranda; Joore, Manuela A; Holmberg, Marcus; Johansson, Martin L; Stokroos, Robert J
2016-11-09
Over the last years, less invasive surgical techniques with soft tissue preservation for bone conduction hearing implants (BCHI) have been introduced such as the linear incision technique combined with a punch. Results using this technique seem favorable in terms of rate of peri-abutment dermatitis (PAD), esthetics, and preservation of skin sensibility. Recently, a new standardized surgical technique for BCHI placement, the Minimally Invasive Ponto Surgery (MIPS) technique has been developed by Oticon Medical AB (Askim, Sweden). This technique aims to standardize surgery by using a novel surgical instrumentation kit and minimize soft tissue trauma. A multicenter randomized controlled trial is designed to compare the MIPS technique to the linear incision technique with soft tissue preservation. The primary investigation center is Maastricht University Medical Center. Sixty-two participants will be included with a 2-year follow-up period. Parameters are introduced to quantify factors such as loss of skin sensibility, dehiscence of the skin next to the abutment, skin overgrowth, and cosmetic results. A new type of sampling method is incorporated to aid in the estimation of complications. To gain further understanding of PAD, swabs and skin biopsies are collected during follow-up visits for evaluation of the bacterial profile and inflammatory cytokine expression. The primary objective of the study is to compare the incidence of PAD during the first 3 months after BCHI placement. Secondary objectives include the assessment of parameters related to surgery, wound healing, pain, loss of sensibility of the skin around the implant, implant extrusion rate, implant stability measurements, dehiscence of the skin next to the abutment, and esthetic appeal. Tertiary objectives include assessment of other factors related to PAD and a health economic evaluation. This is the first trial to compare the recently developed MIPS technique to the linear incision technique with soft tissue preservation for BCHI surgery. Newly introduced parameters and sampling method will aid in the prediction of results and complications after BCHI placement. Registered at the CCMO register in the Netherlands on 24 November 2014: NL50072.068.14 . Retrospectively registered on 21 April 2015 at ClinicalTrials.gov: NCT02438618 . This trial is sponsored by Oticon Medical AB.
Sengaloundeth, Sivong; Green, Michael D; Fernández, Facundo M; Manolin, Ot; Phommavong, Khamlieng; Insixiengmay, Vongsavanh; Hampton, Christina Y; Nyadong, Leonard; Mildenhall, Dallas C; Hostetler, Dana; Khounsaknalath, Lamphet; Vongsack, Latsamy; Phompida, Samlane; Vanisaveth, Viengxay; Syhakhang, Lamphone; Newton, Paul N
2009-01-01
Background Counterfeit oral artesunate has been a major public health problem in mainland SE Asia, impeding malaria control. A countrywide stratified random survey was performed to determine the availability and quality of oral artesunate in pharmacies and outlets (shops selling medicines) in the Lao PDR (Laos). Methods In 2003, 'mystery' shoppers were asked to buy artesunate tablets from 180 outlets in 12 of the 18 Lao provinces. Outlets were selected using stratified random sampling by investigators not involved in sampling. Samples were analysed for packaging characteristics, by the Fast Red Dye test, high-performance liquid chromatography (HPLC), mass spectrometry (MS), X-ray diffractometry and pollen analysis. Results Of 180 outlets sampled, 25 (13.9%) sold oral artesunate. Outlets selling artesunate were more commonly found in the more malarious southern Laos. Of the 25 outlets, 22 (88%; 95%CI 68–97%) sold counterfeit artesunate, as defined by packaging and chemistry. No artesunate was detected in the counterfeits by any of the chemical analysis techniques and analysis of the packaging demonstrated seven different counterfeit types. There was complete agreement between the Fast Red dye test, HPLC and MS analysis. A wide variety of wrong active ingredients were found by MS. Of great concern, 4/27 (14.8%) fakes contained detectable amounts of artemisinin (0.26–115.7 mg/tablet). Conclusion This random survey confirms results from previous convenience surveys that counterfeit artesunate is a severe public health problem. The presence of artemisinin in counterfeits may encourage malaria resistance to artemisinin derivatives. With increasing accessibility of artemisinin-derivative combination therapy (ACT) in Laos, the removal of artesunate monotherapy from pharmacies may be an effective intervention. PMID:19638225
Creating Turbulent Flow Realizations with Generative Adversarial Networks
NASA Astrophysics Data System (ADS)
King, Ryan; Graf, Peter; Chertkov, Michael
2017-11-01
Generating valid inflow conditions is a crucial, yet computationally expensive, step in unsteady turbulent flow simulations. We demonstrate a new technique for rapid generation of turbulent inflow realizations that leverages recent advances in machine learning for image generation using a deep convolutional generative adversarial network (DCGAN). The DCGAN is an unsupervised machine learning technique consisting of two competing neural networks that are trained against each other using backpropagation. One network, the generator, tries to produce samples from the true distribution of states, while the discriminator tries to distinguish between true and synthetic samples. We present results from a fully-trained DCGAN that is able to rapidly draw random samples from the full distribution of possible inflow states without needing to solve the Navier-Stokes equations, eliminating the costly process of spinning up inflow turbulence. This suggests a new paradigm in physics informed machine learning where the turbulence physics can be encoded in either the discriminator or generator. Finally, we also propose additional applications such as feature identification and subgrid scale modeling.
NASA Astrophysics Data System (ADS)
Revida, Erika; Yanti Siahaan, Asima; Purba, Sukarman
2018-03-01
The objective of the research was to analyze the influence of social capital on the quality of community tourism services in Lake Toba, Parapat, North Sumatera. The method combined quantitative and qualitative research. The sample consisted of 150 heads of family from the community in the area around Lake Toba, Parapat, North Sumatera, selected by simple random sampling. Data collection techniques included documentary studies, questionnaires, interviews and observations, while the data were analyzed using product-moment correlation and simple linear regression. The results showed a positive and significant influence of social capital on the quality of community tourism services in Lake Toba, Parapat, North Sumatera. This research recommends enhancing social capital (trust, norms and networks) and the quality of community tourism services (tangibles, reliability, responsiveness, assurance, and empathy) through continuous communication, information and education provided by families, formal and informal institutions, community leaders, religious figures and all communities in Lake Toba, Parapat, North Sumatera.
Molecular diagnosis of strongyloidiasis in a population of an endemic area through nested-PCR.
Sharifdini, Meysam; Keyhani, Amir; Eshraghian, Mohammad Reza; Beigom Kia, Eshrat
2018-01-01
This study aimed to diagnose and analyze strongyloidiasis in a population of an endemic area of Iran using nested-PCR coupled with parasitological methods. Screening of people infected with strongyloidiasis using reliable diagnostic techniques is essential to decrease the mortality and morbidity associated with this infection. Molecular methods have proved to be highly sensitive and specific for the detection of Strongyloides stercoralis in stool samples. A total of 155 fresh single stool samples were randomly collected from residents of the north and northwest of Khouzestan Province, Iran. All samples were examined by parasitological methods, including formalin-ether concentration and nutrient agar plate culture, and by the molecular method of nested-PCR. Infections with S. stercoralis were analyzed according to demographic criteria. Based on the nested-PCR results, 15 cases (9.7%) were positive for strongyloidiasis. Nested-PCR was more sensitive than parasitological techniques on single stool samples. Older age was the demographic factor most strongly associated with S. stercoralis infection. In areas endemic for S. stercoralis, old age should be considered one of the most important risk factors for infection, especially among immunosuppressed individuals.
Systematic versus random sampling in stereological studies.
West, Mark J
2012-12-01
The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card within a set number of cards and others at equal intervals within the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
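The card analogy above maps directly onto sampling section positions. Below is a minimal sketch contrasting independent random sampling with systematic uniform random sampling (random start, fixed interval) of section indices; the section counts and sampling period are arbitrary examples.

```python
import numpy as np

def independent_random_sample(n_sections, n_sampled, rng):
    """Pick each sampled section without reference to the others."""
    return np.sort(rng.choice(n_sections, size=n_sampled, replace=False))

def systematic_random_sample(n_sections, period, rng):
    """Random start within the first period, then every 'period'-th section."""
    start = rng.integers(period)
    return np.arange(start, n_sections, period)

rng = np.random.default_rng(0)
print("Independent:", independent_random_sample(n_sections=60, n_sampled=10, rng=rng))
print("Systematic: ", systematic_random_sample(n_sections=60, period=6, rng=rng))
```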
Inoue, Ryo; Tsukahara, Takamitsu; Sunaba, Chinatsu; Itoh, Mitsugi; Ushida, Kazunari
2007-04-01
The combination of Flinders Technology Associates filter papers (FTA cards) and real-time PCR was examined to establish a simple and rapid technique for the detection of porcine reproductive and respiratory syndrome virus (PRRSV) from whole pig blood. A modified live PRRS vaccine was diluted with either sterilised saline or pig whole blood, and the suspensions were applied onto the FTA cards. The real-time RT-PCR detection of PRRSV was performed directly on the samples applied to the FTA card, without an RNA extraction step. Six whole blood samples from randomly selected piglets on a PRRSV-infected farm were also assayed in this study. The expected PCR product was successfully amplified from both the saline-diluted and the whole-blood-diluted vaccine. The same PCR amplicon was detected in all blood samples assayed in this study. This study suggests that the combination of an FTA card and real-time PCR is a rapid and easy technique for the detection of PRRSV. This technique can remarkably shorten the time required for PRRSV detection from whole blood and makes the procedure much easier.
A comparative test of the investigator as a variable in aging quail
Rosene, W.; Fitch, F.
1956-01-01
To test the reliability of current techniques, five biologists appraised the ages of 200 quail from a random sample of wings collected during the 1952-53 hunting season in Alabama. Attempt was made to distinguish adults from juveniles, to ascertain the stage of post-nuptial and post-juvenile molts, and to estimate the age of juveniles according to days or weeks. Three 'problem' wings in this sample had molt characteristics somewhat equally divided between adult and juvenile classes; two wings called 'questionable' had all molt characteristics except one of either age group. A 3.5 per cent disparity occurred between investigators in their classification of adult and juvenile age groups. This included not only 'problem' and 'questionable' wings, but also 'obvious errors.' Individual differences were greater than 3.5 per cent but cancelled out. This study emphasizes the need of working with large samples of birds of a known age in order to know more concerning molt variations. Until aging techniques can be refined, it is believed that investigators should be fully familiar with existing methods and their weaknesses. Also, it appears important that reports on aging should indicate clearly the techniques used.
Age and growth of rock bass in eastern Lake Ontario
Wolfert, David R.
1980-01-01
NASA Astrophysics Data System (ADS)
Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.
2016-12-01
The impact of climate change has been observed throughout the globe. Ecosystems are experiencing rapid changes such as vegetation shifts and species extinctions. In this context, species distribution models (SDMs) are a popular method for projecting the impact of climate change on ecosystems. SDMs are fundamentally based on the niche of a given species, so presence point data are essential for characterizing that niche. Running SDMs for plants requires particular attention to the characteristics of vegetation data. Normally, vegetation data over large areas are produced using remote sensing techniques; consequently, the exact locations of presence points carry high uncertainty when presence data are selected from polygon and raster datasets. Thus, sampling methods for generating vegetation presence data should be carefully selected. In this study, we used three different sampling methods for selecting vegetation presence data: random sampling, stratified sampling and site-index-based sampling. We used the R package BIOMOD2 to assess uncertainty from the modeling, and included BioCLIM variables and other environmental variables as input data. Despite differences among the 10 SDMs, the sampling methods showed differences in ROC values: random sampling showed the lowest ROC value, while site-index-based sampling showed the highest. Through this study, the uncertainties arising from presence-data sampling methods and SDMs can be quantified.
Scalable randomized benchmarking of non-Clifford gates
NASA Astrophysics Data System (ADS)
Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay
Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly(n)-sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
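As the notes suggest, a first hands-on exercise is a basic random-sampling transport routine. The minimal sketch below is not taken from the lecture materials: it estimates transmission through a purely absorbing slab by sampling exponential free paths, with the slab thickness and cross section chosen only for illustration.

```python
import numpy as np

def slab_transmission(thickness_cm, sigma_total_per_cm, n_particles=100_000, seed=0):
    """Monte Carlo estimate of the fraction of particles crossing a purely
    absorbing slab: sample the distance to first collision from an exponential
    distribution and count particles whose free flight exceeds the thickness."""
    rng = np.random.default_rng(seed)
    # Inverse-CDF sampling of the free path: d = -ln(xi) / sigma_t.
    distances = -np.log(rng.random(n_particles)) / sigma_total_per_cm
    transmitted = np.count_nonzero(distances > thickness_cm)
    p = transmitted / n_particles
    stderr = np.sqrt(p * (1 - p) / n_particles)
    return p, stderr

p, se = slab_transmission(thickness_cm=2.0, sigma_total_per_cm=1.0)
print(f"Transmission probability: {p:.4f} +/- {se:.4f} (analytic: {np.exp(-2.0):.4f})")
```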
Trainor, Patrick J; DeFilippis, Andrew P; Rai, Shesh N
2017-06-21
Statistical classification is a critical component of utilizing metabolomics data for examining the molecular determinants of phenotypes. Despite this, a comprehensive and rigorous evaluation of the accuracy of classification techniques for phenotype discrimination given metabolomics data has not been conducted. We conducted such an evaluation using both simulated and real metabolomics datasets, comparing Partial Least Squares-Discriminant Analysis (PLS-DA), Sparse PLS-DA, Random Forests, Support Vector Machines (SVM), Artificial Neural Network, k-Nearest Neighbors (k-NN), and Naïve Bayes classification techniques for discrimination. We evaluated the techniques on simulated data generated to mimic global untargeted metabolomics data by incorporating realistic block-wise correlation and partial correlation structures for mimicking the correlations and metabolite clustering generated by biological processes. Over the simulation studies, covariance structures, means, and effect sizes were stochastically varied to provide consistent estimates of classifier performance over a wide range of possible scenarios. The effects of the presence of non-normal error distributions, the introduction of biological and technical outliers, unbalanced phenotype allocation, missing values due to abundances below a limit of detection, and the effect of prior-significance filtering (dimension reduction) were evaluated via simulation. In each simulation, classifier parameters, such as the number of hidden nodes in a Neural Network, were optimized by cross-validation to minimize the probability of detecting spurious results due to poorly tuned classifiers. Classifier performance was then evaluated using real metabolomics datasets of varying sample medium, sample size, and experimental design. We report that in the most realistic simulation studies that incorporated non-normal error distributions, unbalanced phenotype allocation, outliers, missing values, and dimension reduction, classifier performance (least to greatest error) was ranked as follows: SVM, Random Forest, Naïve Bayes, sPLS-DA, Neural Networks, PLS-DA and k-NN classifiers. When non-normal error distributions were introduced, the performance of PLS-DA and k-NN classifiers deteriorated further relative to the remaining techniques. Over the real datasets, a trend of better performance of the SVM and Random Forest classifiers was observed.
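A minimal sketch of the kind of cross-validated comparison described above, using two of the evaluated classifiers (SVM and Random Forest) on synthetic data standing in for a metabolomics matrix; the data dimensions, class imbalance, and hyperparameters are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a metabolomics dataset: 100 samples x 200 features,
# with an unbalanced phenotype allocation.
X, y = make_classification(n_samples=100, n_features=200, n_informative=15,
                           weights=[0.7, 0.3], random_state=0)

models = {
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "Random Forest": RandomForestClassifier(n_estimators=500, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```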
FOCIS: A forest classification and inventory system using LANDSAT and digital terrain data
NASA Technical Reports Server (NTRS)
Strahler, A. H.; Franklin, J.; Woodcook, C. E.; Logan, T. L.
1981-01-01
Accurate, cost-effective stratification of forest vegetation and timber inventory is the primary goal of a Forest Classification and Inventory System (FOCIS). Conventional timber stratification using photointerpretation can be time-consuming, costly, and inconsistent from analyst to analyst. FOCIS was designed to overcome these problems by using machine processing techniques to extract and process tonal, textural, and terrain information from registered LANDSAT multispectral and digital terrain data. Comparison of samples from timber strata identified by conventional procedures showed that both have about the same potential to reduce the variance of timber volume estimates over simple random sampling.
Effects of Vibration Therapy in Pediatric Immunizations.
Benjamin, Arika L; Hendrix, Thomas J; Woody, Jacque L
2016-01-01
A randomized clinical trial of 100 children (52 boys, 48 girls) ages 2 months to 7 years was conducted to evaluate the effect of vibration therapy without cold analgesia on pain. A convenience sample was recruited at two sites: a publicly funded, free immunization clinic and a private group pediatric practice. Participants were randomly assigned to receive vibration therapy via a specialized vibrating device or standard care. All children regardless of intervention group were allowed to be distracted and soothed by the parent. Pain was evaluated using the FLACC score, which two nurses assessed at three points in time: prior to, during, and after the injection(s). Data were analyzed using a two-independent samples-paired t-test. Results show that vibration therapy had no effect on pain scores in the younger age groups studied (2 months ≤ 1 year, > 1 year ≤ 4 years). In the oldest age group (> 4 to 7 years of age), a heightened pain reading was found from the pre-injection to the post-injection period (p = 0.045). These results indicate that the addition of vibration therapy (without cold analgesia) to standard soothing techniques is no more effective in reducing immunization pain than standard soothing techniques alone, and thus, is not indicated for use with immunization pain. Recommendations include further evaluation of interventions.
A closer look at cross-validation for assessing the accuracy of gene regulatory networks and models.
Tabe-Bordbar, Shayan; Emad, Amin; Zhao, Sihai Dave; Sinha, Saurabh
2018-04-26
Cross-validation (CV) is a technique to assess the generalizability of a model to unseen data. This technique relies on assumptions that may not be satisfied when studying genomics datasets. For example, random CV (RCV) assumes that a randomly selected set of samples, the test set, well represents unseen data. This assumption doesn't hold true where samples are obtained from different experimental conditions, and the goal is to learn regulatory relationships among the genes that generalize beyond the observed conditions. In this study, we investigated how the CV procedure affects the assessment of supervised learning methods used to learn gene regulatory networks (or in other applications). We compared the performance of a regression-based method for gene expression prediction estimated using RCV with that estimated using a clustering-based CV (CCV) procedure. Our analysis illustrates that RCV can produce over-optimistic estimates of the model's generalizability compared to CCV. Next, we defined the 'distinctness' of test set from training set and showed that this measure is predictive of performance of the regression method. Finally, we introduced a simulated annealing method to construct partitions with gradually increasing distinctness and showed that performance of different gene expression prediction methods can be better evaluated using this method.
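A minimal sketch contrasting random K-fold CV with a clustering-based CV in which K-means clusters of samples define the folds, so each test fold is more "distinct" from its training data. The clustering choice, model, and data are illustrative assumptions; the paper's CCV construction may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 200, 30
X = rng.standard_normal((n_samples, n_features))
y = X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(n_samples)

model = Ridge(alpha=1.0)

# Random CV: test folds are random subsets of samples.
rcv_scores = cross_val_score(model, X, y, scoring="r2",
                             cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Clustering-based CV: cluster the samples, then hold out whole clusters.
clusters = KMeans(n_clusters=5, random_state=0, n_init=10).fit_predict(X)
ccv_scores = cross_val_score(model, X, y, scoring="r2",
                             cv=GroupKFold(n_splits=5), groups=clusters)

print(f"Random CV R^2:           {rcv_scores.mean():.3f}")
print(f"Clustering-based CV R^2: {ccv_scores.mean():.3f}")
```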
Accelerated 2D magnetic resonance spectroscopy of single spins using matrix completion
NASA Astrophysics Data System (ADS)
Scheuer, Jochen; Stark, Alexander; Kost, Matthias; Plenio, Martin B.; Naydenov, Boris; Jelezko, Fedor
2015-12-01
Two dimensional nuclear magnetic resonance (NMR) spectroscopy is one of the major tools for analysing the chemical structure of organic molecules and proteins. Despite its power, this technique requires long measurement times, which, particularly in the recently emerging diamond-based single-molecule NMR, limits its application to stable samples. Here we demonstrate a method that allows the spectrum to be obtained by collecting only a small fraction of the experimental data. Our method is based on matrix completion, which can recover the full spectral information from randomly sampled data points. We confirm experimentally the applicability of this technique by performing two dimensional electron spin echo envelope modulation (ESEEM) experiments on a two spin system consisting of a single nitrogen vacancy (NV) centre in diamond coupled to a single 13C nuclear spin. The signal to noise ratio of the recovered 2D spectrum is compared to the Fourier transform of randomly subsampled data, where we observe a strong suppression of the noise when the matrix completion algorithm is applied. We show that the peaks in the spectrum can be obtained with only 10% of the total number of data points. We believe that our results reported here can find an application in all types of two dimensional spectroscopy, as long as the measured matrices have a low rank.
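A minimal sketch of the matrix-completion idea: recover a low-rank matrix from a random subset of its entries by iterative singular-value soft-thresholding with the known entries re-imposed each iteration. This is a generic illustration, not the paper's reconstruction algorithm; the threshold, iteration count, and sampling fraction are arbitrary assumptions.

```python
import numpy as np

def complete_matrix(observed, mask, tau=5.0, n_iter=500):
    """Recover a low-rank matrix from observed entries (mask == True) by
    iterative singular-value soft-thresholding."""
    X = np.where(mask, observed, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        X[mask] = observed[mask]                  # keep known entries fixed
    return X

# Toy low-rank "2D spectrum" with only 25% of the entries measured.
rng = np.random.default_rng(0)
truth = rng.standard_normal((64, 3)) @ rng.standard_normal((3, 64))   # rank 3
mask = rng.random(truth.shape) < 0.25
recovered = complete_matrix(truth, mask)
rel_err = np.linalg.norm(recovered - truth) / np.linalg.norm(truth)
print(f"Relative reconstruction error: {rel_err:.3f}")
```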
Steinmann, Peter; Zhou, Xiao-Nong; Du, Zun-Wei; Jiang, Jin-Yong; Wang, Li-Bo; Wang, Xue-Zhong; Li, Lan-Hua; Marti, Hanspeter; Utzinger, Jürg
2007-01-01
Background Strongyloides stercoralis is a neglected soil-transmitted helminth species, and there is a lack of parasitologic and epidemiologic data pertaining to this parasite in China and elsewhere. We studied the local occurrence of S. stercoralis in a village in Yunnan province, China, and comparatively assessed the performance of different diagnostic methods. Methodology/Principal Findings Multiple stool samples from a random population sample were subjected to the Kato-Katz method, an ether-concentration technique, the Koga agar plate method, and the Baermann technique. Among 180 participants who submitted at least 2 stool samples, we found a S. stercoralis prevalence of 11.7%. Males had a significantly higher prevalence than females (18.3% versus 6.1%, p = 0.011), and infections were absent in individuals <15 years of age. Infections were only detected by the Baermann (highest sensitivity) and the Koga agar plate method, but neither with the Kato-Katz nor an ether-concentration technique. The examination of 3 stool samples rather than a single one resulted in the detection of 62% and 100% more infections when employing the Koga agar plate and the Baermann technique, respectively. The use of a mathematical model revealed a ‘true’ S. stercoralis prevalence in the current setting of up to 16.3%. Conclusions/Significance We conclude that S. stercoralis is endemic in the southern part of Yunnan province and that differential diagnosis and integrated control of intestinal helminth infections needs more pointed emphasis in rural China. PMID:17989788
Barroso, Gerardo; Chaya, Miguel; Bolaños, Rubén; Rosado, Yadira; García León, Fernando; Ibarrola, Eduardo
2005-05-01
To evaluate sperm recovery and total sperm motility in three different sperm preparation techniques (density gradient, simple washing and swim-up). A total of 290 subjects were randomly evaluated from November 2001 to March 2003. The density gradient method required Isolate (upper and lower layers). Centrifugation was performed at 400 g for 10 minutes and evaluation was done using the Makler counting chamber. The simple washing method included the use of HTF-M complemented with 7.5% of SSS, with centrifugation at 250 g, obtaining at the end 0.5 mL of the sperm sample. The swim-up method required HTF-M complemented with 7.5% of SSS, with an incubation period of 60 minutes at 37 degrees C. The demographic characteristics evaluated through their standard error, 95% ICC, and 50th percentile were similar. The application of multiple comparison tests and analysis of variance showed significant differences between the sperm preparations before and after capacitation. A superior recovery rate was observed with the density gradient and swim-up methods; nevertheless, the samples processed with the simple washing method showed diminished sperm recovery relative to the original sample. Sperm preparation techniques have become very useful in male infertility treatments allowing higher sperm recovery and motility rates. The seminal parameters evaluated from the original sperm sample will determine the best sperm preparation technique in those patients who require it.
Random whole metagenomic sequencing for forensic discrimination of soils.
Khodakova, Anastasia S; Smith, Renee J; Burgoyne, Leigh; Abarno, Damien; Linacre, Adrian
2014-01-01
Here we assess the ability of random whole metagenomic sequencing approaches to discriminate between similar soils from two geographically distinct urban sites for application in forensic science. Repeat samples from two parklands in residential areas separated by approximately 3 km were collected and the DNA was extracted. Shotgun, whole genome amplification (WGA) and single arbitrarily primed DNA amplification (AP-PCR) based sequencing techniques were then used to generate soil metagenomic profiles. Full and subsampled metagenomic datasets were then annotated against M5NR/M5RNA (taxonomic classification) and SEED Subsystems (metabolic classification) databases. Further comparative analyses were performed using a number of statistical tools including: hierarchical agglomerative clustering (CLUSTER); similarity profile analysis (SIMPROF); non-metric multidimensional scaling (NMDS); and canonical analysis of principal coordinates (CAP) at all major levels of taxonomic and metabolic classification. Our data showed that shotgun and WGA-based approaches generated highly similar metagenomic profiles for the soil samples such that the soil samples could not be distinguished accurately. An AP-PCR based approach was shown to be successful at obtaining reproducible site-specific metagenomic DNA profiles, which in turn were employed for successful discrimination of visually similar soil samples collected from two different locations.
Lead Determination and Heterogeneity Analysis in Soil from a Former Firing Range
NASA Astrophysics Data System (ADS)
Urrutia-Goyes, Ricardo; Argyraki, Ariadne; Ornelas-Soto, Nancy
2017-07-01
Public places can have an unknown history of pollutant deposition. Exposure to such contaminants can create environmental and health issues. The characterization of a former firing range in Athens, Greece will allow its monitoring and encourage its remediation. This study focused on Pb contamination at the site due to its presence in ammunition. A dense sampling design with 91 locations (10 m apart) was used to determine the spatial distribution of the element in the surface soil of the study area. Duplicate samples were also collected one meter apart at 8 random locations to estimate the heterogeneity of the site. Elemental concentrations were measured using a portable XRF device after simple sample homogenization in the field. Robust analysis of variance showed that the contributions to the total variance were 11% from sampling, 1% analytical, and 88% geochemical, reflecting the suitability of the technique. Moreover, the extended random uncertainty relative to the mean concentration was 91.5%, confirming the high heterogeneity of the site. Statistical analysis indicated very high contamination in the area, suggesting the need for more in-depth analysis of other contaminants and possible health risks.
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
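A minimal sketch of the estimation step described above: given received-signal-strength samples collected at known robot positions, fit an access-point location (together with path-loss parameters) under a log-distance propagation model by nonlinear least squares. The propagation model, parameter values, and synthetic samples are assumptions for illustration, not the paper's method.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# True (unknown) access-point position and log-distance path-loss parameters.
ap_true = np.array([12.0, 7.0])          # metres
p0_true, n_true = -30.0, 2.2             # dBm at 1 m, path-loss exponent

# Non-uniformly distributed sample positions visited by the robot swarm.
positions = rng.uniform(0.0, 20.0, size=(40, 2))
dist = np.linalg.norm(positions - ap_true, axis=1)
rss = p0_true - 10.0 * n_true * np.log10(dist) + rng.normal(0.0, 2.0, size=40)

def residuals(params):
    """Residuals of the log-distance model RSS = P0 - 10 n log10(d)."""
    x, y, p0, n = params
    d = np.linalg.norm(positions - np.array([x, y]), axis=1)
    return p0 - 10.0 * n * np.log10(np.maximum(d, 1e-3)) - rss

fit = least_squares(residuals, x0=[10.0, 10.0, -40.0, 2.0])
print("Estimated AP position:", np.round(fit.x[:2], 2), "metres")
```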
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-01
Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point’s received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner. PMID:29385042
NASA Astrophysics Data System (ADS)
Li, Hui; Hong, Lu-Yao; Zhou, Qing; Yu, Hai-Jie
2015-08-01
The business failure of numerous companies results in financial crises. The high social costs associated with such crises have prompted the search for effective tools for business risk prediction, among which the support vector machine is very effective. Several modelling means, including single-technique modelling, hybrid modelling, and ensemble modelling, have been suggested for forecasting business risk with support vector machines. However, the existing literature seldom focuses on a general modelling frame for business risk prediction, and seldom investigates performance differences among different modelling means. We reviewed research on forecasting business risk with support vector machines, proposed the general assisted prediction modelling frame with hybridisation and ensemble (APMF-WHAE), and finally investigated the use of principal components analysis, support vector machines, random sampling, and group decision under the general frame in forecasting business risk. Under the APMF-WHAE frame with the support vector machine as the base predictive model, four specific predictive models were produced: a pure support vector machine, a hybrid support vector machine involving principal components analysis, a support vector machine ensemble involving random sampling and group decision, and an ensemble of hybrid support vector machines using group decision to integrate various hybrid support vector machines built on variables from principal components analysis and samples from random sampling. The experimental results indicate that the hybrid support vector machine and the ensemble of hybrid support vector machines produced better performance than the pure support vector machine and the support vector machine ensemble.
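A minimal sketch of two of the modelling means discussed above: a hybrid model (principal components analysis feeding a support vector machine) and an ensemble of such hybrids built on random resamples and combined by majority (group) decision. The data, component counts, and ensemble size are illustrative assumptions, not the APMF-WHAE settings; the BaggingClassifier `estimator` keyword assumes a recent scikit-learn release.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for financial ratios of healthy vs at-risk firms.
X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)

# Hybrid model: PCA for variable extraction, then an SVM.
hybrid_svm = make_pipeline(StandardScaler(), PCA(n_components=10),
                           SVC(kernel="rbf", C=1.0))

# Ensemble of hybrid SVMs: random sampling of training cases (bagging) with
# majority voting as the group decision.
ensemble = BaggingClassifier(estimator=hybrid_svm, n_estimators=25,
                             max_samples=0.8, random_state=0)

for name, model in [("Hybrid PCA+SVM", hybrid_svm), ("Ensemble of hybrids", ensemble)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f}")
```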
NASA Astrophysics Data System (ADS)
Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.
2000-10-01
A post-processing methodology for reconstructing undersampled image sequences with randomly varying blur is described which can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-compensated imagery taken by the Starfire Optical Range 3.5 meter telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques which includes a representation of spatial sampling by the focal plane array elements in the forward stochastic model of the imaging system. This generalization enables the random shifts and shape of the adaptive-optics-compensated PSF to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce resolution loss which occurs when imaging in wide FOV modes.
NASA Astrophysics Data System (ADS)
Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.
2002-09-01
We describe a postprocessing methodology for reconstructing undersampled image sequences with randomly varying blur that can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-(AO)-compensated imagery taken by the Starfire Optical Range 3.5-m telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that include a representation of spatial sampling by the focal plane array elements based on a forward stochastic model. This generalization enables the random shifts and shape of the AO-compensated point spread function (PSF) to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce resolution loss that occurs when imaging in wide-field-of-view (FOV) modes.
A compressed sensing X-ray camera with a multilayer architecture
NASA Astrophysics Data System (ADS)
Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.
2018-01-01
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
Prevalence of American foulbrood in asymptomatic apiaries of Kurdistan, Iran
Khezri, M.; Moharrami, M.; Modirrousta, H.; Torkaman, M.; Rokhzad, B.; Khanbabaie, H.
2018-01-01
Aim: Paenibacillus larvae subsp. larvae is the etiological agent of American foulbrood (AFB), the most virulent bacterial disease of honey bee brood worldwide. In many countries, AFB is a notifiable disease since it is highly contagious, in most cases incurable, and able to kill affected colonies. The aim of this study was to determine the prevalence of P. larvae subsp. larvae in Kurdistan province apiaries by polymerase chain reaction (PCR) technique. Materials and Methods: A total of 100 samples were randomly purchased from apiaries in Kurdistan, Iran. Apiaries were randomly sampled in accordance with the instructions of the veterinary organization from different provinces and were tested using PCR method and an exclusive primer of 16S rRNA for the presence of P. larvae subsp. larvae. Results: The results of this study indicated a low level of contamination with P. larvae subsp. larvae in the Kurdistan province. The number of positive samples obtained by PCR was 2%. Conclusion: Therefore, monitoring programs for this honeybee disease in Kurdistan should be developed and implemented to ensure that it is detected early and managed. PMID:29657417
Prevalence of American foulbrood in asymptomatic apiaries of Kurdistan, Iran.
Khezri, M; Moharrami, M; Modirrousta, H; Torkaman, M; Rokhzad, B; Khanbabaie, H
2018-03-01
Paenibacillus larvae subsp. larvae is the etiological agent of American foulbrood (AFB), the most virulent bacterial disease of honey bee brood worldwide. In many countries, AFB is a notifiable disease since it is highly contagious, in most cases incurable, and able to kill affected colonies. The aim of this study was to determine the prevalence of P. larvae subsp . larvae in Kurdistan province apiaries by polymerase chain reaction (PCR) technique. A total of 100 samples were randomly purchased from apiaries in Kurdistan, Iran. Apiaries were randomly sampled in accordance with the instructions of the veterinary organization from different provinces and were tested using PCR method and an exclusive primer of 16S rRNA for the presence of P. larvae subsp . larvae . The results of this study indicated a low level of contamination with P. larvae subsp . larvae in the Kurdistan province. The number of positive samples obtained by PCR was 2%. Therefore, monitoring programs for this honeybee disease in Kurdistan should be developed and implemented to ensure that it is detected early and managed.
Advantages of Synthetic Noise and Machine Learning for Analyzing Radioecological Data Sets.
Shuryak, Igor
2017-01-01
The ecological effects of accidental or malicious radioactive contamination are insufficiently understood because of the hazards and difficulties associated with conducting studies in radioactively-polluted areas. Data sets from severely contaminated locations can therefore be small. Moreover, many potentially important factors, such as soil concentrations of toxic chemicals, pH, and temperature, can be correlated with radiation levels and with each other. In such situations, commonly-used statistical techniques like generalized linear models (GLMs) may not be able to provide useful information about how radiation and/or these other variables affect the outcome (e.g. abundance of the studied organisms). Ensemble machine learning methods such as random forests offer powerful alternatives. We propose that analysis of small radioecological data sets by GLMs and/or machine learning can be made more informative by using the following techniques: (1) adding synthetic noise variables to provide benchmarks for distinguishing the performances of valuable predictors from irrelevant ones; (2) adding noise directly to the predictors and/or to the outcome to test the robustness of analysis results against random data fluctuations; (3) adding artificial effects to selected predictors to test the sensitivity of the analysis methods in detecting predictor effects; (4) running a selected machine learning method multiple times (with different random-number seeds) to test the robustness of the detected "signal"; (5) using several machine learning methods to test the "signal's" sensitivity to differences in analysis techniques. Here, we applied these approaches to simulated data, and to two published examples of small radioecological data sets: (I) counts of fungal taxa in samples of soil contaminated by the Chernobyl nuclear power plant accident (Ukraine), and (II) bacterial abundance in soil samples under a ruptured nuclear waste storage tank (USA). We show that the proposed techniques were advantageous compared with the methodology used in the original publications where the data sets were presented. Specifically, our approach identified a negative effect of radioactive contamination in data set I, and suggested that in data set II stable chromium could have been a stronger limiting factor for bacterial abundance than the radionuclides 137Cs and 99Tc. This new information, which was extracted from these data sets using the proposed techniques, can potentially enhance the design of radioactive waste bioremediation.
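A minimal sketch of technique (1), synthetic noise variables used as an importance benchmark, might look as follows; the predictors, sample size, and effect are simulated assumptions rather than the published data sets:

```python
# Sketch of technique (1): add synthetic noise variables to a random forest and use
# their importances as a benchmark for judging which real predictors carry signal.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)
n = 80                                                   # small data set, as in radioecology
radiation = rng.lognormal(0.0, 1.0, size=n)
ph = rng.normal(6.5, 0.5, size=n)
abundance = np.exp(-0.4 * np.log1p(radiation)) + rng.normal(0, 0.1, size=n)

noise = rng.standard_normal((n, 3))                      # three synthetic noise variables
X = np.column_stack([radiation, ph, noise])
names = ["radiation", "pH", "noise1", "noise2", "noise3"]

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, abundance)
for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:10s} importance = {imp:.3f}")          # real predictors should beat the noise benchmarks
```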
Chovanec, Zdenek; Veverkova, Lenka; Votava, Miroslav; Svoboda, Jiri; Jedlicka, Vaclav; Capov, Ivan
2014-12-01
A variety of methods exist to take samples from surgical site infections for cultivation; however, an unambiguous and suitable method has not yet been defined. The aim of our retrospective non-randomized study was to compare two non-invasive techniques of sampling material for microbiologic analysis in surgical practice. We compared bacteria cultured from samples obtained with the use of the swab technique, defined in our study as the gold standard, with the indirect imprint technique. A cotton-tipped swab (Copan, Brescia, Italy) was used; the imprints were taken using Whatman no. 4 filter paper (Macherey-Nagal, Duren, Germany) cut into 5×5 cm pieces placed on blood agar in a Petri dish. To culture the microorganisms in the microbiology laboratory, we used blood agar, UriSelect 4 medium (Bio-Rad, Marnes-la-Coquette, France), and a medium with sodium chloride (blood agar with salt). After careful debridement, a sample was taken from the incision surface by swab and subsequently the same area of the surface was imprinted onto filter paper. The samples were analyzed in the microbiology laboratory under standard safety precautions. The cultivation results of the two techniques were processed statistically using contingency tables and the McNemar test. Those samples that were simultaneously cultivation-positive by imprint and -negative by swabbing were processed in greater detail. Over the period between October 2008 and March 2013, 177 samples from 70 patients were analyzed. Sampling was carried out from 42 males and 28 females. One hundred forty-six samples were from incisions after operations (21 samples from six patients after operation on the thoracic cavity, 73 samples from 35 patients after operation on the abdominal cavity combined with the gastrointestinal tract, 52 samples from 19 patients with other surgical site infections not included above) and 31 samples from 11 patients with no post-operative infection. One patient had a sample taken both from a post-operative and a non-post-operative site. Coincidently, the most frequent cultivation finding with both techniques was a sterile one (imprint, 62; swab, 50). The microorganism cultivated most frequently after swabbing was Pseudomonas aeruginosa (22 cases), compared with Escherichia coli when the filter paper (imprint) was used (31 cases). The imprint technique was evaluated as more sensitive compared with swabbing (p=0.0001). The κ statistic used to evaluate the concordance between the two techniques was 0.302. Of the 177 samples there were 53 samples simultaneously sterile using the swab and positive in the imprint. In three samples colony- forming units (CFU) were not counted; 22 samples were within the limit of 0-25×10(1) CFU/cm(2), 20 samples within the limit of 25×10(1)-25×10(2) CFU/cm(2), five within the limit of 25×10(2)-25×10(3) CFU/cm(2), and three of more than 25×10(4) CFU/cm(2). The hypothesis of swabbing as a more precise technique was not confirmed. In our study the imprint technique was more sensitive than swabbing; the strength of agreement was fair. We obtained information not only on the type of the microorganism cultured, but also on the number of viable colonies, expressed in CFU/cm(2).
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-01-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
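The localized random sampling scheme itself is simple to sketch; the Gaussian decay of the inclusion probability and all parameter values below are illustrative assumptions rather than the paper's choices:

```python
# Sketch of localized random sampling: each measurement set starts from a uniformly
# chosen "center" pixel, and nearby pixels are included with a probability that
# decays with distance from that center.
import numpy as np

def localized_random_masks(shape, n_sets, radius=4.0, rng=None):
    rng = np.random.default_rng(rng)
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    masks = np.zeros((n_sets, h, w), dtype=bool)
    for k in range(n_sets):
        cy, cx = rng.integers(0, h), rng.integers(0, w)   # random center pixel
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        p = np.exp(-d2 / (2.0 * radius ** 2))             # inclusion probability vs. distance
        masks[k] = rng.random((h, w)) < p
        masks[k, cy, cx] = True                           # the center is always measured
    return masks

masks = localized_random_masks((64, 64), n_sets=100, radius=3.0, rng=0)
print("mean pixels measured per set:", masks.sum(axis=(1, 2)).mean())
```

Each localized measurement would then be a (weighted) sum of the image over its mask, and CS reconstruction from those measurements would proceed as usual.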
Henry, M J; Pasco, J A; Seeman, E; Nicholson, G C; Sanders, K M; Kotowicz, M A
2001-01-01
Fracture risk is determined by bone mineral density (BMD). The T-score, a measure of fracture risk, is the position of an individual's BMD in relation to a reference range. The aim of this study was to determine the magnitude of change in the T-score when different sampling techniques were used to produce the reference range. Reference ranges were derived from three samples, drawn from the same region: (1) an age-stratified population-based random sample, (2) unselected volunteers, and (3) a selected healthy subset of the population-based sample with no diseases or drugs known to affect bone. T-scores were calculated using the three reference ranges for a cohort of women who had sustained a fracture and as a group had a low mean BMD (ages 35-72 yr; n = 484). For most comparisons, the T-scores for the fracture cohort were more negative using the population reference range. The difference in T-scores reached 1.0 SD. The proportion of the fracture cohort classified as having osteoporosis at the spine was 26, 14, and 23% when the population, volunteer, and healthy reference ranges were applied, respectively. The use of inappropriate reference ranges results in substantial changes to T-scores and may lead to inappropriate management.
Random anisotropy model approach on ion beam sputtered Co20Cu80 granular alloy
NASA Astrophysics Data System (ADS)
Errahmani, H.; Hassanaı̈n, N.; Berrada, A.; Abid, M.; Lassri, H.; Schmerber, G.; Dinia, A.
2002-03-01
The Co20Cu80 granular film was prepared using the ion beam sputtering technique. The magnetic properties of the sample were studied in the temperature range 5-300 K at H ⩽ 50 kOe. From the thermomagnetisation curve, which is found to obey the Bloch law, we have extracted the spin-wave stiffness constant D and the exchange constant A. The experimental magnetic results have been interpreted in the framework of the random anisotropy model. We have determined the local anisotropy constant K_L and the local correlation length of the anisotropy axes R_a, which is compared with the experimental grain size obtained by transmission electron microscopy.
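For readers unfamiliar with this analysis, a commonly used form of the low-temperature fit implied here (prefactor conventions vary between references, so this is a sketch rather than the authors' exact expressions) is

\[
M(T) = M(0)\left[1 - B\,T^{3/2}\right], \qquad
B = \zeta\!\left(\tfrac{3}{2}\right)\frac{g\mu_B}{M(0)}\left(\frac{k_B}{4\pi D}\right)^{3/2},
\]

so the spin-wave stiffness D is obtained from the fitted Bloch constant B, and the exchange constant A then follows from a relation such as \(A = M(0)\,D / (2 g \mu_B)\).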
Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...
2016-12-01
Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
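A toy sketch of the planarity-constrained greedy selection step is given below. It covers only the graph-selection part; the paper's method additionally fits an optimal planar Ising model by exact planar inference at each step, which is omitted here, and the correlation-ordered greedy rule is an illustrative simplification:

```python
# Greedy selection of a planar graph from pairwise correlations: edges are
# considered in order of decreasing |correlation| and kept only if the graph
# remains planar (networkx provides the planarity test).
import numpy as np
import networkx as nx

def greedy_planar_graph(corr):
    """corr: symmetric matrix of pairwise correlations among binary variables."""
    n = corr.shape[0]
    pairs = [(abs(corr[i, j]), i, j) for i in range(n) for j in range(i + 1, n)]
    G = nx.empty_graph(n)
    for _, i, j in sorted(pairs, reverse=True):
        G.add_edge(i, j)
        is_planar, _ = nx.check_planarity(G)
        if not is_planar:
            G.remove_edge(i, j)        # adding this edge would break planarity
    return G

rng = np.random.default_rng(0)
X = np.sign(rng.standard_normal((500, 8)))      # toy sample of 8 binary (+/-1) variables
G = greedy_planar_graph(np.corrcoef(X, rowvar=False))
print(G.number_of_edges(), "edges selected; planar:", nx.check_planarity(G)[0])
```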
A Randomized Controlled Trial of the Group-Based Modified Story Memory Technique in TBI
2017-10-01
AWARD NUMBER: W81XWH-16-1-0726. TITLE: A Randomized Controlled Trial of the Group-Based Modified Story Memory Technique in TBI. The current study addresses this need through a double-blind, placebo-controlled, randomized clinical trial (RCT) of the group-based modified Story Memory Technique.
Peterman, Amber; Palermo, Tia M; Handa, Sudhanshu; Seidenfeld, David
2018-03-01
Social scientists have increasingly invested in understanding how to improve data quality and measurement of sensitive topics in household surveys. We utilize the technique of list randomization to collect measures of physical intimate partner violence in an experimental impact evaluation of the Government of Zambia's Child Grant Program. The Child Grant Program is an unconditional cash transfer, which targeted female caregivers of children under the age of 5 in rural areas to receive the equivalent of US $24 as a bimonthly stipend. The implementation results show that the list randomization methodology functioned as planned, with approximately 15% of the sample identifying 12-month prevalence of physical intimate partner violence. According to this measure, after 4 years, the program had no measurable effect on partner violence. List randomization is a promising approach to incorporate sensitive measures into multitopic evaluations; however, more research is needed to improve upon methodology for application to measurement of violence. Copyright © 2017 John Wiley & Sons, Ltd.
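The estimator behind list randomization is a simple difference in mean item counts between the list with and without the sensitive item; the following sketch uses simulated responses and illustrative parameters only:

```python
# List randomization (item count technique): respondents report only a count of
# "yes" items; the treatment list adds the sensitive item, so the prevalence
# estimate is the difference in mean counts between treatment and control.
import numpy as np

rng = np.random.default_rng(1)
n = 1500
true_prev = 0.15
base_items = rng.binomial(4, 0.4, size=(2, n))                   # counts over 4 non-sensitive items
control = base_items[0]
treatment = base_items[1] + rng.binomial(1, true_prev, size=n)   # plus the sensitive item

est = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / n + control.var(ddof=1) / n)
print(f"estimated prevalence: {est:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")
```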
NASA Technical Reports Server (NTRS)
Bekey, G. A.
1971-01-01
Studies are summarized on the application of advanced analytical and computational methods to the development of mathematical models of human controllers in multiaxis manual control systems. Specific accomplishments include the following: (1) The development of analytical and computer methods for the measurement of random parameters in linear models of human operators. (2) Discrete models of human operator behavior in a multiple display situation were developed. (3) Sensitivity techniques were developed which make possible the identification of unknown sampling intervals in linear systems. (4) The adaptive behavior of human operators following particular classes of vehicle failures was studied and a model structure proposed.
Scaling Techniques for Combustion Device Random Vibration Predictions
NASA Technical Reports Server (NTRS)
Kenny, R. J.; Ferebee, R. C.; Duvall, L. D.
2016-01-01
This work compares scaling techniques that can be used to predict combustion device component random vibration levels when the excitation is due to the internal combustion dynamics. Acceleration and unsteady dynamic pressure data from multiple component test programs are compared and normalized per the two scaling approaches reviewed. Two scaling techniques are reviewed and compared against the collected component test data. The first is an existing approach developed by Barrett, and the second is an updated approach new to this work. Results from applying both techniques are presented, and recommendations about future component random vibration prediction approaches are given.
Random vs. systematic sampling from administrative databases involving human subjects.
Hagino, C; Lo, R J
1998-09-01
Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was from the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes of n (50, 100, 150, 200, 250, 300, 500, 800). From the profile/characteristics, summaries of four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-methods chi 2 tests and unpaired t tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strengths of the agreements between the provincial distributions were quantified by calculating the percent agreements for each (provincial pairwise-comparison methods). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
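A small simulation conveys the comparison being made; the synthetic membership list and the 7-unit similarity criterion below merely stand in for the real registry:

```python
# Compare simple random sampling (SRS) with systematic sampling (SS) from an
# ordered membership list, checking whether the two sample means stay within the
# 7-year similarity criterion used above.
import numpy as np

rng = np.random.default_rng(2)
N = 5000
ages = rng.normal(45, 12, size=N)
ages = ages[np.argsort(rng.random(N))]      # stands in for alphabetical ordering by surname

def srs(values, n, rng):
    return values[rng.choice(len(values), size=n, replace=False)]

def systematic(values, n, rng):
    k = len(values) // n                    # sampling interval
    start = rng.integers(0, k)              # random start within the first interval
    return values[start::k][:n]

for n in (50, 100, 200, 500):
    diff = abs(srs(ages, n, rng).mean() - systematic(ages, n, rng).mean())
    print(f"n={n:4d}  |SRS mean - SS mean| = {diff:.2f} yr  (acceptable if <= 7 yr)")
```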
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1], that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
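The core of the multilevel Monte Carlo estimator is the telescoping sum over resolution levels, with many cheap coarse samples and few expensive fine ones. The following toy sketch uses a stand-in "solver" rather than an actual pore-scale simulation, so the model and sample counts are illustrative only:

```python
# Generic MLMC sketch: the quantity of interest P (e.g. a permeability estimate) is
# approximated at increasing resolution levels, and E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
# is estimated level by level, coupling fine and coarse evaluations on the same randomness.
import numpy as np

rng = np.random.default_rng(3)

def solver(level, omega):
    """Toy model: true value 1.0, discretization bias ~ 2^-level, sample noise from omega."""
    return 1.0 + omega + 0.5 * 2.0 ** (-level) * (1.0 + 0.1 * omega)

def mlmc(levels, n_per_level):
    total = 0.0
    for level, n in zip(levels, n_per_level):
        omegas = rng.normal(0.0, 0.2, size=n)             # shared randomness within a level
        fine = solver(level, omegas)
        coarse = solver(level - 1, omegas) if level > 0 else 0.0
        total += np.mean(fine - coarse)                   # correction term E[P_l - P_{l-1}]
    return total

print("MLMC estimate:", mlmc(levels=[0, 1, 2, 3], n_per_level=[4000, 1000, 250, 60]))
```

Because the corrections P_l - P_{l-1} have small variance, most samples can be taken at the cheapest level, which is where the computational savings over plain Monte Carlo come from.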
NASA Astrophysics Data System (ADS)
Sambeka, Yana; Nahadi, Sriyati, Siti
2017-05-01
The study aimed to obtain scientific information about the increase in students' concept mastery in project-based learning that used authentic assessment. The research was conducted in May 2016 at a junior high school in Bandung in the academic year 2015/2016. The research method was a weak experiment with a one-group pretest-posttest design. The sample of 24 students was taken by the random cluster sampling technique. Data were collected through instruments, i.e. a written test, an observation sheet, and a questionnaire sheet. The students' concept mastery test yielded an N-Gain of 0.236, in the low category. The result of a paired sample t-test showed that implementation of authentic assessment in project-based learning increased students' concept mastery significantly (sig < 0.05).
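Assuming "N-Gain" refers to the Hake normalized gain and that significance was assessed with a paired-sample t-test, the analysis can be sketched as follows on simulated scores (the data below are not the study's):

```python
# Hake normalized gain, g = (post - pre) / (100 - pre), averaged over the class,
# plus a paired-sample t-test on pretest vs. posttest scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
pre = rng.uniform(30, 60, size=24)                     # 24 students, percent scores
post = np.clip(pre + rng.normal(12, 8, size=24), 0, 100)

n_gain = np.mean((post - pre) / (100 - pre))           # class-average normalized gain
t, p = stats.ttest_rel(post, pre)
category = "low" if n_gain < 0.3 else "medium" if n_gain < 0.7 else "high"
print(f"N-Gain = {n_gain:.3f} ({category}), paired t-test p = {p:.4f}")
```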
Thermal Conductivity of Polyimide/Carbon Nanofiller Blends
NASA Technical Reports Server (NTRS)
Ghose, S.; Watson, K. A.; Delozier, D. M.; Working, D. C.; Connell, J. W.; Smith, J. G.; Sun, Y. P.; Lin, Y.
2007-01-01
In efforts to improve the thermal conductivity (TC) of Ultem(TM) 1000, it was compounded with three carbon based nano-fillers. Multiwalled carbon nanotubes (MWCNT), vapor grown carbon nanofibers (CNF) and expanded graphite (EG) were investigated. Ribbons were extruded to form samples in which the nano-fillers were aligned. Samples were also fabricated by compression molding in which the nano-fillers were randomly oriented. The thermal properties were evaluated by DSC and TGA, and the mechanical properties of the aligned samples were determined by tensile testing. The degree of dispersion and alignment of the nanoparticles were investigated with high-resolution scanning electron microscopy. The thermal conductivity of the samples was measured in both the direction of alignment as well as perpendicular to that direction using the Nanoflash technique. The results of this study will be presented.
Thermal Conductivity of Polyimide/Nanofiller Blends
NASA Technical Reports Server (NTRS)
Ghose, S.; Watson, K. A.; Delozier, D. M.; Working, D. C.; Connell, J. W.; Smith, J. G.; Sun, Y. P.; Lin, Y.
2006-01-01
In efforts to improve the thermal conductivity of Ultem(TM) 1000, it was compounded with three carbon based nano-fillers. Multiwalled carbon nanotubes (MWCNT), vapor grown carbon nanofibers (CNF) and expanded graphite (EG) were investigated. Ribbons were extruded to form samples in which the nano-fillers were aligned. Samples were also fabricated by compression molding in which the nano-fillers were randomly oriented. The thermal properties were evaluated by DSC and TGA, and the mechanical properties of the aligned samples were determined by tensile testing. The degree of dispersion and alignment of the nanoparticles were investigated with high-resolution scanning electron microscopy. The thermal conductivity of the samples was measured in both the direction of alignment as well as perpendicular to that direction using the Nanoflash technique. The results of this study will be presented.
Group investigation with scientific approach in mathematics learning
NASA Astrophysics Data System (ADS)
Indarti, D.; Mardiyana; Pramudya, I.
2018-03-01
The aim of this research is to determine the effect of the learning model on mathematics achievement. This is quasi-experimental research. The population is all grade VII students of Karanganyar regency in the academic year 2016/2017. The sample was taken using a stratified cluster random sampling technique. Data were collected with a mathematics achievement test. The data analysis used one-way ANOVA, preceded by a normality test with the Lilliefors method and a homogeneity test with the Bartlett method. The result is that mathematics learning using the Group Investigation model with a scientific approach produces better mathematics achievement than learning with the conventional model on the topic of quadrilaterals. The Group Investigation learning model with a scientific approach can therefore be used by teachers in mathematics learning, especially for quadrilaterals, to improve mathematics achievement.
Sayago, Ana; González-Domínguez, Raúl; Beltrán, Rafael; Fernández-Recamales, Ángeles
2018-09-30
This work explores the potential of multi-element fingerprinting in combination with advanced data mining strategies to assess the geographical origin of extra virgin olive oil samples. For this purpose, the concentrations of 55 elements were determined in 125 oil samples from multiple Spanish geographic areas. Several unsupervised and supervised multivariate statistical techniques were used to build classification models and investigate the relationship between mineral composition of olive oils and their provenance. Results showed that Spanish extra virgin olive oils exhibit characteristic element profiles, which can be differentiated on the basis of their origin in accordance with three geographical areas: Atlantic coast (Huelva province), Mediterranean coast and inland regions. Furthermore, statistical modelling yielded high sensitivity and specificity, principally when random forest and support vector machines were employed, thus demonstrating the utility of these techniques in food traceability and authenticity research. Copyright © 2018 Elsevier Ltd. All rights reserved.
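A schematic version of such a supervised workflow, with simulated element profiles standing in for the 55-element data, compares random forest and SVM classifiers by cross-validation:

```python
# Element concentrations as features, geographical area as the label; random forest
# and SVM compared by 5-fold cross-validation. The synthetic data are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=125, n_features=55, n_informative=12,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))

for name, model in [("random forest", rf), ("support vector machine", svm)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")
```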
Almansour, Mohammed; Sami, Waqas; Al-Rashedy, Oliyan Shoqer; Alsaab, Rayan Saad; Alfayez, Abdulrahman Saad; Almarri, Nawaf Rashed
2016-04-01
To determine the level of knowledge, attitude, and practice of food hygiene among primary, intermediate and high school students and to explore associations, if any, with socio-demographic differences. The observational cross-sectional study was conducted at boys' schools in Majmaah, Kingdom of Saudi Arabia, from February to May 2014. Data were collected using a stratified random sampling technique from students aged 8-25 years. Two schools from each level (primary, intermediate and high school) were randomly selected, and data were collected from the selected schools using a simple random sampling method. A self-administered modified Sharif and Al-Malki questionnaire for knowledge, attitude and practice of food hygiene was used with Arabic translation. The mean age of the 377 male students in the study was 14.53±2.647 years. Knowledge levels were lower in primary school students than in high school students (p=0.026). Attitude levels were higher in primary school students than in intermediate school students (p<0.001). No significant difference was observed between groups with regard to practice levels (p=0.152). The students exhibited good practice levels, despite only fair knowledge and attitude levels.
NASA Astrophysics Data System (ADS)
Rudman, Reuben
1999-06-01
Wiley-VCH: New York, 1998. xxiv + 333 pp. ISBN 0-471-19458-1. $79.95. I would have subtitled this book "All You Ever Wanted To Know about ...Sample Preparation". Although its principal thrust is geared towards the analytical chemist in an X-ray diffraction (XRD) or X-ray fluorescence (XRF) service laboratory, this text will be of use primarily as a reference source in all milieus dealing with undergraduate research projects and advanced laboratory courses in physical and analytical chemistry. It contains dozens of suggestions for preparing randomly oriented small samples of nearly anything. For example, rocks and minerals, soft organics and hard ceramics, radioactive and liquid materials, metals and oils are all treated. As the availability of XRD and XRF equipment has increased, so has the use of these techniques in the teaching schedule. Many undergraduate laboratory and research projects utilizing these methods have been described in the literature and are found in laboratory textbooks. Very often, especially with the increasingly common use of automated computer-controlled instrumentation, sample preparation has become the key experimental technique required for successful data collection. However, it is not always easy to prepare the statistically random distribution of small particles (crystallites) that is required by these methods. A multitude of techniques have been developed over the past 70 years, but many of them have been handed down by word of mouth or are scattered throughout the literature. This book represents an attempt to systematically describe the theory and practice of sample preparation. This excellent guide to the intricacies of sample preparation begins with a description of statistical sampling methods and the principles of grinding techniques. After a discussion of XRF specimen preparation, which includes pressing pellets, fusion methods, crucible selection and handling very small samples, detailed descriptions for handling rocks, minerals, cements, metals, oils, and vegetation [sic] are given. The preparation of XRD samples is described for various diffraction equipment geometries (utilizing both counter and film detectors), including specific information regarding the use of flat specimens and slurries, the use of internal standards, and the effects of crystallite size on the diffraction pattern. Methods for handling ceramics, clays, zeolites, air-sensitive samples, thin films, and plastics are described, along with the special handling requirements for materials to be studied by high-pressure, high-temperature, or low-temperature techniques. One whole chapter is devoted to the equipment used in specimen preparation, including grinders, pulverizers, presses, specimen holders, repair of platinumware, and sources of all types of special equipment. Did you ever want to know where to get a Plattner steel mortar or a micronizing mill or soft-glass capillary tubes with 0.01-mm wall thickness? It's all here in this monograph. The book ends with a good glossary of terms, a general bibliography in addition to the extensive list of references following each of its 9 chapters, and an index. It will be of help in many areas of spectroscopy and analytical chemistry, as well as in XRD and XRF analyses.
Gao, Shutao; Lv, Zhengtao; Fang, Huang
2018-04-01
Several studies have revealed that robot-assisted technique might improve the pedicle screw insertion accuracy, but owing to the limited sample sizes in the individual study reported up to now, whether or not robot-assisted technique is superior to conventional freehand technique is indefinite. Thus, we performed this systematic review and meta-analysis based on randomized controlled trials to assess which approach is better. Electronic databases including PubMed, EMBASE, CENTRAL, ISI Web of Science, CNKI and WanFang were systematically searched to identify potentially eligible articles. Main endpoints containing the accuracy of pedicle screw implantation and proximal facet joint violation were evaluated as risk ratio (RR) and the associated 95% confidence intervals (95% CIs), while radiation exposure and surgical duration were presented as mean difference (MD) or standard mean difference (SMD). Meta-analyses were performed using RevMan 5.3 software. Six studies involving 158 patients (688 pedicle screws) in robot-assisted group and 148 patients (672 pedicle screws) in freehand group were identified matching our study. The Grade A accuracy rate in robot-assisted group was superior to freehand group (RR 1.03, 95% CI 1.00, 1.06; P = 0.04), but the Grade A + B accuracy rate did not differ between the two groups (RR 1.01, 95% CI 0.99, 1.02; P = 0.29). With regard to proximal facet joint violation, the combined results suggested that robot-assisted group was associated with significantly fewer proximal facet joint violation than freehand group (RR 0.07, 95% CI 0.01, 0.55; P = 0.01). As was the radiation exposure, our findings suggested that robot-assisted technique could significantly reduce the intraoperative radiation time (MD - 12.38, 95% CI - 17.95, - 6.80; P < 0.0001) and radiation dosage (SMD - 0.64, 95% CI - 0.85, - 0.43; P < 0.00001). But the overall surgical duration was longer in robot-assisted group than conventional freehand group (MD 20.53, 95% CI 5.17, 35.90; P = 0.009). The robot-assisted technique was associated with equivalent accuracy rate of pedicle screw implantation, fewer proximal facet joint violation, less intraoperative radiation exposure but longer surgical duration than freehand technique. Powerful evidence relies on more randomized controlled trials with high quality and larger sample size in the future.
Image analysis of oronasal fistulas in cleft palate patients acquired with an intraoral camera.
Murphy, Tania C; Willmot, Derrick R
2005-01-01
The aim of this study was to examine the clinical technique of using an intraoral camera to monitor the size of residual oronasal fistulas in cleft lip-cleft palate patients, to assess its repeatability on study casts and patients, and to compare its use with other methods. Seventeen plaster study casts of cleft palate patients with oronasal fistulas obtained from a 5-year series of 160 patients were used. For the clinical study, 13 patients presenting in a clinic prospectively over a 1-year period were imaged twice by the camera. The area of each fistula on each study cast was measured in the laboratory first using a previously described graph paper and caliper technique and second with the intraoral camera. Images were imported into a computer and subjected to image enhancement and area measurement. The camera was calibrated by imaging a standard periodontal probe within the fistula area. The measurements were repeated using a double-blind technique on randomly renumbered casts to assess the repeatability of measurement of the methods. The clinical images were randomly and blindly numbered and subjected to image enhancement and processing in the same way as for the study casts. Area measurements were computed. Statistical analysis of repeatability of measurement using a paired sample t test showed no significant difference between measurements, indicating a lack of systematic error. An intraclass correlation coefficient of 0.97 for the graph paper and 0.84 for the camera method showed acceptable random error between the repeated records for each of the two methods. The graph paper method remained slightly more repeatable. The mean fistula area of the study casts between each method was not statistically different when compared with a paired samples t test (p = 0.08). The methods were compared using the limits of agreement technique, which showed clinically acceptable repeatability. The clinical study of repeated measures showed no systematic differences when subjected to a t test (p = 0.109) and little random error with an intraclass correlation coefficient of 0.98. The fistula size seen in the clinical study ranged from 18.54 to 271.55 mm. Direct measurements subsequently taken on 13 patients in the clinic without study models showed a wide variation in the size of residual fistulas presenting in a multidisciplinary clinic. It was concluded that an intraoral camera method could be used in place of the previous graph paper method and could be developed for clinical and scientific purposes. This technique may offer advantages over the graph paper method, as it facilitates easy visualization of oronasal fistulas and objective fistulas size determination and permits easy storage of data in clinical records.
Gas chromatographic concepts for the analysis of planetary atmospheres
NASA Technical Reports Server (NTRS)
Valentin, J. R.; Cullers, D. K.; Hall, K. W.; Krekorian, R. L.; Phillips, J. B.
1991-01-01
Over the last few years, new gas chromatographic (GC) concepts were developed for use on board spacecraft or any other restricted environments for determining the chemical composition of the atmosphere and surface material of various planetary bodies. Future NASA Missions include an entry probe that will be sent to Titan and various spacecraft that will land on Mars. In order to be able to properly respond to the mission science requirements and physical restrictions imposed on the instruments by these missions, GC analytical techniques are being developed. Some of these techniques include hardware and mathematical techniques that will improve GC sensitivity and increase the sampling rate of a GC descending through a planetary atmosphere. The technique of Multiplex Gas Chromatography (MGC) is an example of a technique that was studied in a simulated Titan atmosphere. In such an environment, the atmospheric pressure at instrument deployment is estimated to be a few torr. Thus, at such pressures, the small amount of sample that is acquired might not be enough to satisfy the detection requirements of the gas chromatograph. In MGC, many samples are pseudo-randomly introduced to the chromatograph without regard to elution of preceding components. The resulting data is then reduced using mathematical techniques such as cross-correlation of Fourier Transforms. Advantages realized from this technique include: improvement in detection limits of several orders of magnitude and increase in the number of analyses that can be conducted in a given period of time. Results proving the application of MGC at very low pressures emulating the same atmospheric pressures that a Titan Probe will encounter when the instruments are deployed are presented. The sample used contained hydrocarbons that are expected to be found in Titan's atmosphere. In addition, a new selective modulator was developed to monitor water under Martian atmospheric conditions. Since this modulator is selective only to water, the need for a GC column is eliminated. This results in further simplification of the instrument.
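The recovery step in multiplex (correlation) chromatography can be sketched numerically: the detector signal is the circular convolution of the pseudo-random injection sequence with the single-injection chromatogram, and cross-correlating the output with the zero-mean injection sequence recovers the chromatogram. The peak shapes, sequence, and noise level below are illustrative only; a real instrument would typically use a maximal-length (m-) sequence for exact recovery:

```python
# Multiplex GC sketch: pseudo-random injections, overlapped detector response, and
# recovery of the single-injection chromatogram by circular cross-correlation.
import numpy as np

rng = np.random.default_rng(5)
L = 1023                                               # record length (samples)
t = np.arange(L)
chromatogram = (np.exp(-0.5 * ((t - 120) / 6.0) ** 2) +
                0.6 * np.exp(-0.5 * ((t - 300) / 10.0) ** 2))   # two Gaussian peaks

inj = rng.integers(0, 2, size=L).astype(float)         # pseudo-random 0/1 injection pattern
detector = np.real(np.fft.ifft(np.fft.fft(inj) * np.fft.fft(chromatogram)))
detector += rng.normal(0, 0.05, size=L)                # detector noise

inj0 = inj - inj.mean()                                # zero-mean reference sequence
recovered = np.real(np.fft.ifft(np.fft.fft(detector) * np.conj(np.fft.fft(inj0))))
recovered /= np.sum(inj0 * inj0)
print("correlation with true chromatogram:", np.corrcoef(recovered, chromatogram)[0, 1])
```

Because many injections contribute to every detector sample, the recovered chromatogram has a signal-to-noise advantage over a single low-pressure injection, which is the benefit claimed above.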
Sampling And Resolution Enhancement Techniques For The Infrared Analysis Of Adsorbed Proteins.
NASA Astrophysics Data System (ADS)
Fuller, Michael P.; Singh, Bal R.
1989-12-01
In this report, we have analyzed the secondary structures of the dichain form of tetanus neurotoxin using FT-IR and circular dichroic spectroscopies for α-helix, β-sheets, β-turns and random coils. These results indicate that the secondary structures are significantly different from those reported in earlier studies, in that they show a much higher content of ordered structures (~50%), which could be significant for the function of the neurotoxin.
ERIC Educational Resources Information Center
Maina, Ndonga James; Orodho, John Aluko
2016-01-01
The thrust of this study was to examine the level of adequacy of current sources in facilitating access and participation in adult education centres in Murang'a South Sub-County, Murang'a County, Kenya. The study adopted the descriptive survey design. Combinations of purposive and stratified random sampling techniques were used to select 82…
Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.
Sheppard, C W.
1969-03-01
A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
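A minimal version of such a model-sampling program, comparing the walkers' mean-squared displacement with the diffusion-theory prediction, might look like this (the walk parameters are illustrative):

```python
# One-dimensional random-walk Monte Carlo: many walkers take unit steps, and the
# simulated mean-squared displacement is compared with <x^2> = 2 D t, where
# D = step^2 / (2*dt) = 1/2 for this discrete walk in these units.
import numpy as np

rng = np.random.default_rng(6)
n_walkers, n_steps = 5000, 200
steps = rng.choice([-1.0, 1.0], size=(n_walkers, n_steps))
positions = np.cumsum(steps, axis=1)

t = np.arange(1, n_steps + 1)
msd_sim = np.mean(positions ** 2, axis=0)
msd_theory = t * 1.0                        # 2*D*t with D = 1/2
print("max relative deviation from theory:", np.max(np.abs(msd_sim - msd_theory) / msd_theory))
```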
A review of Indian research on cognitive remediation for schizophrenia.
Hegde, Shantala
2017-02-01
Cognitive deficits play a central role in recovery from Schizophrenia (SZ). Cognitive remediation (CR) is increasingly being examined to improve cognitive functions in SZ. It is becoming an inevitable component of treatment for this debilitating illness. This review article presents the current status of research on CR for SZ in India. In contrast to the numerous studies reported from across the globe, there are only five studies on CR for SZ published from India. Of the five, only two are randomized controlled trials, two are non-randomized studies and one is a series of case reports. With different strategies used for CR and a variety of tools and measurements as outcome measures, combined analysis of the data was not feasible. Improvement in cognitive functions and sustenance of the improvement observed at follow-up period ranging from 2 to 6 months duration was underscored by all the four studies. Indigenous methods such as home-based CR techniques and Yoga therapy as an adjunct CR technique have been researched upon. Established method of CR such as the Integrated Psychological Therapy (IPT) has been used with modifications made to suit the cultural scenario. Other treatment methods such as family therapy have been added along with CR for chronic patients. The limited number of research studies has tried to encompass various dimensions. However, there is a dire need for studies with larger sample size with stringent research methods. Culturally feasible CR technique and multi-centric studies with larger sample size can be the next way forward. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1982-01-01
An on line technique is presented for the identification of rotor blade modal damping and frequency from rotorcraft random response test data. The identification technique is based upon a recursive maximum likelihood (RML) algorithm, which is demonstrated to have excellent convergence characteristics in the presence of random measurement noise and random excitation. The RML technique requires virtually no user interaction, provides accurate confidence bands on the parameter estimates, and can be used for continuous monitoring of modal damping during wind tunnel or flight testing. Results are presented from simulation random response data which quantify the identified parameter convergence behavior for various levels of random excitation. The data length required for acceptable parameter accuracy is shown to depend upon the amplitude of random response and the modal damping level. Random response amplitudes of 1.25 degrees to .05 degrees are investigated. The RML technique is applied to hingeless rotor test data. The inplane lag regressing mode is identified at different rotor speeds. The identification from the test data is compared with the simulation results and with other available estimates of frequency and damping.
Long-Term Maintenance of Pharmacists' Inhaler Technique Demonstration Skills
Armour, Carol L; Reddel, Helen K; Bosnic-Anticevich, Sinthia Z
2009-01-01
Objective To assess the effectiveness of a single educational intervention, followed by patient education training, in pharmacists retaining their inhaler technique skills. Methods A convenience sample of 31 pharmacists attended an educational workshop and their inhaler techniques were assessed. Those randomly assigned to the active group were trained to assess and teach correct Turbuhaler and Diskus inhaler techniques to patients and provided with patient education tools to use in their pharmacies during a 6-month study. Control pharmacists delivered standard care. All pharmacists were reassessed 2 years after initial training. Results Thirty-one pharmacists participated in the study. At the initial assessment, few pharmacists demonstrated correct technique (Turbuhaler:13%, Diskus:6%). All pharmacists in the active group demonstrated correct technique following training. Two years later, pharmacists in the active group demonstrated significantly better inhaler technique than pharmacists in the control group (p < 0.05) for Turbuhaler and Diskus (83% vs.11%; 75% vs.11%, respectively). Conclusion Providing community pharmacists with effective patient education tools and encouraging their involvement in educating patients may contribute to pharmacists maintaining their competence in correct inhaler technique long-term. PMID:19513170
NASA Technical Reports Server (NTRS)
Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.
1993-01-01
Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategy (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
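The error-estimation idea, two independent (morning and afternoon) estimates of the same monthly mean whose differences reveal the random error, can be sketched with simulated values; the rain statistics and noise model below are assumptions for illustration:

```python
# Two independent, equally noisy estimates of each monthly mean: the spread of their
# differences gives the random error of a single estimate, and averaging the two
# reduces the error by a further factor of sqrt(2).
import numpy as np

rng = np.random.default_rng(7)
true_rain = rng.gamma(2.0, 1.5, size=500)               # "true" monthly means for 5x5 deg boxes
am = true_rain * rng.lognormal(0.0, 0.4, size=500)      # morning estimate
pm = true_rain * rng.lognormal(0.0, 0.4, size=500)      # afternoon estimate

single_err = np.std(am - pm) / np.sqrt(2)               # random error of one estimate
combined_err = single_err / np.sqrt(2)                  # error of the (am+pm)/2 average
print(f"single-estimate error: {single_err:.2f}, combined error: {combined_err:.2f}")
```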
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, were not studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
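The IST estimator itself, together with a Neyman-type split of the sample between the long-list and short-list groups, can be sketched as follows. The rule n_long/n = S_long/(S_long + S_short) is the standard minimum-variance allocation for a difference of two independent means, and the population values are simulated rather than taken from the cited surveys:

```python
# Item sum technique: the long-list group reports only the sum of non-sensitive
# items plus the sensitive quantity, the short-list group reports the non-sensitive
# sum alone, and the difference in group means estimates the sensitive mean.
import numpy as np

rng = np.random.default_rng(8)
N = 2000
nonsensitive = rng.poisson(5.0, size=N).astype(float)    # sum of innocuous items
sensitive = rng.gamma(2.0, 2.0, size=N)                  # e.g. units of cannabis use

def ist_estimate(n, frac_long):
    idx = rng.permutation(N)[:n]
    n_long = int(round(n * frac_long))
    long_grp, short_grp = idx[:n_long], idx[n_long:]
    long_sum = nonsensitive[long_grp] + sensitive[long_grp]   # respondents report only this sum
    short_sum = nonsensitive[short_grp]
    return long_sum.mean() - short_sum.mean()

s_long = np.std(nonsensitive + sensitive, ddof=1)
s_short = np.std(nonsensitive, ddof=1)
frac_opt = s_long / (s_long + s_short)                   # Neyman-type optimal share for the long list
print("optimal long-list share:", round(frac_opt, 2))
print("IST estimate of sensitive mean:", round(ist_estimate(600, frac_opt), 2),
      "(true:", round(sensitive.mean(), 2), ")")
```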
Okoyo, Collins; Simiyu, Elses; Njenga, Sammy M; Mwandawiro, Charles
2018-04-11
Kato-Katz technique has been the mainstay test in Schistosoma mansoni diagnosis in endemic areas. However, recent studies have documented its poor sensitivity in evaluating Schistosoma mansoni infection especially in areas with lower rates of transmission. It's the primary diagnostic tool in monitoring impact of the Kenya national school based deworming program on infection transmission, but there is need to consider a more sensitive technique as the prevalence reduces. Therefore, this study explored the relationship between results of the stool-based Kato-Katz technique with urine-based point-of-care circulating cathodic antigen (POC-CCA) test in view to inform decision-making by the program in changing from Kato-Katz to POC-CCA test. We used two cross-sectional surveys conducted pre- and post- mass drug administration (MDA) using praziquantel in a representative random sample of children from 18 schools across 11 counties. A total of 1944 children were randomly sampled for the study. Stool and urine samples were tested for S. mansoni infection using Kato-Katz and POC-CCA methods, respectively. S. mansoni prevalence using each technique was calculated and 95% confidence intervals obtained using binomial regression model. Specificity (Sp) and sensitivity (Sn) were determined using 2 × 2 contingency tables and compared using the McNemar's chi-square test. A total of 1899 and 1878 children were surveyed at pre- and post-treatment respectively. S. mansoni infection prevalence was 26.5 and 21.4% during pre- and post-treatment respectively using POC-CCA test, and 4.9 and 1.5% for pre- and post-treatment respectively using Kato-Katz technique. Taking POC-CCA as the gold standard, Kato-Katz was found to have significantly lower sensitivity both at pre- and post-treatment, Sn = 12.5% and Sn = 5.2% respectively, McNemar test χ 2 m = 782.0, p < 0.001. In overall, the results showed a slight/poor agreement between the two methods, kappa index (k) = 0.11, p < 0.001, inter-rater agreement = 77.1%. Results showed POC-CCA technique as an effective, sensitive and accurate screening tool for Schistosoma mansoni infection in areas of low prevalence. It was up to 14-fold accurate than Kato-Katz which had extremely inadequate sensitivity. We recommend usage of POC-CCA alongside Kato-Katz examinations by Schistosomiasis control programs in low prevalence areas.
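With POC-CCA taken as the reference standard, sensitivity, specificity, overall agreement, and Cohen's kappa all come from the same 2x2 table; the cell counts below are illustrative, not the study's data:

```python
# 2x2 comparison of an index test (Kato-Katz) against a reference (POC-CCA),
# giving sensitivity, specificity, overall agreement, and Cohen's kappa.
import numpy as np

# rows: Kato-Katz (+, -); columns: POC-CCA (+, -)
table = np.array([[60, 15],
                  [440, 1384]], dtype=float)

tp, fp = table[0]
fn, tn = table[1]
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

n = table.sum()
po = (tp + tn) / n                                             # observed agreement
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
kappa = (po - pe) / (1 - pe)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"agreement={po:.2f} kappa={kappa:.2f}")
```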
Flight test evaluation of predicted light aircraft drag, performance, and stability
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Fox, S. R.
1979-01-01
A technique was developed which permits simultaneous extraction of complete lift, drag, and thrust power curves from time histories of a single aircraft maneuver, such as a pull-up (from V_max to V_stall) and pushover (back to V_max for level flight). The technique, which extends the parameter identification methods of Iliff and Taylor to nonlinear equations of motion and includes provisions for improving internal data compatibility as well, was shown to be capable of correcting random errors in the most sensitive data channel and yielding highly accurate results. Flow charts, listings, and sample inputs and outputs for the relevant routines are provided as appendices. The technique was applied to flight data taken on the ATLIT aircraft. Lack of adequate knowledge of the correct full-throttle thrust horsepower versus true airspeed variation, together with considerable internal data inconsistency, made it impossible to apply the trajectory-matching features of the technique.
Kumar, Ajay; Kaur, Manpreet; Mehra, Rohit; Sharma, Dinesh Kumar; Mishra, Rosaline
2017-10-01
The level of radon concentration has been assessed using the Advanced SMART RnDuo technique in 30 drinking water samples from Jammu district, Jammu and Kashmir, India. The water samples were collected from wells, hand pumps, submersible pumps, and stored waters. The randomly obtained 14 values of radon concentration in water sources using the SMART RnDuo technique have been compared and cross-checked with a RAD7 device. A good positive correlation (R = 0.88) has been observed between the two techniques. The overall value of radon concentration in the various water sources ranged from 2.45 to 18.43 Bq L⁻¹, with a mean value of 8.24 ± 4.04 Bq L⁻¹, and it agreed well with the recommended limit suggested by the European Commission and UNSCEAR. However, higher mean radon concentrations were found in groundwater drawn from wells, hand pumps and submersible pumps than in stored water. The total annual effective dose due to radon inhalation and ingestion ranged from 6.69 to 50.31 μSv y⁻¹ with a mean value of 22.48 ± 11.03 μSv y⁻¹. The total annual effective dose was found to lie within the safe limit (100 μSv y⁻¹) suggested by WHO. Heavy metal analysis was also carried out in the various water sources using an atomic absorption spectrophotometer (AAS), and the highest values of heavy metals were found mostly in groundwater samples. The obtained results were compared with limits from Indian and international organizations such as WHO and the EU Council. In all samples, the elemental concentrations did not exceed the permissible limits.
Ozone measurement system for NASA global air sampling program
NASA Technical Reports Server (NTRS)
Tiefermann, M. W.
1979-01-01
The ozone measurement system used in the NASA Global Air Sampling Program is described. The system uses a commercially available ozone concentration monitor that was modified and repackaged so as to operate unattended in an aircraft environment. The modifications required for aircraft use are described along with the calibration techniques, the measurement of ozone loss in the sample lines, and the operating procedures that were developed for use in the program. Based on calibrations with JPL's 5-meter ultraviolet photometer, all previously published GASP ozone data are biased high by 9 percent. A system error analysis showed that the total system measurement random error is from 3 to 8 percent of reading (depending on the pump diaphragm material) or 3 ppbv, whichever is greater.
Raina, Sunil Kumar; Mengi, Vijay; Singh, Gurdeep
2012-07-01
Breast feeding is universally and traditionally practised in India. Experts advocate breast feeding as the best method of feeding young infants. To assess the role of various factors in determining colostrum feeding in block R. S. Pura of district Jammu. A stratified two-stage design was used, with villages as the primary sampling unit and lactating mothers as the secondary sampling unit. Villages were divided into different clusters on the basis of population, and sampling units were selected by a simple random technique. Breastfeeding is almost universal in R. S. Pura. Differentials in discarding the first milk were not found to be important among various socioeconomic groups, and the phenomenon appeared more general than specific.
Stable and efficient retrospective 4D-MRI using non-uniformly distributed quasi-random numbers
NASA Astrophysics Data System (ADS)
Breuer, Kathrin; Meyer, Cord B.; Breuer, Felix A.; Richter, Anne; Exner, Florian; Weng, Andreas M.; Ströhle, Serge; Polat, Bülent; Jakob, Peter M.; Sauer, Otto A.; Flentje, Michael; Weick, Stefan
2018-04-01
The purpose of this work is the development of a robust and reliable three-dimensional (3D) Cartesian imaging technique for fast and flexible retrospective 4D abdominal MRI during free breathing. To this end, a non-uniform quasi-random (NU-QR) reordering of the phase encoding (k_y–k_z) lines was incorporated into the 3D Cartesian acquisition. The proposed sampling scheme allocates more phase encoding points near the k-space origin while reducing the sampling density in the outer part of the k-space. Respiratory self-gating in combination with SPIRiT reconstruction is used for the reconstruction of abdominal data sets in different respiratory phases (4D-MRI). Six volunteers and three patients were examined at 1.5 T during free breathing. Additionally, data sets with conventional two-dimensional (2D) linear and 2D quasi-random phase encoding order were acquired for the volunteers for comparison. A quantitative evaluation of image quality versus scan times (from 70 s to 626 s) for the given sampling schemes was obtained by calculating the normalized mutual information (NMI) for all volunteers. Motion estimation was accomplished by calculating the maximum derivative of a signal intensity profile across a transition (e.g. tumor or diaphragm). The 2D non-uniform quasi-random distribution of phase encoding lines in Cartesian 3D MRI yields more efficient undersampling patterns for parallel imaging compared to conventional uniform quasi-random and linear sampling. Median NMI values of NU-QR sampling are the highest for all scan times; therefore, within the same scan time 4D imaging could be performed with improved image quality. The proposed method allows for the reconstruction of motion-artifact-reduced 4D data sets with an isotropic spatial resolution of 2.1 × 2.1 × 2.1 mm³ in a short scan time, e.g. 10 respiratory phases in only 3 min. Cranio-caudal tumor displacements between 23 and 46 mm could be observed. NU-QR sampling enables stable 4D-MRI with high temporal and spatial resolution within a short scan time for visualization of organ or tumor motion during free breathing. Further studies, e.g. the application of the method to radiotherapy planning, are needed to investigate the clinical applicability and diagnostic value of the approach.
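The following is a minimal sketch of one way to build a centre-weighted quasi-random (k_y, k_z) phase-encoding pattern of the kind described above; the Halton sequence, the power-law warping and all matrix sizes are illustrative assumptions, not the authors' actual scheme.

```python
import numpy as np

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def nu_qr_pattern(n_lines, n_ky=128, n_kz=64, power=3.0, start=1):
    """Return (ky, kz) phase-encoding indices from a 2D Halton sequence, warped
    so that the sampling density is highest near the k-space centre.
    `power` > 1 pulls points towards the centre (illustrative choice)."""
    ky, kz = [], []
    for i in range(start, start + n_lines):
        u, v = halton(i, 2), halton(i, 3)           # quasi-random pair in [0,1)^2
        y = np.sign(2 * u - 1) * abs(2 * u - 1) ** power   # centre-weighted in [-1,1)
        z = np.sign(2 * v - 1) * abs(2 * v - 1) ** power
        ky.append(int(round(y * (n_ky // 2 - 1))) + n_ky // 2)
        kz.append(int(round(z * (n_kz // 2 - 1))) + n_kz // 2)
    return np.array(ky), np.array(kz)

ky, kz = nu_qr_pattern(n_lines=2000)
print("unique phase-encoding lines:", len(set(zip(ky.tolist(), kz.tolist()))))
```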
O'Hara, R P; Palazotto, A N
2012-12-01
To properly model the structural dynamics of the forewing of the Manduca sexta species, it is critical that the material and structural properties of the biological specimen be understood. This paper presents the results of a morphological study that has been conducted to identify the material and structural properties of a sample of male and female Manduca sexta specimens. The average mass, area, shape, size and camber of the wing were evaluated using novel measurement techniques. Further emphasis is placed on studying the critical substructures of the wing: venation and membrane. The venation cross section is measured using detailed pathological techniques over the entire venation of the wing. The elastic modulus of the leading edge veins is experimentally determined using advanced non-contact structural dynamic techniques. The membrane elastic modulus is randomly sampled over the entire wing to determine global material properties for the membrane using nanoindentation. The data gathered from this morphological study form the basis for the replication of future finite element structural models and engineered biomimetic wings for use with flapping wing micro air vehicles.
Bingi, Jayachandra; Murukeshan, Vadakke Matham
2015-12-18
Laser speckle pattern is a granular structure formed due to random coherent wavelet interference and is generally considered as noise in optical systems, including photolithography. Contrary to this, in this paper we use the speckle pattern to generate predictable and controlled Gaussian random structures and quasi-random structures photo-lithographically. The random structures made using this proposed speckle lithography technique are quantified based on speckle statistics, the radial distribution function (RDF) and the fast Fourier transform (FFT). The control over the speckle size, density and speckle clustering facilitates the successful fabrication of black silicon with different surface structures. The controllability and tunability of randomness make this technique a robust method for fabricating predictable 2D Gaussian random structures and black silicon structures. These structures can enhance light trapping significantly in solar cells and hence enable improved energy harvesting. Further, this technique can enable efficient fabrication of disordered photonic structures and random-media-based devices.
NASA Astrophysics Data System (ADS)
Peters, Aaron; Brown, Michael L.; Kay, Scott T.; Barnes, David J.
2018-03-01
We use a combination of full hydrodynamic and dark matter only simulations to investigate the effect that supercluster environments and baryonic physics have on the matter power spectrum, by re-simulating a sample of supercluster sub-volumes. On large scales we find that the matter power spectrum measured from our supercluster sample has at least twice as much power as that measured from our random sample. Our investigation of the effect of baryonic physics on the matter power spectrum is found to be in agreement with previous studies and is weaker than the selection effect over the majority of scales. In addition, we investigate the effect of targeting a cosmologically non-representative, supercluster region of the sky on the weak lensing shear power spectrum. We do this by generating shear and convergence maps using a line-of-sight integration technique, which intercepts our random and supercluster sub-volumes. We find the convergence power spectrum measured from our supercluster sample has a larger amplitude than that measured from the random sample at all scales. We frame our results within the context of the Super-CLuster Assisted Shear Survey (Super-CLASS), which aims to measure the cosmic shear signal in the radio band by targeting a region of the sky that contains five Abell clusters. Assuming the Super-CLASS survey will have a source density of 1.5 galaxies arcmin⁻², we forecast a detection significance of 2.7 (+1.5, −1.2), which indicates that in the absence of systematics the Super-CLASS project could make a cosmic shear detection with radio data alone.
Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui
2016-06-01
Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease, sleep problems and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, a simple random sampling (SRS) technique is used to extract features from the time domain of the EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS_SVM) classifier, which classifies the EEG signals from the features extracted and selected by the SRS and SFS steps. The experimental results show that the method achieves 99.90%, 99.80% and 100% for classification accuracy, sensitivity and specificity, respectively.
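A rough analogue of the SRS → SFS → classifier pipeline can be sketched with scikit-learn on synthetic epochs; an ordinary SVC stands in for the LS_SVM classifier, and all sizes and parameters are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_epochs, n_samples = 200, 1024
labels = rng.integers(0, 2, n_epochs)
# synthetic single-channel EEG epochs; class 1 gets a small amplitude offset
signals = rng.standard_normal((n_epochs, n_samples)) + labels[:, None] * 0.3

# Simple random sampling (SRS): draw a fixed random subset of time points from
# every epoch and use the sampled amplitudes as candidate features.
srs_idx = rng.choice(n_samples, size=32, replace=False)
features = signals[:, srs_idx]

# Sequential (forward) feature selection keeps the most discriminative points.
svm = SVC(kernel="rbf", gamma="scale")
sfs = SequentialFeatureSelector(svm, n_features_to_select=8, direction="forward")
selected = sfs.fit_transform(features, labels)

print("cross-validated accuracy:", cross_val_score(svm, selected, labels, cv=5).mean())
```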
A Comparison of Techniques for Scheduling Earth-Observing Satellites
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2004-01-01
Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints and poorly understood bottlenecks, conditions where evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare multiple variants of the genetic algorithm with stochastic hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on ten realistically-sized EOS scheduling problems. Schedules are represented by a permutation (non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performs best. Random mutation outperforms a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature-dependent random sampling, which makes large changes in the early stages of evolution and smaller changes towards the end of search.
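The permutation representation with a greedy decoder and a simulated-annealing search can be sketched as follows; the toy conflict model, problem size and cooling schedule are assumptions made for illustration, not the paper's benchmark problems.

```python
# Minimal sketch: a deterministic greedy scheduler walks the permutation and
# books each observation request into the next free slot on its satellite,
# discarding requests that no longer fit. Simulated annealing with a random
# swap mutation searches over permutations.
import math
import random

random.seed(1)
N_REQUESTS, N_SATS, HORIZON = 60, 3, 40
# each request: (satellite it needs, duration in time slots) -- toy model
requests = [(random.randrange(N_SATS), random.randint(1, 4)) for _ in range(N_REQUESTS)]

def schedule_value(perm):
    """Greedy decoder: number of requests scheduled without conflicts."""
    next_free = [0] * N_SATS
    scheduled = 0
    for r in perm:
        sat, dur = requests[r]
        if next_free[sat] + dur <= HORIZON:
            next_free[sat] += dur
            scheduled += 1
    return scheduled

def anneal(steps=20000, t0=2.0, t1=0.01):
    perm = list(range(N_REQUESTS))
    best = cur = schedule_value(perm)
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)          # geometric cooling schedule
        i, j = random.sample(range(N_REQUESTS), 2)
        perm[i], perm[j] = perm[j], perm[i]        # random swap mutation
        val = schedule_value(perm)
        if val >= cur or random.random() < math.exp((val - cur) / t):
            cur, best = val, max(best, val)        # accept improving/lucky moves
        else:
            perm[i], perm[j] = perm[j], perm[i]    # undo rejected move
    return best

print("best number of scheduled observations:", anneal())
```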
Students’ conceptual understanding consistency of heat and temperature
NASA Astrophysics Data System (ADS)
Slamet Budiarti, Indah; Suparmi; Sarwanto; Harjana
2017-01-01
The aims of the research were to explore and describe the consistency of students' understanding of the heat and temperature concept. The sample, taken using a purposive random sampling technique, consisted of 99 high school students from 3 senior high schools in Jayapura city. The descriptive qualitative method was employed in this study. The data were collected using tests and interviews regarding the subject matter of Heat and Temperature. Based on the results of data analysis, it was concluded that 3.03% of the students were consistent and correct, 79.80% were consistent but incorrect, and 17.17% were inconsistent.
Dodel, M; Hemmati Nejad, N; Bahrami, S H; Soleimani, M; Hanaee-Ahvaz, H
2016-08-31
Tissue reconstruction is among the increasing applications of polymer nanofibers. Fibrous scaffolds (mats) can be easily produced using the electrospinning method with structure and biomechanical properties similar to those of a cellular matrix. Electrospinning is widely used in the production of nanofibers and the GAP-method electrospinning is one of the means of producing fully aligned nanofibers. In this research, using the GAP-method, knitted fibrous scaffolds were made of silk fibroin, which is a biocompatible and biodegradable polymer. To extract fibroin from cocoons, the sodium chloride solution as well as dialysis and freeze-drying techniques were employed. The molecular weight of the extracted fibroin was measured with the SDS-Page electrophoresis technique. Moreover, the pure fibroin structure was examined using the ATR-FTIR method, and the viscosity of the solution used for electrospinning was measured with the Brookfield rotational viscometer. The scaffolds were prepared through electrospinning of the silk fibroin in pure formic acid solution. The following three structures were electrospun: 1) a random structure; 2) a knitted structure with an interstitial angle of 60 degrees; 3) a knitted structure with an interstitial angle of 90 degrees. Morphology of the resulting fibers was studied with a SEM (scanning electron microscope). Fibroin scaffolds are degradable in water. Therefore, they were fixated through immersion in methanol to be prepared for assays. The mechanical properties of the scaffolds were also studied using a tensile strength test device. The effect of methanol on the strength properties of the samples was also assessed. The hydrophilic potential of the samples was measured via a contact angle test. To increase the hydrophilicity of the scaffold surfaces, the cold oxygen plasma technique was employed. Finally, the biocompatibility and cell adhesion of the resulting scaffolds were examined through a HEK 293 cell culture, and the results were analyzed through the MTT, DAPI staining, and SEM imaging techniques. Results revealed that the oriented knitted structure contributed to the increase in Young's modulus and the maximum strength of scaffolds as compared to the random samples. Moreover, this structure can also be a suitable alternative to the typical chemical means of increasing strength.
Hornbrook, M C; Goodman, M J
1996-01-01
OBJECTIVE. The goal of this study was to develop unbiased risk-assessment models to be used for paying health plans on the basis of enrollee health status and use propensity. We explored the risk structure of adult employed HMO members using self-reported morbidities, functional status, perceived health status, and demographic characteristics. DATA SOURCES/STUDY SETTING. Data were collected on a random sample of members of a large, federally qualified, prepaid group practice, hospital-based HMO located in the Pacific Northwest. STUDY DESIGN. Multivariate linear nonparametric techniques were used to estimate risk weights on demographic, morbidity, and health status factors at the individual level. The dependent variable was annual real total health plan expense for covered services for the year following the survey. Repeated random split-sample validation techniques minimized outlier influences and avoided inappropriate distributional assumptions required by parametric techniques. DATA COLLECTION/EXTRACTION METHODS. A mail questionnaire containing an abbreviated medical history and the RAND-36 Health Survey was administered to a 5 percent sample of adult subscribers and their spouses in 1990 and 1991, with an overall 44 percent response rate. Utilization data were extracted from HMO automated information systems. Annual expenses were computed by weighting all utilization elements by standard unit costs for the HMO. PRINCIPAL FINDINGS. Prevalence of such major chronic diseases as heart disease, diabetes, depression, and asthma improve prediction of future medical expense; functional health status and morbidities are each better than simple demographic factors alone; functional and perceived health status as well as demographic characteristics and diagnoses together yield the best prediction performance and reduce opportunities for selection bias. We also found evidence of important interaction effects between functional/perceived health status scales and disease classes. CONCLUSIONS. Self-reported morbidities and functional health status are useful risk measures for adults. Risk-assessment research should focus on combining clinical information with social survey techniques to capitalize on the strengths of both approaches. Disease-specific functional health status scales should be developed and tested to capture the most information for prediction. PMID:8698586
Piezoelectric Versus Conventional Rotary Techniques for Impacted Third Molar Extraction
Jiang, Qian; Qiu, Yating; Yang, Chi; Yang, Jingyun; Chen, Minjie; Zhang, Zhiyuan
2015-01-01
Abstract Impacted third molars are frequently encountered in clinical work. Surgical removal of impacted third molars is often required to prevent clinical symptoms. Traditional rotary cutting instruments are potentially injurious, and piezosurgery, as a new osteotomy technique, has been introduced in oral and maxillofacial surgery. No consistent conclusion has been reached regarding whether this new technique is associated with fewer or less severe postoperative sequelae after third molar extraction. The aim of this study was to compare piezosurgery with rotary osteotomy techniques, with regard to surgery time and the severity of postoperative sequelae, including pain, swelling, and trismus. We conducted a systematic literature search in the Cochrane Library, PubMed, Embase, and Google Scholar. The eligibility criteria of this study included the following: the patients were clearly diagnosed as having impacted mandibular third molars; the patients underwent piezosurgery osteotomy, and in the control group rotary osteotomy techniques, for removing impacted third molars; the outcomes of interest include surgery time, trismus, swelling or pain; the studies are randomized controlled trials. We used random-effects models to calculate the difference in the outcomes, and the corresponding 95% confidence interval. We calculated the weighted mean difference if the trials used the same measurement, and a standardized mean difference if otherwise. A total of seven studies met the eligibility criteria and were included in our analysis. Compared with rotary osteotomy, patients undergoing piezosurgery experienced longer surgery time (mean difference 4.13 minutes, 95% confidence interval 2.75–5.52, P < 0.0001). Patients receiving the piezoelectric technique had less swelling at postoperative days 1, 3, 5, and 7 (all Ps ≤0.023). Additionally, there was a trend of less postoperative pain and trismus in the piezosurgery groups. The number of included randomized controlled trials and the sample size of each trial were relatively small, double blinding was not possible, and cost analysis was unavailable due to a lack of data. Our meta-analysis indicates that although patients undergoing piezosurgery experienced longer surgery time, they had less postoperative swelling, indicating that piezosurgery is a promising alternative technique for extraction of impacted third molars. PMID:26469902
Yang, G; Ding, J; Wu, L R; Duan, Y D; Li, A Y; Shan, J Y; Wu, Y X
2015-03-13
DNA fingerprinting is both a popular and important technique with several advantages in plant cultivar identification. However, this technique has not been used widely and efficiently in practical plant identification because the analysis and recording of data generated from fingerprinting and genotyping are tedious and difficult. We developed a novel approach known as a cultivar identification diagram (CID) strategy that uses DNA markers to separate plant individuals in a more efficient, practical, and referable manner. A CID was manually constructed, and a polymorphic marker was generated from each polymerase chain reaction for sample separation. In this study, 67 important sea buckthorn cultivars cultivated in China were successfully separated with random amplified polymorphic DNA markers using the CID analysis strategy, with only seven 11-nucleotide primers employed. The utility of the CID of these 67 sea buckthorn cultivars was verified by identifying 2 randomly chosen groups of cultivars among the 67 cultivars. The main advantages of this identification strategy include fewer primers used and separation of all cultivars using the corresponding primers. This sea buckthorn CID was able to separate any sea buckthorn cultivars among the 67 studied, which is useful for sea buckthorn cultivar identification, cultivar rights protection, and the sea buckthorn nursery industry in China.
Chiaravalloti, Nancy D; Dobryakova, Ekaterina; Wylie, Glenn R; DeLuca, John
2015-01-01
New learning and memory deficits are common following traumatic brain injury (TBI). Yet few studies have examined the efficacy of memory retraining in TBI through the most methodologically rigorous design, the randomized clinical trial. Our previous research has demonstrated that the modified Story Memory Technique (mSMT) significantly improves new learning and memory in multiple sclerosis. The present double-blind, placebo-controlled, randomized clinical trial examined changes in cerebral activation on functional magnetic resonance imaging following mSMT treatment in persons with TBI. Eighteen individuals with TBI were randomly assigned to treatment (n = 9) or placebo (n = 9) groups. Baseline and follow-up functional magnetic resonance imaging was collected during a list-learning task. Significant differences in cerebral activation from before to after treatment were noted in regions belonging to the default mode network and executive control network in the treatment group only. Results are interpreted in light of these networks. Activation differences between the groups likely reflect increased use of strategies taught during treatment. This study demonstrates a significant change in cerebral activation resulting from the mSMT in a TBI sample. Findings are consistent with previous work in multiple sclerosis. Behavioral interventions can show significant changes in the brain, validating clinical utility.
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white-noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes.
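The two-step recipe (spectral shaping of white Gaussian noise followed by a single inverse-transform pass) can be sketched as follows, assuming a simple 1/f target power spectrum and an exponential target marginal; these choices are illustrative only.

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(0)
n = 2**16

# Step 1: colour a white Gaussian sample set to the desired spectral shape.
white = rng.standard_normal(n)
freqs = np.fft.rfftfreq(n)
shaping = np.zeros_like(freqs)
shaping[1:] = freqs[1:] ** -0.5                       # amplitude ~ f^-1/2, power ~ 1/f
colored = np.fft.irfft(np.fft.rfft(white) * shaping, n)
colored = (colored - colored.mean()) / colored.std()  # standardized coloured Gaussian set

# Step 2: a single inverse-transform pass maps the coloured Gaussian through its
# own CDF and then through the inverse CDF of the desired marginal (exponential here).
u = norm.cdf(colored)               # uniform marginal, spectral content largely preserved
samples = expon.ppf(u, scale=2.0)   # samples with the desired probability density

print("sample mean (target 2.0):", samples.mean().round(3))
```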
Dean, A A; Bark, J E; Sherriff, A; Macpherson, L M D; Cairns, A
2011-06-01
To assess the current awareness, usage and opinion of the Hall technique as a restorative option for primary molars in Scottish general dental practice; and to identify preferences for methods of further training, if desired, for those not currently using the technique. A postal questionnaire was sent to a random sample of Scottish general dental practitioners (GDPs) (n = 1207). Half of all GDPs within each health board were mailed. All analyses were carried out in Minitab (version 15). The study is primarily descriptive and uses frequency distributions and cross-tabulations. Percentages are reported with 95% confidence intervals. Characteristics of the whole sample were reported; however, when reporting the use of the Hall technique, only those GDPs who reported treating children at least sometimes are considered. Following two mail-shots, the overall response rate was 59% (715/1207). Eighty-six percent (616/715) of respondents were aware of the Hall technique as a method of restoring primary molars and 48% (n = 318) were currently using the Hall technique. Of those GDPs who never used the Hall technique (51% of total respondents; n = 340), 46% (n = 157) indicated they were either 'very interested' or 'interested' in adopting the Hall technique into their clinical practice. The preferred source for further training was a section 63 continuing professional development (CPD) course incorporating a practical element. Of those GDPs in Scotland who responded to the questionnaire, an unexpectedly high number were already using the Hall technique in their practice, and among those not currently using it, there is a demand for training.
Song, Min; Yu, Hwanjo; Han, Wook-Shin
2011-11-24
Protein-protein interaction (PPI) extraction has been a focal point of many biomedical research and database curation tools. Both active learning (AL) and semi-supervised SVMs have recently been applied to extract PPIs automatically. In this paper, we explore combining AL with semi-supervised learning (SSL) to improve the performance of the PPI extraction task. We propose a novel PPI extraction technique called PPISpotter that combines deterministic-annealing-based SSL with an AL technique to extract protein-protein interactions. In addition, we extract a comprehensive set of features from MEDLINE records by natural language processing (NLP) techniques, which further improve the SVM classifiers. In our feature selection technique, syntactic, semantic, and lexical properties of text are incorporated into feature selection, which boosts the system performance significantly. By conducting experiments with three different PPI corpora, we show that PPISpotter is superior to the other techniques incorporated into semi-supervised SVMs, such as random sampling, clustering, and transductive SVMs, in terms of precision, recall, and F-measure. Our system is a novel, state-of-the-art technique for efficiently extracting protein-protein interaction pairs.
Effect of airway clearance techniques on the efficacy of the sputum induction procedure.
Elkins, M R; Lane, T; Goldberg, H; Pagliuso, J; Garske, L A; Hector, E; Marchetto, L; Alison, J A; Bye, P T P
2005-11-01
Sputum induction is used in the early identification of tuberculosis (TB) and pneumocystis infections of the lung. Although manual physiotherapy techniques to clear the airways are often incorporated in the sputum induction procedure, their efficacy in this setting is unknown. This randomised, crossover trial enrolled adults referred for sputum induction for suspected TB and pneumocystis infections of the lung. All participants underwent two sputum induction procedures, inhaling 3% saline via ultrasonic nebuliser. During one randomly allocated procedure, airway clearance techniques (chest wall percussion, vibration, huffing) were incorporated. In total, 59 participants completed the trial. The airway clearance techniques had no significant effect on how the test was tolerated, the volume expectorated or the quality of the sample obtained (assessed by the presence of alveolar macrophages). The techniques did not significantly affect how often the test identified a suspected organism, nor the sensitivity or specificity of sputum induction. In conclusion, the study was unable to demonstrate any effect of airway clearance techniques on the sputum induction procedure. The results provide some justification for not including airway clearance techniques as part of the sputum induction procedure.
Communication: Multiple atomistic force fields in a single enhanced sampling simulation
NASA Astrophysics Data System (ADS)
Hoang Viet, Man; Derreumaux, Philippe; Nguyen, Phuong H.
2015-07-01
The main concerns of biomolecular dynamics simulations are the convergence of the conformational sampling and the dependence of the results on the force fields. While the first issue can be addressed by employing enhanced sampling techniques such as simulated tempering or replica exchange molecular dynamics, repeating these simulations with different force fields is very time consuming. Here, we propose an automatic method that includes different force fields in a single advanced sampling simulation. Conformational sampling using three all-atom force fields is enhanced by simulated tempering, and by formulating the weight parameters of the simulated tempering method in terms of the energy fluctuations, the system is able to perform a random walk in both temperature and force-field spaces. The method is first demonstrated on a 1D system and then validated by the folding of the 10-residue chignolin peptide in explicit water.
Annealing of Co-Cr dental alloy: effects on nanostructure and Rockwell hardness.
Ayyıldız, Simel; Soylu, Elif Hilal; Ide, Semra; Kılıç, Selim; Sipahi, Cumhur; Pişkin, Bulent; Gökçe, Hasan Suat
2013-11-01
The aim of the study was to evaluate the effect of annealing on the nanostructure and hardness of Co-Cr metal ceramic samples that were fabricated with a direct metal laser sintering (DMLS) technique. Five groups of Co-Cr dental alloy samples were manufactured in a rectangular form measuring 4 × 2 × 2 mm. Samples fabricated by a conventional casting technique (Group I) and prefabricated milling blanks (Group II) were examined as conventional technique groups. The DMLS samples were randomly divided into three groups as not annealed (Group III), annealed in argon atmosphere (Group IV), or annealed in oxygen atmosphere (Group V). The nanostructure was examined with the small-angle X-ray scattering method. The Rockwell hardness test was used to measure the hardness changes in each group, and the means and standard deviations were statistically analyzed by one-way ANOVA for comparison of continuous variables and Tukey's HSD test was used for post hoc analysis. P values of <.05 were accepted as statistically significant. The general nanostructures of the samples were composed of small spherical entities stacked atop one another in dendritic form. All groups also displayed different hardness values depending on the manufacturing technique. The annealing procedure and environment directly affected both the nanostructure and hardness of the Co-Cr alloy. Group III exhibited a non-homogeneous structure and increased hardness (48.16 ± 3.02 HRC) because the annealing process was incomplete and the inner stress was not relieved. Annealing in argon atmosphere of Group IV not only relieved the inner stresses but also decreased the hardness (27.40 ± 3.98 HRC). The results of fitting function presented that Group IV was the most homogeneous product as the minimum bilayer thickness was measured (7.11 Å). After the manufacturing with DMLS technique, annealing in argon atmosphere is an essential process for Co-Cr metal ceramic substructures. The dentists should be familiar with the materials that are used in clinic for prosthodontics treatments.
ERIC Educational Resources Information Center
Jacob, Sunday
2015-01-01
This study examined the pattern of students/teachers' population in schools as a result of the crises witnessed in Jos and its consequences on quality of teaching as well as peaceful living in Jos. Stratified simple random sampling technique was used to select the 18 schools that were used for this study. Questionnaire was used to collect…
Experimental toxicology: Issues of statistics, experimental design, and replication.
Briner, Wayne; Kirwan, Jeral
2017-01-01
The difficulty of replicating experiments has drawn considerable attention. Issues with replication occur for a variety of reasons ranging from experimental design to laboratory errors to inappropriate statistical analysis. Here we review a variety of guidelines for statistical analysis, design, and execution of experiments in toxicology. In general, replication can be improved by using hypothesis driven experiments with adequate sample sizes, randomization, and blind data collection techniques.
Diagnosis and treatment of academic frustration syndrome.
Grover, P L; Tessier, K E
1978-09-01
A random sample of medical students was compared with others who were unable to cope with unanticipated academic frustration. The noncoping students demonstrated higher levels of debilitating anxiety and lower levels of facilitating anxiety while their perceptions of locus of control tended to be more extreme. No difference was found in study habits and attitudes. Techniques of attribution therapy and desensitization in the counseling of these students proved to be effective in improving coping behavior in six of seven cases.
ERIC Educational Resources Information Center
Khanehkeshi, Ali; Ahmedi, Farahnaz Azizi Tas
2013-01-01
The purpose of this study was to compare self-efficacy and self-regulation between the students with SRB and students with NSRB, and the relationship of these variables to academic performance. Using a random stratified sampling technique 60 girl students who had school refusal behavior (SRB) and 60 of students without SRB were selected from 8…
NASA Astrophysics Data System (ADS)
Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas
2016-09-01
Vegetation monitoring is becoming a major issue in the urban environment due to the services vegetation provides, and it necessitates an accurate and up-to-date mapping. Very High Resolution satellite images enable a detailed mapping of urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation but necessitate a large amount of training data. In this context, this study investigates the performance of different sampling strategies in order to reduce the number of examples needed. Two window-based active learning algorithms from the state of the art are compared to a classical stratified random sampling, and a third strategy combining active learning and stratified approaches is proposed. The efficiency of these strategies is evaluated on two medium-size French cities, Strasbourg and Rennes, associated with different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods and that it should be used more frequently to evaluate new active learning methods. Moreover, the active learning strategies proposed in this work make it possible to reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
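For readers unfamiliar with pool-based active learning, the sketch below shows a generic uncertainty (margin) sampling loop on synthetic data; it does not reproduce the window-based algorithms or datasets of the study, and the classifier, pool sizes and query budget are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=3000, n_features=20, n_informative=8, random_state=0)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=50, replace=False))    # initial labelled seed set
pool = [i for i in range(len(X)) if i not in set(labeled)]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
for _ in range(10):                                           # 10 query rounds
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    # margin between the two most probable classes: small margin = ambiguous sample
    margin = np.sort(proba, axis=1)[:, -1] - np.sort(proba, axis=1)[:, -2]
    query = [pool[i] for i in np.argsort(margin)[:20]]        # 20 most ambiguous samples
    labeled += query                                          # the "oracle" labels them
    pool = [i for i in pool if i not in set(query)]

print("final training-set size:", len(labeled))
print("accuracy on remaining pool:",
      clf.fit(X[labeled], y[labeled]).score(X[pool], y[pool]).round(3))
```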
Nonuniform sampling theorems for random signals in the linear canonical transform domain
NASA Astrophysics Data System (ADS)
Shuiqing, Xu; Congmei, Jiang; Yi, Chai; Youqiang, Hu; Lei, Huang
2018-06-01
Nonuniform sampling can be encountered in various practical processes because of random events or poor timebase. The analysis and applications of the nonuniform sampling for deterministic signals related to the linear canonical transform (LCT) have been well considered and researched, but up to now no papers have been published regarding the various nonuniform sampling theorems for random signals related to the LCT. The aim of this article is to explore the nonuniform sampling and reconstruction of random signals associated with the LCT. First, some special nonuniform sampling models are briefly introduced. Second, based on these models, some reconstruction theorems for random signals from various nonuniform samples associated with the LCT have been derived. Finally, the simulation results are made to prove the accuracy of the sampling theorems. In addition, the latent real practices of the nonuniform sampling for random signals have been also discussed.
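For reference, a textbook form of the LCT with parameter matrix A = (a, b; c, d), ad − bc = 1, is given below; this is the standard definition, not a formula quoted from the article.

```latex
% Standard form of the linear canonical transform (LCT); included for
% reference only, not quoted from the article.
\[
  F_A(u) \;=\; \mathcal{L}_A\{f\}(u) \;=\;
  \begin{cases}
    \sqrt{\dfrac{1}{j2\pi b}}\,\displaystyle\int_{-\infty}^{\infty}
      f(t)\,\exp\!\Big(\dfrac{j}{2b}\big(a t^{2}-2ut+d u^{2}\big)\Big)\,dt, & b \neq 0,\\[2ex]
    \sqrt{d}\,\exp\!\Big(\dfrac{j c d}{2}\,u^{2}\Big)\, f(du), & b = 0.
  \end{cases}
\]
```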
NASA Astrophysics Data System (ADS)
Shafer, J. M.; Varljen, M. D.
1990-08-01
A fundamental requirement for geostatistical analyses of spatially correlated environmental data is the estimation of the sample semivariogram to characterize spatial correlation. Selecting an underlying theoretical semivariogram based on the sample semivariogram is an extremely important and difficult task that is subject to a great deal of uncertainty. Current standard practice does not involve consideration of the confidence associated with semivariogram estimates, largely because classical statistical theory does not provide the capability to construct confidence limits from single realizations of correlated data, and multiple realizations of environmental fields are not found in nature. The jackknife method is a nonparametric statistical technique for parameter estimation that may be used to estimate the semivariogram. When used in connection with standard confidence procedures, it allows for the calculation of closely approximate confidence limits on the semivariogram from single realizations of spatially correlated data. The accuracy and validity of this technique was verified using a Monte Carlo simulation approach which enabled confidence limits about the semivariogram estimate to be calculated from many synthetically generated realizations of a random field with a known correlation structure. The synthetically derived confidence limits were then compared to jackknife estimates from single realizations with favorable results. Finally, the methodology for applying the jackknife method to a real-world problem and an example of the utility of semivariogram confidence limits were demonstrated by constructing confidence limits on seasonal sample variograms of nitrate-nitrogen concentrations in shallow groundwater in an approximately 12-mi2 (˜30 km2) region in northern Illinois. In this application, the confidence limits on sample semivariograms from different time periods were used to evaluate the significance of temporal change in spatial correlation. This capability is quite important as it can indicate when a spatially optimized monitoring network would need to be reevaluated and thus lead to more robust monitoring strategies.
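A minimal sketch of the delete-one jackknife applied to a sample semivariogram value at a single lag is given below; the synthetic transect data, lag and tolerance are illustrative assumptions, not the nitrate-nitrogen data analysed in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 100, 80))          # sample locations along a transect
z = np.cumsum(rng.standard_normal(80))        # spatially correlated synthetic values

def semivariogram(xs, zs, lag, tol):
    """Classical estimator: half the mean squared difference of all pairs whose
    separation distance falls within lag +/- tol."""
    d = np.abs(xs[:, None] - xs[None, :])
    mask = np.triu((d > lag - tol) & (d <= lag + tol), k=1)
    diffs = (zs[:, None] - zs[None, :])[mask]
    return 0.5 * np.mean(diffs ** 2)

lag, tol = 10.0, 2.5
gamma_all = semivariogram(x, z, lag, tol)

# Delete-one jackknife: recompute the estimate with each observation removed,
# form pseudo-values, and use their spread for an approximate standard error.
n = len(x)
leave_one_out = np.array([semivariogram(np.delete(x, i), np.delete(z, i), lag, tol)
                          for i in range(n)])
pseudo = n * gamma_all - (n - 1) * leave_one_out
se = pseudo.std(ddof=1) / np.sqrt(n)
print(f"gamma({lag}) = {gamma_all:.3f}, approx. 95% CI {pseudo.mean():.3f} +/- {1.96 * se:.3f}")
```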
Shah, Anoop D.; Bartlett, Jonathan W.; Carpenter, James; Nicholas, Owen; Hemingway, Harry
2014-01-01
Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The “true” imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001–2010) with complete data on all covariates. Variables were artificially made “missing at random,” and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data. PMID:24589914
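A rough analogue of random-forest-based chained-equation imputation (not the authors' implementation) can be put together with scikit-learn's experimental IterativeImputer, as sketched below on simulated data with a nonlinear dependence.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
x3 = np.sin(x1) * x2 + 0.1 * rng.standard_normal(n)   # nonlinear dependence on x1, x2
data = np.column_stack([x1, x2, x3])

# make roughly 20% of x3 missing, with missingness depending on the observed x1
missing = rng.random(n) < 0.4 * (x1 > 0)
data_missing = data.copy()
data_missing[missing, 2] = np.nan

# chained-equation style imputation with a random forest as the per-variable model
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=5, random_state=0)
completed = imputer.fit_transform(data_missing)

rmse = np.sqrt(np.mean((completed[missing, 2] - data[missing, 2]) ** 2))
print("imputation RMSE on the masked values:", rmse.round(3))
```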
Fácio, Cássio L; Previato, Lígia F; Machado-Paula, Ligiane A; Matheus, Paulo Cs; Araújo, Edilberto
2016-12-01
This study aimed to assess and compare sperm motility, concentration, and morphology recovery rates, before and after processing through sperm washing followed by swim-up or discontinuous density gradient centrifugation in normospermic individuals. Fifty-eight semen samples were used in double intrauterine insemination procedures; 17 samples (group 1) were prepared with sperm washing followed by swim-up, and 41 (group 2) by discontinuous density gradient centrifugation. This prospective non-randomized study assessed seminal parameters before and after semen processing. A dependent t-test was used for the same technique to analyze seminal parameters before and after semen processing; an independent t-test was used to compare the results before and after processing for both techniques. The two techniques produced decreases in sample concentration (sperm washing followed by swim-up: P<0.000006; discontinuous density gradient centrifugation: P=0.008457) and increases in motility and normal morphology sperm rates after processing. The difference in sperm motility between the two techniques was not statistically significant. Sperm washing followed by swim-up had better morphology recovery rates than discontinuous density gradient centrifugation (P=0.0095); and the density gradient group had better concentration recovery rates than the swim-up group (P=0.0027). The two methods successfully recovered the minimum sperm values needed to perform intrauterine insemination. Sperm washing followed by swim-up is indicated for semen with high sperm concentration and better morphology recovery rates. Discontinuous density gradient centrifugation produced improved concentration recovery rates.
Pseudo-random number generator for the Sigma 5 computer
NASA Technical Reports Server (NTRS)
Carroll, S. N.
1983-01-01
A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
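The multiplicative linear congruential form described above can be sketched as follows; since the report's Sigma 5 constants are not reproduced here, the well-known Park-Miller pair (m = 2^31 − 1 with primitive root a = 16807) is used purely as an illustration of a prime modulus with a primitive-root multiplier.

```python
# Sketch of a multiplicative linear congruential generator x_{n+1} = A*x_n mod M,
# with M prime and A a primitive root modulo M (Park-Miller constants shown;
# the Sigma 5 report's own constants are not given in this abstract).

M = 2**31 - 1      # Mersenne prime modulus
A = 16807          # primitive root modulo M

def lcg(seed, count):
    """Yield `count` pseudo-random numbers in (0, 1)."""
    x = seed
    for _ in range(count):
        x = (A * x) % M
        yield x / M

print(list(lcg(seed=12345, count=5)))
```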
Honest Importance Sampling with Multiple Markov Chains
Tan, Aixin; Doss, Hani; Hobert, James P.
2017-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
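A basic iid importance-sampling sketch (omitting the regenerative, multiple-chain MCMC machinery of the paper) illustrates the estimator and a CLT-based standard error; the target, proposal and function h below are illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
x = stats.t.rvs(df=5, size=n, random_state=rng)   # draws from the proposal pi1 (Student-t)
w = stats.norm.pdf(x) / stats.t.pdf(x, df=5)      # importance weights pi / pi1
h = x**2                                          # estimate E_pi[X^2] = 1 for standard normal pi

estimate = np.sum(w * h) / np.sum(w)              # self-normalised importance sampling estimator
# delta-method standard error for the ratio estimator
resid = w * (h - estimate)
se = np.std(resid, ddof=1) * np.sqrt(n) / np.sum(w)
print(f"estimate = {estimate:.4f} +/- {1.96 * se:.4f}")
```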
A compressed sensing X-ray camera with a multilayer architecture
Wang, Zhehui; Laroshenko, O.; Li, S.; ...
2018-01-25
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
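A toy sketch of the random on-board pixel sampling (ROPS) idea for a sparse frame is given below; the frame size, hit density and readout fraction are assumptions for illustration, and no real detector model or compressed-sensing reconstruction algorithm is implied.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols = 256, 256
n_hits = 300                                        # sparse frame: ~0.5% of pixels hit

frame = np.zeros((n_rows, n_cols), dtype=bool)
hit_idx = rng.choice(n_rows * n_cols, size=n_hits, replace=False)
frame.flat[hit_idx] = True

readout_fraction = 0.25                             # read out 25% of pixels at random
mask = rng.random((n_rows, n_cols)) < readout_fraction
sampled_hits = int(np.count_nonzero(frame & mask))

estimated_hits = sampled_hits / readout_fraction    # simple scaling estimate of total hits
print(f"true hits = {n_hits}, sampled = {sampled_hits}, estimated = {estimated_hits:.0f}")
```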
Resampling methods in Microsoft Excel® for estimating reference intervals
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference sample is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is of the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
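The resampling recipe described above can be sketched in Python rather than Excel as follows; the simulated reference sample and the number of bootstrap replicates follow the guidance in the abstract but are otherwise illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.lognormal(mean=1.0, sigma=0.4, size=40)   # non-Gaussian reference sample

# draw at least 500-1000 bootstrap samples with replacement and record the
# 2.5th and 97.5th percentile of each resample
n_boot = 1000
lower, upper = [], []
for _ in range(n_boot):
    resample = rng.choice(reference, size=reference.size, replace=True)
    lower.append(np.percentile(resample, 2.5))
    upper.append(np.percentile(resample, 97.5))

# report the bootstrap reference limits with simple percentile confidence bands
print("lower reference limit: %.2f (90%% CI %.2f-%.2f)" %
      (np.median(lower), np.percentile(lower, 5), np.percentile(lower, 95)))
print("upper reference limit: %.2f (90%% CI %.2f-%.2f)" %
      (np.median(upper), np.percentile(upper, 5), np.percentile(upper, 95)))
```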
Sampling design for spatially distributed hydrogeologic and environmental processes
Christakos, G.; Olea, R.A.
1992-01-01
A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range of subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations is necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions.
Using regression methods to estimate stream phosphorus loads at the Illinois River, Arkansas
Haggard, B.E.; Soerens, T.S.; Green, W.R.; Richards, R.P.
2003-01-01
The development of total maximum daily loads (TMDLs) requires evaluating existing constituent loads in streams. Accurate estimates of constituent loads are needed to calibrate watershed and reservoir models for TMDL development. The best approach to estimate constituent loads is high frequency sampling, particularly during storm events, and mass integration of constituents passing a point in a stream. Most often, resources are limited and discrete water quality samples are collected on fixed intervals and sometimes supplemented with directed sampling during storm events. When resources are limited, mass integration is not an accurate means to determine constituent loads and other load estimation techniques such as regression models are used. The objective of this work was to determine a minimum number of water-quality samples needed to provide constituent concentration data adequate to estimate constituent loads at a large stream. Twenty sets of water quality samples with and without supplemental storm samples were randomly selected at various fixed intervals from a database at the Illinois River, northwest Arkansas. The random sets were used to estimate total phosphorus (TP) loads using regression models. The regression-based annual TP loads were compared to the integrated annual TP load estimated using all the data. At a minimum, monthly sampling plus supplemental storm samples (six samples per year) was needed to produce a root mean square error of less than 15%. Water quality samples should be collected at least semi-monthly (every 15 days) in studies less than two years if seasonal time factors are to be used in the regression models. Annual TP loads estimated from independently collected discrete water quality samples further demonstrated the utility of using regression models to estimate annual TP loads in this stream system.
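As a rough sketch of the regression approach discussed above, the following Python snippet fits a log-linear rating-curve model (ln load regressed on ln discharge plus seasonal sine/cosine terms, a common form for such models) to a small set of discrete samples and then applies it to a daily discharge record; the data are fabricated and the simple back-transformation omits the bias correction a real load study would include.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical discrete samples: day of year, discharge Q, TP concentration
    t = np.linspace(5, 360, 24)
    Q = rng.lognormal(5.0, 0.8, size=t.size)
    conc = 0.02 * Q**0.3 * np.exp(rng.normal(0.0, 0.2, t.size))
    load = Q * conc                                   # instantaneous load

    # Design matrix: intercept, ln Q, and seasonal harmonics
    def design(day, q):
        return np.column_stack([np.ones_like(day), np.log(q),
                                np.sin(2 * np.pi * day / 365),
                                np.cos(2 * np.pi * day / 365)])

    beta, *_ = np.linalg.lstsq(design(t, Q), np.log(load), rcond=None)

    # Apply the fitted model to a full year of daily discharge and sum the loads
    days = np.arange(1, 366)
    Q_daily = rng.lognormal(5.0, 0.8, size=days.size)
    annual_load = np.exp(design(days, Q_daily) @ beta).sum()
    print(f"estimated annual TP load (arbitrary units): {annual_load:.1f}")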
Application of Lamendin's adult dental aging technique to a diverse skeletal sample.
Prince, Debra A; Ubelaker, Douglas H
2002-01-01
Lamendin et al. (1) proposed a technique to estimate age at death for adults by analyzing single-rooted teeth. They expressed age as a function of two factors: translucency of the tooth root and periodontosis (gingival regression). In their study, they analyzed 306 single-rooted teeth that were extracted at autopsy from 208 individuals of known age at death, all of whom were considered as having French ancestry. Their sample consisted of 135 males, 73 females, 198 whites, and 10 blacks. The sample ranged from 22 to 90 years of age. By using a simple formula (A = 0.18 x P + 0.42 x T + 25.53, where A = Age in years, P = Periodontosis height x 100/root height, and T = Transparency height x 100/root height), Lamendin et al. were able to estimate age at death with a mean error of +/- 10 years on their working sample and +/- 8.4 years on a forensic control sample. Lamendin found this technique to work well with a French population, but did not test it outside of that sample area. This study tests the accuracy of this adult aging technique on a more diverse skeletal population, the Terry Collection housed at the Smithsonian's National Museum of Natural History. Our sample consists of 400 teeth from 94 black females, 72 white females, 98 black males, and 95 white males, ranging from 25 to 99 years. Lamendin's technique was applied to this sample to test its applicability to a population not of French origin. Providing results from a diverse skeletal population will aid in establishing the validity of this method to be used in forensic cases, its ideal purpose. Our results suggest that Lamendin's method estimates age fairly accurately outside of the French sample, yielding a mean error of 8.2 years, standard deviation 6.9 years, and standard error of the mean 0.34 years. In addition, when ancestry and sex are accounted for, the mean errors are reduced for each group (black females, white females, black males, and white males). Lamendin et al. reported an inter-observer error of 9+/-1.8 and 10+/-2 years from two independent observers. Forty teeth were randomly remeasured from the Terry Collection in order to assess an intra-observer error. From this retest, an intra-observer error of 6.5 years was detected.
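The Lamendin et al. formula quoted above translates directly into code; the following sketch assumes the periodontosis, translucency, and root heights are measured in the same units (e.g. millimetres), with the example values chosen purely for illustration.

    def lamendin_age(periodontosis_height, transparency_height, root_height):
        """Estimate adult age at death from a single-rooted tooth using
        Lamendin et al.: A = 0.18*P + 0.42*T + 25.53, where P and T are the
        periodontosis and transparency heights as percentages of root height."""
        P = periodontosis_height * 100.0 / root_height
        T = transparency_height * 100.0 / root_height
        return 0.18 * P + 0.42 * T + 25.53

    # Example: 3 mm periodontosis and 6 mm translucency on a 14 mm root
    print(round(lamendin_age(3.0, 6.0, 14.0), 1))    # roughly 47 years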
MANCOVA for one way classification with homogeneity of regression coefficient vectors
NASA Astrophysics Data System (ADS)
Mokesh Rayalu, G.; Ravisankar, J.; Mythili, G. Y.
2017-11-01
The MANOVA and MANCOVA are the extensions of the univariate ANOVA and ANCOVA techniques to multidimensional or vector-valued observations. The assumption of a Gaussian distribution has been replaced with the multivariate Gaussian distribution for the vector data and residual terms in the statistical models of these techniques. The objective of MANCOVA is to determine whether there are statistically reliable mean differences between groups after adjusting for the newly created covariate variable. When randomized assignment of samples or subjects to groups is not possible, multivariate analysis of covariance (MANCOVA) provides statistical matching of groups by adjusting dependent variables as if all subjects scored the same on the covariates. In this research article, the MANCOVA technique is extended to a larger number of covariates, and the homogeneity of the regression coefficient vectors is also tested.
A pilot cluster randomized controlled trial of structured goal-setting following stroke.
Taylor, William J; Brown, Melanie; William, Levack; McPherson, Kathryn M; Reed, Kirk; Dean, Sarah G; Weatherall, Mark
2012-04-01
To determine the feasibility, the cluster design effect and the variance and minimal clinically important difference in the primary outcome in a pilot study of a structured approach to goal-setting. A cluster randomized controlled trial. Inpatient rehabilitation facilities. People who were admitted to inpatient rehabilitation following stroke who had sufficient cognition to engage in structured goal-setting and complete the primary outcome measure. Structured goal elicitation using the Canadian Occupational Performance Measure. Quality of life at 12 weeks using the Schedule for Individualised Quality of Life (SEIQOL-DW), Functional Independence Measure, Short Form 36 and Patient Perception of Rehabilitation (measuring satisfaction with rehabilitation). Assessors were blinded to the intervention. Four rehabilitation services and 41 patients were randomized. We found high values of the intraclass correlation for the outcome measures (ranging from 0.03 to 0.40) and high variance of the SEIQOL-DW (SD 19.6) in relation to the minimal clinically important difference of 2.1, leading to impractically large sample size requirements for a cluster randomized design. A cluster randomized design is not a practical means of avoiding contamination effects in studies of inpatient rehabilitation goal-setting. Other techniques for coping with contamination effects are necessary.
Bingi, Jayachandra; Murukeshan, Vadakke Matham
2015-01-01
Laser speckle pattern is a granular structure formed due to random coherent wavelet interference and generally considered as noise in optical systems including photolithography. Contrary to this, in this paper, we use the speckle pattern to generate predictable and controlled Gaussian random structures and quasi-random structures photo-lithographically. The random structures made using this proposed speckle lithography technique are quantified based on speckle statistics, radial distribution function (RDF) and fast Fourier transform (FFT). The control over the speckle size, density and speckle clustering facilitates the successful fabrication of black silicon with different surface structures. The controllability and tunability of randomness makes this technique a robust method for fabricating predictable 2D Gaussian random structures and black silicon structures. These structures can enhance the light trapping significantly in solar cells and hence enable improved energy harvesting. Further, this technique can enable efficient fabrication of disordered photonic structures and random media based devices. PMID:26679513
Sakr, Sherif; Elshawi, Radwa; Ahmed, Amjad M; Qureshi, Waqas T; Brawner, Clinton A; Keteyian, Steven J; Blaha, Michael J; Al-Mallah, Mouaz H
2017-12-19
Prior studies have demonstrated that cardiorespiratory fitness (CRF) is a strong marker of cardiovascular health. Machine learning (ML) can enhance the prediction of outcomes through classification techniques that classify the data into predetermined categories. The aim of this study is to present an evaluation and comparison of how machine learning techniques can be applied to medical records of cardiorespiratory fitness and how the various techniques differ in their ability to predict medical outcomes (e.g. mortality). We use data from 34,212 patients free of known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 10-year follow-up. Seven machine learning classification techniques were evaluated: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF). In order to handle the imbalanced dataset, the Synthetic Minority Over-Sampling Technique (SMOTE) was used. Two sets of experiments were conducted, with and without SMOTE sampling. On average over the different evaluation metrics, the SVM classifier showed the lowest performance, while other models such as BN, BC and DT performed better. The RF classifier showed the best performance (AUC = 0.97) among all models trained using SMOTE sampling. The results show that various ML techniques can vary significantly in performance across the different evaluation metrics. It is also not necessarily the case that more complex ML models achieve higher prediction accuracy. The prediction performance of all models trained with SMOTE is much better than that of models trained without SMOTE. The study shows the potential of machine learning methods for predicting all-cause mortality using cardiorespiratory fitness data.
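A minimal sketch of the SMOTE-plus-classifier workflow evaluated above can be written with scikit-learn and imbalanced-learn; the synthetic dataset, the train/test split, and the Random Forest hyper-parameters are illustrative assumptions and not the study's actual pipeline.

    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic imbalanced data standing in for the fitness/mortality records
    X, y = make_classification(n_samples=5000, n_features=20,
                               weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Oversample the minority class in the training split only
    X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_res, y_res)
    print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

Fitting the same classifier on X_tr, y_tr without the SMOTE step gives the "without SMOTE" comparison used in the study.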
Toward Robust and Efficient Climate Downscaling for Wind Energy
NASA Astrophysics Data System (ADS)
Vanvyve, E.; Rife, D.; Pinto, J. O.; Monaghan, A. J.; Davis, C. A.
2011-12-01
This presentation describes a more accurate and economical (less time, money and effort) wind resource assessment technique for the renewable energy industry, which incorporates innovative statistical techniques and new global mesoscale reanalyses. The technique judiciously selects a collection of "case days" that accurately represent the full range of wind conditions observed at a given site over a 10-year period, in order to estimate the long-term energy yield. We will demonstrate that this new technique provides a very accurate and statistically reliable estimate of the 10-year record of the wind resource by intelligently choosing a sample of about 120 case days. This means that the expense of downscaling to quantify the wind resource at a prospective wind farm can be cut by two thirds from the current industry practice of downscaling a randomly chosen 365-day sample to represent winds over a "typical" year. This new estimate of the long-term energy yield at a prospective wind farm also has far less statistical uncertainty than the current industry standard approach. This key finding has the potential to significantly reduce market barriers to both onshore and offshore wind farm development, since insurers and financiers charge prohibitive premiums on investments that are deemed to be high risk. Lower uncertainty directly translates to lower perceived risk, and therefore far more attractive financing terms could be offered to wind farm developers who employ this new technique.
Does the Use of a "Walking Bleaching" Technique Increase Bone Resorption Markers?
Bersezio, C; Vildósola, P; Sáez, M; Sánchez, F; Vernal, R; Oliveira, O B; Jorquera, G; Basualdo, J; Loguercio, A; Fernández, E
This randomized clinical trial evaluated the effect of 35% hydrogen peroxide in comparison with 37% carbamide peroxide in a nonvital bleaching technique of "walking bleaching" (four sessions of treatment) on periodontal markers: receptor activator of nuclear factor kappa-B ligand (RANK-L, a marker of the root resorption process) and interleukin 1β (IL-1β, an inflammatory response marker). Fifty volunteers presenting with discoloration of nonvital teeth and endodontic treatment in good condition participated. Fifty teeth were randomly divided into two study groups according to bleaching gel: 35% hydrogen peroxide (n=25) and 37% carbamide peroxide (n=25). Nonvital bleaching was performed with a walking bleaching technique consisting of four sessions of bleach application. Gingival crevicular fluid samples were taken in order to quantify the RANK-L and IL-1β levels by enzyme-linked immunosorbent assay. Samples were obtained from six periodontal sites for each bleached tooth: three vestibular and three palatine (mesial, middle, and distal), at seven time periods: baseline, after each of the four sessions of nonvital bleaching, at one week, and at one month after nonvital bleaching. Tooth color variations were analyzed in each session with the VITA Bleachedguide 3D-MASTER (ΔSGU). Significant increments in the RANK-L and IL-1β levels were detected at each evaluated time compared with baseline (p<0.05); however, no differences were detected between hydrogen peroxide and carbamide peroxide in the increments of the biomarkers studied. The change of color was effective for both nonvital bleaching therapies (p<0.05). Nonvital bleaching induced a significant increment in the RANK-L and IL-1β levels in periodontal tissues around bleached, nonvital teeth.
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Wing, Kam Liu
1987-01-01
In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results are given for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings, with the yield stress modeled as a random field.
A closed-form solution to tensor voting: theory and applications.
Wu, Tai-Pang; Yeung, Sai-Kit; Jia, Jiaya; Tang, Chi-Keung; Medioni, Gérard
2012-08-01
We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed as MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation on its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.
Toward in situ x-ray diffraction imaging at the nanometer scale
NASA Astrophysics Data System (ADS)
Zatsepin, Nadia A.; Dilanian, Ruben A.; Nikulin, Andrei Y.; Gable, Brian M.; Muddle, Barry C.; Sakata, Osami
2008-08-01
We present the results of preliminary investigations determining the sensitivity and applicability of a novel x-ray diffraction based nanoscale imaging technique, including simulations and experiments. The ultimate aim of this nascent technique is non-destructive, bulk-material characterization on the nanometer scale, involving three dimensional image reconstructions of embedded nanoparticles and in situ sample characterization. The approach is insensitive to x-ray coherence, making it applicable to synchrotron and laboratory hard x-ray sources, opening the possibility of unprecedented nanometer resolution with the latter. The technique is being developed with a focus on analyzing a technologically important light metal alloy, Al-xCu (where x is 2.0-5.0 %wt). The mono- and polycrystalline samples contain crystallographically oriented, weakly diffracting Al2Cu nanoprecipitates in a sparse, spatially random dispersion within the Al matrix. By employing a triple-axis diffractometer in the non-dispersive setup we collected two-dimensional reciprocal space maps of synchrotron x-rays diffracted from the Al2Cu nanoparticles. The intensity profiles of the diffraction peaks confirmed the sensitivity of the technique to the presence and orientation of the nanoparticles. This is a fundamental step towards in situ observation of such extremely sparse, weakly diffracting nanoprecipitates embedded in light metal alloys at early stages of their growth.
Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy
NASA Astrophysics Data System (ADS)
Yang, Yu; Dong, Bin; Wen, Zaiwen
2017-02-01
In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique widely used in clinical application for cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated by the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: the bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model by a two-stage algorithm: performing minimization with respect to the shapes of the aperture and the beam intensities alternately. Specifically, the shapes of the aperture are obtained by a greedy algorithm whose performance is enhanced by random sampling in the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotonic line search. We further improve the proposed algorithm by an incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with the state-of-the-art algorithms in terms of both computational time and quality of treatment planning.
Space shuttle solid rocket booster recovery system definition, volume 1
NASA Technical Reports Server (NTRS)
1973-01-01
The performance requirements, preliminary designs, and development program plans for an airborne recovery system for the space shuttle solid rocket booster are discussed. The analyses performed during the study phase of the program are presented. The basic considerations which established the system configuration are defined. A Monte Carlo statistical technique using random sampling of the probability distribution for the critical water impact parameters was used to determine the failure probability of each solid rocket booster component as functions of impact velocity and component strength capability.
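The Monte Carlo failure-probability idea described above can be sketched in a few lines: sample impact velocity and component strength from assumed distributions, map velocity to an impact stress, and count how often stress exceeds strength. The distributions and the quadratic stress-velocity relation below are placeholders, not the study's engineering models.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Assumed distributions for water-impact velocity and component strength
    velocity = rng.normal(loc=80.0, scale=10.0, size=n)      # ft/s
    strength = rng.normal(loc=5000.0, scale=400.0, size=n)   # psi

    # Placeholder structural model: impact stress grows with velocity squared
    stress = 0.6 * velocity**2

    failure_probability = np.mean(stress > strength)
    print(f"estimated failure probability: {failure_probability:.4f}")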
1993-03-01
statistical mathematics, began in the late 1800's when Sir Francis Galton first attempted to use practical mathematical techniques to investigate the...randomly collected (sampled) many pairs of parent/child height measurements (data), Galton observed that for a given parent-height average, the...ty only Maximum Adjusted R^2 will be discussed. However, Maximum Adjusted R^2 and Minimum MSE test exactly the same thing. Adjusted R^2 is related to R
Deep sea tides determination from GEOS-3
NASA Technical Reports Server (NTRS)
Maul, G. A.; Yanaway, A.
1978-01-01
GEOS 3 altimeter data in a 5 degree X 5 degree square centered at 30 deg N, 70 deg W were analyzed to evaluate deep sea tide determination from a spacecraft. The signal to noise ratio of known tidal variability to altimeter measurement of sea level above the ellipsoid was 0.1. A sample was obtained in a 5 deg x 5 deg area approximately once every four days. The randomly spaced time series was analyzed using two independent least squares techniques.
Computer modelling of grain microstructure in three dimensions
NASA Astrophysics Data System (ADS)
Narayan, K. Lakshmi
We present a program that generates the two-dimensional micrographs of a three dimensional grain microstructure. The code utilizes a novel scanning, pixel mapping technique to secure statistical distributions of surface areas, grain sizes, aspect ratios, perimeters, number of nearest neighbors and volumes of the randomly nucleated particles. The program can be used for comparing the existing theories of grain growth, and interpretation of two-dimensional microstructure of three-dimensional samples. Special features have been included to minimize the computation time and resource requirements.
Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.
1988-01-01
A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment E(R sup m), is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values for E(R sup m) for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
Kain, Jay; Martorello, Laura; Swanson, Edward; Sego, Sandra
2011-01-01
The purpose of this randomized clinical study was to scientifically assess which intervention increases passive range of motion most effectively: the indirect tri-planar myofascial release (MFR) technique or the application of hot packs for gleno-humeral joint flexion, extension, and abduction. A total of 31 participants from a convenience sample were randomly assigned to examine whether MFR was as effective in increasing range of motion as hot packs. The sample consisted of students at American International College. Students were randomly assigned to two groups: hot pack application (N=13) or MFR technique (N=18). The independent variable was the intervention, either the tri-planar MFR technique or the hot pack application. Group one received the indirect tri-planar MFR technique once for 3 min. Group two received one hot pack application for 20 min. The dependent variables, passive gleno-humeral shoulder range of motion in shoulder flexion, shoulder extension, and shoulder abduction, were taken pre- and post-intervention for both groups. Data were analyzed through the use of a two-way factorial design with mixed-factors ANOVA. Prior to conducting the study, inter-rater reliability was established using three testers for goniometric measures. A 2 (type of intervention: hot packs or MFR) by 2 (pre-test or post-test) mixed-factors ANOVA was calculated. Significant increases in range of motion were found for flexion, extension and abduction when comparing pre-test scores to post-test scores. The results of the ANOVA showed that for passive range of motion no differences were found for flexion, extension and abduction between the effectiveness of hot packs and MFR. For each of the dependent variables measured, MFR was shown to be as effective as hot packs in increasing range of motion, supporting the hypothesis. Since there was no significant difference between the types of intervention, both the hot pack application and the MFR technique were found to be equally effective in increasing passive range of motion of the gleno-humeral joint in flexion, extension, and abduction. The indirect tri-planar intervention could be considered more effective in terms of time spent with a patient and the number of patients seen in a 20-min period. No equipment is required to carry out the MFR intervention, whereas a hot pack application requires the hot pack, towels, and a hydrocollator unit. With the indirect tri-planar intervention, a therapist could treat four to five patients in the time it would take for one standard 20-min hot pack treatment, less the hands-on intervention of the therapist. Copyright © 2009 Elsevier Ltd. All rights reserved.
How do Socio-Economic Factors Influence Interest to Go to Vocational High Schools?
NASA Astrophysics Data System (ADS)
Utomo, N. F.; Wonggo, D.
2018-02-01
This study is aimed at revealing the interest of junior high school students in the Sangihe Islands, Indonesia, in going to vocational high schools and the factors affecting it. The study used the quantitative method with the ex-post facto approach. The population consisted of 332 students, and a sample of 178 students was established using the proportional random sampling technique, applying the Isaac table's 5% error standard. The results show that the family's socio-economic condition positively contributes 26% to interest in going to vocational high schools, thus proving that the family's socio-economic condition influences and contributes to junior high school students' interest in going to vocational high schools.
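A minimal sketch of proportional random sampling as used above: the total sample size is allocated to each stratum (here, schools) in proportion to its share of the population, and members are then drawn at random within each stratum. The school names and sizes below are invented; only the totals (population of 332, sample of 178) echo the study.

    import random

    random.seed(7)

    # Hypothetical stratum sizes summing to the study population of 332
    school_sizes = {"school_A": 120, "school_B": 95, "school_C": 117}
    population = sum(school_sizes.values())
    target_sample = 178

    sample = {}
    for school, size in school_sizes.items():
        n_school = round(target_sample * size / population)  # proportional share
        students = [f"{school}_{i}" for i in range(size)]
        sample[school] = random.sample(students, n_school)

    print({school: len(ids) for school, ids in sample.items()})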
Feature Selection for Ridge Regression with Provable Guarantees.
Paul, Saurabh; Drineas, Petros
2016-04-01
We introduce single-set spectral sparsification as a deterministic sampling-based feature selection technique for regularized least-squares classification, which is the classification analog to ridge regression. The method is unsupervised and gives worst-case guarantees of the generalization power of the classification function after feature selection with respect to the classification function obtained using all features. We also introduce leverage-score sampling as an unsupervised randomized feature selection method for ridge regression. We provide risk bounds for both single-set spectral sparsification and leverage-score sampling on ridge regression in the fixed design setting and show that the risk in the sampled space is comparable to the risk in the full-feature space. We perform experiments on synthetic and real-world data sets; a subset of TechTC-300 data sets, to support our theory. Experimental results indicate that the proposed methods perform better than the existing feature selection methods.
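A rough sketch of leverage-score sampling as an unsupervised feature-selection step for ridge regression is shown below: column leverage scores are computed from the top right singular vectors of the data matrix, and features are then sampled with probability proportional to those scores. The data, the number of retained features, and the ridge penalty are illustrative, and the selected columns are not rescaled as a formal analysis would require.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n, d, k, n_keep = 200, 50, 10, 15

    # Synthetic regression data with a handful of informative features
    X = rng.normal(size=(n, d))
    beta_true = np.zeros(d)
    beta_true[:5] = 2.0
    y = X @ beta_true + rng.normal(scale=0.5, size=n)

    # Column leverage scores from the top-k right singular vectors of X
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = np.sum(Vt[:k].T ** 2, axis=1)
    probs = scores / scores.sum()

    # Randomly keep a subset of features, weighted by their leverage scores
    keep = rng.choice(d, size=n_keep, replace=False, p=probs)
    model = Ridge(alpha=1.0).fit(X[:, keep], y)
    print("kept features:", sorted(keep.tolist()))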
Methodology Series Module 5: Sampling Strategies.
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice or on a population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as simple random sampling or stratified random sampling) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
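As a small illustration of the distinction drawn above, the sketch below contrasts a simple random sample with a stratified random sample drawn from a fabricated sampling frame; both are probability-sampling methods because every member's chance of selection is known.

    import random

    random.seed(3)

    # Hypothetical frame of 1000 patients attending two clinics
    frame = [{"id": i, "clinic": "rural" if i < 200 else "urban"}
             for i in range(1000)]

    # Simple random sample: every patient has the same chance of selection
    srs = random.sample(frame, 100)

    # Stratified random sample: draw within each clinic, proportional to size
    strata = {}
    for person in frame:
        strata.setdefault(person["clinic"], []).append(person)
    stratified = []
    for members in strata.values():
        stratified += random.sample(members, round(100 * len(members) / len(frame)))

    print(len(srs), len(stratified))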
Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin
NASA Astrophysics Data System (ADS)
He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu
2017-06-01
This work investigates the geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta of the tin sample with a V-shaped groove etched in the free surface are collected by the soft recovery technique. Subsequently, the produced fragments are automatically detected with fine post-shot analysis techniques including X-ray micro-tomography and an improved watershed method. To characterize the size distributions of the fragments, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental data on fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison results show that our proposed model provides a far more reasonable fit for the laser shock-loaded tin.
Banchonhattakit, Pannee; Duangsong, Rujira; Muangsom, Niramon; Kamsong, Theppamon; Phangwan, Krittiya
2015-03-01
The objective of this study was to investigate the effectiveness of brain-based learning (BBL) and animated cartoons on video compact discs (VCDs) in enhancing the healthy habits of school children. A representative sample of 1085 school children in the first through the third grades at 16 schools was selected by multistage random sampling. Knowledge of healthy habits and self-reported adoption of practices were assessed by a questionnaire. BBL and VCD, either combined or as single-intervention techniques, led to improved knowledge and practice of healthy behavior, whereas conventional teaching did not. As a single-intervention technique, BBL on its own led to a greater improvement in healthy practices than VCD, but the addition of BBL to VCD made no difference, and there was no difference between BBL and VCD in terms of improvements in knowledge. In conclusion, both BBL and VCD are effective, but VCD requires fewer resources. Recommendations are made for further research. © 2012 APJPH.
The North American Breeding Bird Survey
Bystrak, D.; Ralph, C. John; Scott, J. Michael
1981-01-01
A brief history of the North American Breeding Bird Survey (BBS) and a discussion of the technique are presented. The approximately 2000 random roadside routes conducted yearly during the breeding season throughout North America produce an enormous bank of data on distribution and abundance of breeding birds with great potential use. Data on about one million total birds of 500 species per year are on computer tape to facilitate accessibility and are available to any serious investigator. The BBS includes the advantages of wide geographic coverage, sampling of most habitat types, standardization of data collection, and a relatively simple format. The Survey is limited by placement of roads (e.g., marshes and rugged mountainous areas are not well sampled), traffic noise interference in some cases and preference of some bird species for roadside habitats. These and other problems and biases of the BBS are discussed. The uniformity of the technique allows for detecting changes in populations and for creation of maps of relative abundance. Examples of each are presented.
Melvin, Neal R; Poda, Daniel; Sutherland, Robert J
2007-10-01
When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random sampling start point outside the structure of interest, and sampling relevant objects at sites that are placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative to accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
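The systematic random sampling rule described above (a random start followed by sites at fixed, equidistant intervals) is simple enough to express in a few lines; the list of candidate sites and the interval of 10 are illustrative.

    import random

    random.seed(11)

    def systematic_sample(items, interval):
        """Systematic random sampling: a random start within the first
        interval, then every interval-th item thereafter."""
        start = random.randrange(interval)
        return items[start::interval]

    sites = list(range(250))             # e.g. candidate counting sites
    sampled_sites = systematic_sample(sites, interval=10)
    print(len(sampled_sites), sampled_sites[:5])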
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
Quantum speedup of Monte Carlo methods
Montanaro, Ashley
2015-01-01
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
Meredith-Dennis, Laura; Xu, Gege; Goonatilleke, Elisha; Lebrilla, Carlito B; Underwood, Mark A; Smilowitz, Jennifer T
2018-02-01
When human milk is unavailable, banked milk is recommended for feeding premature infants. Milk banks use processes to eliminate pathogens; however, variability among methods exists. Research aim: The aim of this study was to compare the macronutrient (protein, carbohydrate, fat, energy), immune-protective protein, and human milk oligosaccharide (HMO) content of human milk from three independent milk banks that use pasteurization (Holder vs. vat techniques) or retort sterilization. Randomly acquired human milk samples from three different milk banks ( n = 3 from each bank) were analyzed for macronutrient concentrations using a Fourier transform mid-infrared spectroscopy human milk analyzer. The concentrations of IgA, IgM, IgG, lactoferrin, lysozyme, α-lactalbumin, α antitrypsin, casein, and HMO were analyzed by mass spectrometry. The concentrations of protein and fat were significantly ( p < .05) less in the retort sterilized compared with the Holder and vat pasteurized samples, respectively. The concentrations of all immune-modulating proteins were significantly ( p < .05) less in the retort sterilized samples compared with vat and/or Holder pasteurized samples. The total HMO concentration and HMOs containing fucose, sialic acid, and nonfucosylated neutral sugars were significantly ( p < .05) less in retort sterilized compared with Holder pasteurized samples. Random milk samples that had undergone retort sterilization had significantly less immune-protective proteins and total and specific HMOs compared with samples that had undergone Holder and vat pasteurization. These data suggest that further analysis of the effect of retort sterilization on human milk components is needed prior to widespread adoption of this process.
Prevalence of Sickle Cell Trait in the Southern Suburb of Beirut, Lebanon.
El Ariss, Abdel Badih; Younes, Mohamad; Matar, Jad; Berjaoui, Zeina
2016-01-01
The objective of this study was to assess the prevalence, gender differences, and time trends of Sickle Cell Trait in the Southern Suburb of Beirut, Lebanon, as well as to highlight the importance of screening for Sickle Cell Trait carriers in this population. Another objective was to describe a new screening technique for Sickle Cell Trait carriers. This was a retrospective cohort study carried out at a private laboratory in the Southern Suburb of Beirut, Lebanon between 2002 and 2014. The sickling test was carried out for each patient using two methods: the classical "sodium metabisulfite sickling test", and the new "sickling test method" used in the private lab. As a confirmatory test, hemoglobin electrophoresis was run on a random sample of 223 cases which were found to be positive using the two sickling tests. A total of 899 cases were found to be positive for the sickle cell trait out of 184,105 subjects screened during the 12-year period, prevalence = 0.49% (95% CI: 0.46 - 0.52). Among the total sample, females were found to have higher prevalence, where no time trend over the studied period was noted. The haemoglobin electrophoresis method confirmed the results of this new sickling test technique among the random sample of the 223 cases. We found that the prevalence of sickle cell trait is lower as compared to other Arab countries, higher in females, with no significant time trend. The sickle cell test was found to be an accurate, simple and cheap test that could be easily added as a requirement for the pre-marital testing to screen for Sickle Cell Trait carriers.
Prevalence of Sickle Cell Trait in the Southern Suburb of Beirut, Lebanon
El Ariss, Abdel Badih; Younes, Mohamad; Matar, Jad; Berjaoui, Zeina
2016-01-01
Objective The objective of this study was to assess the prevalence, gender differences, and time trends of Sickle Cell Trait in the Southern Suburb of Beirut, Lebanon, as well as to highlight the importance of screening for Sickle Cell Trait carriers in this population. Another objective was to describe a new screening technique for Sickle Cell Trait carriers. Methods This was a retrospective cohort study carried out at a private laboratory in the Southern Suburb of Beirut, Lebanon between 2002 and 2014. The sickling test was carried out for each patient using two methods: the classical “sodium metabisulfite sickling test”, and the new “sickling test method” used in the private lab. As a confirmatory test, hemoglobin electrophoresis was run on a random sample of 223 cases which were found to be positive using the two sickling tests. Results A total of 899 cases were found to be positive for the sickle cell trait out of 184,105 subjects screened during the 12-year period, prevalence = 0.49% (95% CI: 0.46 – 0.52). Among the total sample, females were found to have higher prevalence, where no time trend over the studied period was noted. The haemoglobin electrophoresis method confirmed the results of this new sickling test technique among the random sample of the 223 cases. Conclusion We found that the prevalence of sickle cell trait is lower as compared to other Arab countries, higher in females, with no significant time trend. The sickle cell test was found to be an accurate, simple and cheap test that could be easily added as a requirement for the pre-marital testing to screen for Sickle Cell Trait carriers. PMID:26977274
Efficient sampling of complex network with modified random walk strategies
NASA Astrophysics Data System (ADS)
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies, the choosing seed node (CSN) random walk and the no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. The major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are then studied. Similar conclusions can be reached with all three random walk strategies. Firstly, networks with small scales and simple structures are conducive to the sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Thirdly, all the degree distributions of the subnets are slightly biased towards the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, some obvious characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
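A minimal sketch of the no-retracing (NR) walk described above, written with networkx: at each step the walker chooses uniformly among the neighbours of the current node other than the node it just left, falling back to an ordinary step at dead ends. The graph size, edge probability, and walk length are illustrative.

    import random
    import networkx as nx

    random.seed(5)
    G = nx.erdos_renyi_graph(n=1000, p=0.01, seed=5)

    def no_retracing_walk(graph, seed_node, steps):
        """Random walk that avoids stepping straight back to the previous node."""
        walk = [seed_node]
        previous = None
        for _ in range(steps):
            neighbours = [v for v in graph.neighbors(walk[-1]) if v != previous]
            if not neighbours:                       # dead end: allow retracing
                neighbours = list(graph.neighbors(walk[-1]))
                if not neighbours:
                    break
            previous = walk[-1]
            walk.append(random.choice(neighbours))
        return walk

    subnet_nodes = set(no_retracing_walk(G, seed_node=0, steps=500))
    print("nodes in sampled subnet:", len(subnet_nodes))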
Cartilage Restoration of the Knee: A Systematic Review and Meta-analysis of Level 1 Studies.
Mundi, Raman; Bedi, Asheesh; Chow, Linda; Crouch, Sarah; Simunovic, Nicole; Sibilsky Enselman, Elizabeth; Ayeni, Olufemi R
2016-07-01
Focal cartilage defects of the knee are a substantial cause of pain and disability in active patients. There has been an emergence of randomized controlled trials evaluating surgical techniques to manage such injuries, including marrow stimulation (MS), autologous chondrocyte implantation (ACI), and osteochondral autograft transfer (OAT). A meta-analysis was conducted to determine if any single technique provides superior clinical results at intermediate follow-up. Systematic review and meta-analysis of randomized controlled trials. The MEDLINE, EMBASE, and Cochrane Library databases were systematically searched and supplemented with manual searches of PubMed and reference lists. Eligible studies consisted exclusively of randomized controlled trials comparing MS, ACI, or OAT techniques in patients with focal cartilage defects of the knee. The primary outcome of interest was function (Lysholm score, International Knee Documentation Committee score, Knee Osteoarthritis Outcome Score) and pain at 24 months postoperatively. A meta-analysis using standardized mean differences was performed to provide a pooled estimate of effect comparing treatments. A total of 12 eligible randomized trials with a cumulative sample size of 765 patients (62% males) and a mean (±SD) lesion size of 3.9 ± 1.3 cm(2) were included in this review. There were 5 trials comparing ACI with MS, 3 comparing ACI with OAT, and 3 evaluating different generations of ACI. In a pooled analysis comparing ACI with MS, there was no difference in outcomes at 24-month follow-up for function (standardized mean difference, 0.47 [95% CI, -0.19 to 1.13]; P = .16) or pain (standardized mean difference, -0.13 [95% CI, -0.39 to 0.13]; P = .33). The comparisons of ACI to OAT or between different generations of ACI were not amenable to pooled analysis. Overall, 5 of the 6 trials concluded that there was no significant difference in functional outcomes between ACI and OAT or between generations of ACI. There is no significant difference between MS, ACI, and OAT in improving function and pain at intermediate-term follow-up. Further randomized trials with long-term outcomes are warranted. © 2015 The Author(s).
Morelli, Girolamo; Pagni, Riccardo; Mariani, Chiara; Minervini, Riccardo; Morelli, Andrea; Gori, Francesco; Ferdeghini, Ezio Maria; Paterni, Marco; Mauro, Eva; Guidi, Elisa; Armillotta, Nicola; Canale, Domenico; Vitti, Paolo; Caramella, Davide; Minervini, Andrea
2011-06-01
We evaluated the ability of the phosphodiesterase-5 inhibitor vardenafil to increase prostate microcirculation during power Doppler ultrasound. We also evaluated the results of contrast and vardenafil enhanced targeted biopsies compared to those of standard 12-core random biopsies to detect cancer. Between May 2008 and January 2010, 150 consecutive patients with prostate specific antigen more than 4 ng/ml at first diagnosis with negative digital rectal examination and transrectal ultrasound, and no clinical history of prostatitis underwent contrast enhanced power Doppler ultrasound (bolus injection of 2.4 ml SonoVue® contrast agent), followed by vardenafil enhanced power Doppler ultrasound (1 hour after oral administration of vardenafil 20 mg). All patients underwent standard 12-core transrectal ultrasound guided random prostate biopsy plus 1 further sampling from each suspected hypervascular lesion detected by contrast and vardenafil enhanced power Doppler ultrasound. Prostate cancer was detected in 44 patients (29.3%). Contrast and vardenafil enhanced power Doppler ultrasound detected suspicious, contrast enhanced and vardenafil enhanced areas in 112 (74.6%) and 110 patients (73.3%), and was diagnostic for cancer in 32 (28.5%) and 42 (38%), respectively. Analysis of standard technique, and contrast and vardenafil enhanced power Doppler ultrasound findings by biopsy core showed significantly higher detection using vardenafil vs contrast enhanced power Doppler ultrasound and standard technique (41.2% vs 22.7% and 8.1%, p <0.005 and <0.001, respectively). The detection rate of standard plus contrast or vardenafil enhanced power Doppler ultrasound was 10% and 11.7% (p not significant). Vardenafil enhanced power Doppler ultrasound enables excellent visualization of the microvasculature associated with cancer and can improve the detection rate compared to contrast enhanced power Doppler ultrasound and the random technique. Copyright © 2011 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Kudo, Taiki; Kawakami, Hiroshi; Hayashi, Tsuyoshi; Yasuda, Ichiro; Mukai, Tsuyoshi; Inoue, Hiroyuki; Katanuma, Akio; Kawakubo, Kazumichi; Ishiwatari, Hirotoshi; Doi, Shinpei; Yamada, Reiko; Maguchi, Hiroyuki; Isayama, Hiroyuki; Mitsuhashi, Tomoko; Sakamoto, Naoya
2014-12-01
EUS-guided FNA (EUS-FNA) has a high diagnostic accuracy for pancreatic diseases. However, although most reports have typically focused on cytology, histological tissue quality has rarely been investigated. The effectiveness of EUS-FNA combined with high negative pressure (HNP) suction was recently indicated for tissue acquisition, but has not thus far been tested in a prospective, randomized clinical trial. To evaluate the adequacy of EUS-FNA with HNP for the histological diagnosis of pancreatic lesions by using 25-gauge needles. Prospective, single-blind, randomized, controlled crossover trial. Seven tertiary referral centers. Patients referred for EUS-FNA of pancreatic solid lesions. From July 2011 to April 2012, 90 patients underwent EUS-FNA of pancreatic solid masses by using normal negative pressure (NNP) and HNP with 2 respective passes. The order of the passes was randomized, and the sample adequacy, quality, and histology were evaluated by a single expert pathologist. EUS-FNA by using NNP and HNP. The adequacy of tissue acquisition and the accuracy of histological diagnoses made by using the EUS-FNA technique with HNP. We found that 72.2% (65/90) and 90% (81/90) of the specimens obtained using NNP and HNP, respectively, were adequate for histological diagnosis (P = .0003, McNemar test). For 73.3% (66/90) and 82.2% (74/90) of the specimens obtained by using NNP and HNP, respectively, an accurate diagnosis was achieved (P = .06, McNemar test). Pancreatitis developed in 1 patient after this procedure, which subsided with conservative therapy. This was a single-blinded, crossover study. Biopsy procedures that combine the EUS-FNA with HNP techniques are superior to EUS-FNA with NNP procedures for tissue acquisition. ( UMIN000005939.). Copyright © 2014 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
Methodology Series Module 5: Sampling Strategies
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling – based on the researcher's choice or on a population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as simple random sampling or stratified random sampling) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of the results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438
Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer
NASA Astrophysics Data System (ADS)
Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad
2017-04-01
Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges including feature redundancy, unbalanced data, and small sample sizes have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were optimum predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, Synthetic Minority Over-sampling technique was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors in affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
West Java Snack Mapping based on Snack Types, Main Ingredients, and Processing Techniques
NASA Astrophysics Data System (ADS)
Nurani, A. S.; Subekti, S.; Ana
2016-04-01
The research was motivated by a lack of literature on archipelago snacks, especially those from West Java. It aims to explore the snack types, processing techniques, and main ingredients in order to plan learning material on archipelago cakes, especially from West Java. The research methods used are descriptive observations and interviews. The samples were randomly chosen from all regions in West Java. The findings show the identification of traditional snacks from West Java including: 1. snack types which are similar in all regions in the research sample, namely: opak, rangginang, nagasari, aliagrem, cuhcur, keripik, semprong, wajit, dodol, kecimpring, combro, tape ketan, and surabi; the typical snack types involve burayot (Garut), simping kaum (Purwakarta), surabi hejo (Karawang), papais cisaat (Subang), papais moyong, opak bakar (Kuningan), opak oded, ranggesing (Sumedang), gapit, tapel (Cirebon), gulampo, kue aci (Tasikmalaya), wajit cililin, gurilem (West Bandung), and borondong (Bandung District); 2. various processing techniques, namely: steaming, boiling, frying, caramelizing, baking, grilling, roasting, and sugaring; 3. various main ingredients, namely rice, local glutinous rice, rice flour, glutinous rice flour, starch, wheat flour, hunkue flour, cassava, sweet potato, banana, nuts, and corn; 4. snack classification in West Java, namely (1) traditional snacks, (2) creation-snacks, (3) modification-snacks, and (4) outside influence-snacks.
Bagheri-Nesami, Masoumeh; Shorofi, Seyed Afshin; Zargar, Nahid; Sohrabi, Maryam; Gholipour-Baradari, Afshin; Khalilian, Alireza
2014-02-01
To examine the effects of foot reflexology massage on anxiety in patients following CABG surgery. In this randomized controlled trial, 80 patients who met the inclusion criteria were conveniently sampled and randomly allocated to the experimental and control groups after they were matched on age and gender. On the days following surgery, the experimental group received foot reflexology massage on their left foot 20 min a day for 4 days, while the control group was given a gentle foot rub with oil for one minute. Anxiety was measured using the short-form of the Spielberger State-Trait Anxiety Inventory and the Visual Analogue Scale-Anxiety. Both measurement instruments confirmed a significant decrease in anxiety following the foot reflexology massage. The significant decrease in anxiety in the experimental group following the foot reflexology massage supports the use of this complementary therapy technique for the relief of anxiety. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
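A small sketch of the "random sampling + ratio reweighting" strategy compared above: grains are drawn at random, and the sample estimate of the transgene presence rate is rescaled by the ratio of the field-wide mean of the auxiliary variable (here a stand-in for the gene-flow model output) to its mean in the sample. All numbers are fabricated.

    import numpy as np

    rng = np.random.default_rng(2)

    # 10,000 sampling locations: auxiliary x (modelled cross-pollination) is
    # known everywhere; transgene presence y is observed only where sampled
    x_all = rng.gamma(shape=2.0, scale=0.02, size=10_000)
    y_all = rng.binomial(1, np.clip(x_all, 0.0, 1.0))

    idx = rng.choice(10_000, size=400, replace=False)   # random sample
    x_s, y_s = x_all[idx], y_all[idx]

    naive_rate = y_s.mean()
    ratio_rate = (y_s.mean() / x_s.mean()) * x_all.mean()  # ratio reweighting

    print(f"naive estimate: {naive_rate:.4f}   ratio-reweighted: {ratio_rate:.4f}")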
Sampling Large Graphs for Anticipatory Analytics
2015-05-15
Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas […]. Analyzing large graphs otherwise requires larger systems, greater human-in-the-loop involvement, or complex algorithms; the authors (Lauren Edwards, Luke Johnson, Maja Milosavljevic, Vijay Gadepally, Benjamin A. Miller, Lincoln Laboratory) investigate the use of sampling to mitigate these challenges.
Cobo, Beatriz; Rueda, Mª Mar; López-Torrecillas, Francisca
2017-12-01
Cannabis is the most widely used illicit drug in developed countries, and has a significant impact on mental and physical health in the general population. Although the evaluation of levels of substance use is difficult, a method such as the randomized response technique (RRT), which includes both a personal component and an assurance of confidentiality, provides a combination which can achieve a considerable degree of accuracy. Various RRT surveys have been conducted to measure the prevalence of drug use, but to date no studies have been made of the effectiveness of this approach in surveys with respect to quantitative variables related to drug use. This paper describes a probabilistic, stratified sample of 1146 university students asking sensitive quantitative questions about cannabis use in Spanish universities, conducted using the RRT. On comparing the results of the direct question (DQ) survey and those of the randomized response (RR) survey, we find that the number of cannabis cigarettes consumed during the past year (DQ = 3, RR = 17 approximately), and the number of days when consumption took place (DQ = 1, RR = 7) are much higher with RRT. The advantages of RRT, reported previously and corroborated in our study, make it a useful method for investigating cannabis use. Copyright © 2016 John Wiley & Sons, Ltd.
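The study uses an RRT for quantitative questions; one common device for such questions is the additive scrambled-response model, in which each respondent reports their true value plus a private draw from a known noise distribution. The sketch below illustrates only that general idea (it is not the survey design used in the paper), with invented numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known scrambling distribution handed to respondents (e.g., printed random cards).
noise_mean, noise_sd = 10.0, 3.0

def collect_scrambled_responses(true_values):
    """Each respondent adds a private random draw from the known noise
    distribution to their true answer and reports only the sum."""
    noise = rng.normal(noise_mean, noise_sd, size=len(true_values))
    return true_values + noise

def estimate_mean(scrambled):
    """Unbiased estimate of the true mean: subtract the known noise mean."""
    return scrambled.mean() - noise_mean

true_cigarettes = rng.poisson(17, size=1146).astype(float)  # hypothetical truth
reports = collect_scrambled_responses(true_cigarettes)
print("estimated mean:", estimate_mean(reports))
print("true mean     :", true_cigarettes.mean())
```

No individual answer reveals the respondent's true value, yet the population mean is recoverable because the noise distribution is known.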
Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.
2016-01-01
In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.
Rare Event Simulation in Radiation Transport
NASA Astrophysics Data System (ADS)
Kollman, Craig
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
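The likelihood-ratio correction at the heart of importance sampling can be illustrated with a toy rare-event problem: estimating a small Gaussian tail probability with a shifted proposal. This is a generic sketch, not the dissertation's neutron-transport model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
c = 5.0                      # rare-event threshold: P(X > c) for X ~ N(0, 1)
n = 100_000

# Naive Monte Carlo: almost every sample misses the event.
naive = (rng.standard_normal(n) > c).mean()

# Importance sampling: draw from N(c, 1) so the event is common,
# then reweight each sample by the likelihood ratio p(x) / q(x).
x = rng.normal(loc=c, scale=1.0, size=n)
weights = norm.pdf(x, 0, 1) / norm.pdf(x, c, 1)
is_estimate = np.mean((x > c) * weights)

print("exact      :", norm.sf(c))
print("naive MC   :", naive)
print("importance :", is_estimate)
```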
Randomized Item Response Theory Models
ERIC Educational Resources Information Center
Fox, Jean-Paul
2005-01-01
The randomized response (RR) technique is often used to obtain answers on sensitive questions. A new method is developed to measure latent variables using the RR technique because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by an item response theory (IRT) model. The RR…
Fontenot, Brian E; Hunt, Laura R; Hildenbrand, Zacariah L; Carlton, Doug D; Oka, Hyppolite; Walton, Jayme L; Hopkins, Dan; Osorio, Alexandra; Bjorndal, Bryan; Hu, Qinhong H; Schug, Kevin A
2013-09-03
Natural gas has become a leading source of alternative energy with the advent of techniques to economically extract gas reserves from deep shale formations. Here, we present an assessment of private well water quality in aquifers overlying the Barnett Shale formation of North Texas. We evaluated samples from 100 private drinking water wells using analytical chemistry techniques. Analyses revealed that arsenic, selenium, strontium and total dissolved solids (TDS) exceeded the Environmental Protection Agency's Drinking Water Maximum Contaminant Limit (MCL) in some samples from private water wells located within 3 km of active natural gas wells. Lower levels of arsenic, selenium, strontium, and barium were detected at reference sites outside the Barnett Shale region as well as sites within the Barnett Shale region located more than 3 km from active natural gas wells. Methanol and ethanol were also detected in 29% of samples. Samples exceeding MCL levels were randomly distributed within areas of active natural gas extraction, and the spatial patterns in our data suggest that elevated constituent levels could be due to a variety of factors including mobilization of natural constituents, hydrogeochemical changes from lowering of the water table, or industrial accidents such as faulty gas well casings.
Lee, Woowon; Toussaint, Kimani C
2018-05-31
Environmental-scanning electron microscopy (ESEM) is routinely applied to various biological samples due to its ability to maintain a wet environment while imaging; moreover, the technique obviates the need for sample coating. However, limited research has been carried out on electron-beam (e-beam) induced tissue damage resulting from using the ESEM. In this paper, we use quantitative second-harmonic generation (SHG) microscopy to examine the effects of e-beam exposure from the ESEM on collagenous tissue samples prepared as either fixed, frozen, wet or dehydrated. Quantitative SHG analysis of tissues, before and after ESEM e-beam exposure in low-vacuum mode, reveals evidence of cross-linking of collagen fibers; however, no structural differences are observed in fixed tissue. Meanwhile, wet-mode ESEM appears to radically alter the structure from a regular fibrous arrangement to a more random fiber orientation. We also confirm that ESEM images of collagenous tissues show higher spatial resolution compared to SHG microscopy, but the relative tradeoff with collagen specificity reduces its effectiveness in quantifying collagen fiber organization. Our work provides insight on both the limitations of the ESEM for tissue imaging, and the potential opportunity to use it as a complementary technique when imaging fine features in the non-collagenous regions of tissue samples.
Miller, Michael A; Colby, Alison C C; Kanehl, Paul D; Blocksom, Karen
2009-03-01
The Wisconsin Department of Natural Resources (WDNR), with support from the U.S. EPA, conducted an assessment of wadeable streams in the Driftless Area ecoregion in western Wisconsin using a probabilistic sampling design. This ecoregion encompasses 20% of Wisconsin's land area and contains 8,800 miles of perennial streams. Randomly-selected stream sites (n = 60) equally distributed among stream orders 1-4 were sampled. Watershed land use, riparian and in-stream habitat, water chemistry, macroinvertebrate, and fish assemblage data were collected at each true random site and an associated "modified-random" site on each stream that was accessed via a road crossing nearest to the true random site. Targeted least-disturbed reference sites (n = 22) were also sampled to develop reference conditions for various physical, chemical, and biological measures. Cumulative distribution function plots of various measures collected at the true random sites evaluated with reference condition thresholds, indicate that high proportions of the random sites (and by inference the entire Driftless Area wadeable stream population) show some level of degradation. Study results show no statistically significant differences between the true random and modified-random sample sites for any of the nine physical habitat, 11 water chemistry, seven macroinvertebrate, or eight fish metrics analyzed. In Wisconsin's Driftless Area, 79% of wadeable stream lengths were accessible via road crossings. While further evaluation of the statistical rigor of using a modified-random sampling design is warranted, sampling randomly-selected stream sites accessed via the nearest road crossing may provide a more economical way to apply probabilistic sampling in stream monitoring programs.
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, the high computational cost of Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
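A compact sketch of the underlying idea may help: a Hamiltonian Monte Carlo step whose leapfrog integrator calls a pluggable gradient (the slot a random-basis surrogate would fill), while the accept/reject step uses the true log density so the correct target is preserved. The target, surrogate, and tuning values below are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_p(q):              # true (possibly expensive) log density: standard 2-D Gaussian
    return -0.5 * np.dot(q, q)

def surrogate_grad(q):     # stand-in for a cheap surrogate gradient; here it
    return -q              # happens to equal the exact gradient of log_p

def hmc_step(q, eps=0.1, n_leapfrog=20):
    p = rng.standard_normal(q.shape)                 # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * surrogate_grad(q_new)       # leapfrog driven by the surrogate
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new += eps * surrogate_grad(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * surrogate_grad(q_new)
    # Metropolis correction uses the *true* log density, keeping the target exact.
    log_accept = (log_p(q_new) - 0.5 * p_new @ p_new) - (log_p(q) - 0.5 * p @ p)
    return q_new if np.log(rng.random()) < log_accept else q

samples, q = [], np.zeros(2)
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print("sample covariance:\n", np.cov(np.array(samples).T))
```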
Less is more: Sampling chemical space with active learning
NASA Astrophysics Data System (ADS)
Smith, Justin S.; Nebgen, Ben; Lubbers, Nicholas; Isayev, Olexandr; Roitberg, Adrian E.
2018-06-01
The development of accurate and transferable machine learning (ML) potentials for predicting molecular energetics is a challenging task. The process of data generation to train such ML potentials is a task neither well understood nor researched in detail. In this work, we present a fully automated approach for the generation of datasets with the intent of training universal ML potentials. It is based on the concept of active learning (AL) via Query by Committee (QBC), which uses the disagreement between an ensemble of ML potentials to infer the reliability of the ensemble's prediction. QBC allows the presented AL algorithm to automatically sample regions of chemical space where the ML potential fails to accurately predict the potential energy. AL improves the overall fitness of ANAKIN-ME (ANI) deep learning potentials in rigorous test cases by mitigating human biases in deciding what new training data to use. AL also reduces the training set size to a fraction of the data required when using naive random sampling techniques. To provide validation of our AL approach, we develop the COmprehensive Machine-learning Potential (COMP6) benchmark (publicly available on GitHub) which contains a diverse set of organic molecules. Active learning-based ANI potentials outperform the original random sampled ANI-1 potential with only 10% of the data, while the final active learning-based model vastly outperforms ANI-1 on the COMP6 benchmark after training to only 25% of the data. Finally, we show that our proposed AL technique develops a universal ANI potential (ANI-1x) that provides accurate energy and force predictions on the entire COMP6 benchmark. This universal ML potential achieves a level of accuracy on par with the best ML potentials for single molecules or materials, while remaining applicable to the general class of organic molecules composed of the elements CHNO.
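The Query by Committee loop described above can be sketched generically: train an ensemble, score the unlabeled pool by prediction disagreement, and label the most contentious points. The sketch uses scikit-learn regressors and a toy 1-D function as stand-ins for the ANI potentials and reference calculations; all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)

def f(x):                                  # "expensive" reference calculation
    return np.sin(3 * x) + 0.1 * x**2

X_pool = rng.uniform(-3, 3, size=(2000, 1))        # unlabeled candidate pool
X_train = rng.uniform(-3, 3, size=(20, 1))         # small initial training set
y_train = f(X_train.ravel())

for round_ in range(5):
    # Train a committee on bootstrap resamples of the current training data.
    committee = []
    for seed in range(5):
        idx = rng.integers(0, len(X_train), len(X_train))
        m = GradientBoostingRegressor(random_state=seed).fit(X_train[idx], y_train[idx])
        committee.append(m)
    # Disagreement = standard deviation of committee predictions on the pool.
    preds = np.stack([m.predict(X_pool) for m in committee])
    disagreement = preds.std(axis=0)
    # Label the most contentious points and add them to the training set.
    new = np.argsort(disagreement)[-10:]
    X_train = np.vstack([X_train, X_pool[new]])
    y_train = np.concatenate([y_train, f(X_pool[new].ravel())])
    print(f"round {round_}: train size {len(X_train)}, max disagreement {disagreement.max():.3f}")
```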
Pseudo-random tool paths for CNC sub-aperture polishing and other applications.
Dunn, Christina R; Walker, David D
2008-11-10
In this paper we first contrast classical and CNC polishing techniques in regard to the repetitiveness of the machine motions. We then present a pseudo-random tool path for use with CNC sub-aperture polishing techniques and report polishing results from equivalent random and raster tool-paths. The random tool-path used - the unicursal random tool-path - employs a random seed to generate a pattern which never crosses itself. Because of this property, this tool-path is directly compatible with dwell time maps for corrective polishing. The tool-path can be used to polish any continuous area of any boundary shape, including surfaces with interior perforations.
Tempia, S; Salman, M D; Keefe, T; Morley, P; Freier, J E; DeMartini, J C; Wamwayi, H M; Njeumi, F; Soumaré, B; Abdi, A M
2010-12-01
A cross-sectional sero-survey, using a two-stage cluster sampling design, was conducted between 2002 and 2003 in ten administrative regions of central and southern Somalia, to estimate the seroprevalence and geographic distribution of rinderpest (RP) in the study area, as well as to identify potential risk factors for the observed seroprevalence distribution. The study was also used to test the feasibility of the spatially integrated investigation technique in nomadic and semi-nomadic pastoral systems. In the absence of a systematic list of livestock holdings, the primary sampling units were selected by generating random map coordinates. A total of 9,216 serum samples were collected from cattle aged 12 to 36 months at 562 sampling sites. Two apparent clusters of RP seroprevalence were detected. Four potential risk factors associated with the observed seroprevalence were identified: the mobility of cattle herds, the cattle population density, the proximity of cattle herds to cattle trade routes and cattle herd size. Risk maps were then generated to assist in designing more targeted surveillance strategies. The observed seroprevalence in these areas declined over time. In subsequent years, similar seroprevalence studies in neighbouring areas of Kenya and Ethiopia also showed a very low seroprevalence of RP or the absence of antibodies against RP. The progressive decline in RP antibody prevalence is consistent with virus extinction. Verification of freedom from RP infection in the Somali ecosystem is currently in progress.
2011-01-01
Background Hepatic resection is still associated with significant morbidity. Although the period of parenchymal transection presents a crucial step during the operation, uncertainty persists regarding the optimal technique of transection. It was the aim of the present randomized controlled trial to evaluate the efficacy and safety of hepatic resection using the technique of stapler hepatectomy compared to the simple clamp-crushing technique. Methods/Design The CRUNSH Trial is a prospective randomized controlled single-center trial with a two-group parallel design. Patients scheduled for elective hepatic resection without extrahepatic resection at the Department of General-, Visceral- and Transplantation Surgery, University of Heidelberg are enrolled into the trial and randomized intraoperatively to hepatic resection by the clamp-crushing technique and stapler hepatectomy, respectively. The primary endpoint is total intraoperative blood loss. A set of general and surgical variables are documented as secondary endpoints. Patients and outcome-assessors are blinded for the treatment intervention. Discussion The CRUNSH Trial is the first randomized controlled trial to evaluate efficacy and safety of stapler hepatectomy compared to the clamp-crushing technique for parenchymal transection during elective hepatic resection. Trial Registration ClinicalTrials.gov: NCT01049607 PMID:21888669
Hospital survey on patient safety culture: psychometric analysis on a Scottish sample.
Sarac, Cakil; Flin, Rhona; Mearns, Kathryn; Jackson, Jeanette
2011-10-01
To investigate the psychometric properties of the Hospital Survey on Patient Safety Culture on a Scottish NHS data set. The data were collected from 1969 clinical staff (estimated 22% response rate) from one acute hospital from each of seven Scottish Health boards. Using a split-half validation technique, the data were randomly split; an exploratory factor analysis was conducted on the calibration data set, and confirmatory factor analyses were conducted on the validation data set to investigate and check the original US model fit in a Scottish sample. Following the split-half validation technique, exploratory factor analysis results showed a 10-factor optimal measurement model. The confirmatory factor analyses were then performed to compare the model fit of two competing models (10-factor alternative model vs 12-factor original model). An S-B scaled χ² difference test demonstrated that the original 12-factor model performed significantly better in a Scottish sample. Furthermore, reliability analyses of each component yielded satisfactory results. The mean scores on the climate dimensions in the Scottish sample were comparable with those found in other European countries. This study provided evidence that the original 12-factor structure of the Hospital Survey on Patient Safety Culture scale has been replicated in this Scottish sample. Therefore, no modifications are required to the original 12-factor model, which is suggested for use, since it would allow researchers the possibility of cross-national comparisons.
NASA Astrophysics Data System (ADS)
Busthanul, N.; Lumoindong, Y.; Syafiuddin, M.; Heliawaty; Lanuhu, N.; Ibrahim, T.; Ambrosius, R. R.
2018-05-01
Farmers' attitudes and perceptions may be a cause of ineffective implementation of conservation farming for agricultural sustainability, because the conservation techniques actually applied vary widely. The purpose of this research is to determine farmers' attitudes and perceptions toward the application of conservation techniques, and the correlation between those attitudes and perceptions. The research was carried out in Kanreapia Village, Tombolo Pao District, Gowa Regency, South Sulawesi Province, Indonesia. Sampling was done randomly with 30 farmers; non-parametric statistics were used with a quantitative and qualitative descriptive data analysis approach, based on a Likert scale. The results showed that the conservation practice rated highest (appropriate) in farmers' attitudes and perceptions is seasonal crop rotation, while the lowest (less appropriate) is tilling the land along the contour and cultivating crops accordingly. There is a very strong relationship between farmer attitude and perception. The implication of the findings is that the implementation of conservation farming techniques should be improved through improved perceptions.
Hwang, Jeongeun; Kim, Miju; Kim, Seunghwan; Lee, Jinwon
2013-01-01
An effective technique of phase contrast synchrotron radiation computed tomography was established for the quantitative analysis of the microstructures in the respiratory zone of a mouse lung. Heitzman’s method was adopted for the whole-lung sample preparation, and Canny’s edge detector was used for locating the air-tissue boundaries. This technique revealed detailed morphology of the respiratory zone components, including terminal bronchioles and alveolar sacs, with a sufficiently high resolution of 1.74 µm isotropic voxel size. The technique enabled visual inspection of the respiratory zone components and comprehension of their relative positions in three dimensions. To check the method’s feasibility for quantitative imaging, morphological parameters such as diameter, surface area and volume were measured and analyzed for sixteen randomly selected terminal branching units, each consisting of a terminal bronchiole and a pair of succeeding alveolar sacs. Four types of asymmetry ratios, concerning alveolar sac mouth diameter, alveolar sac surface area, and alveolar sac volume, were measured. This is the first reported finding of asymmetry ratios for terminal bronchioles and alveolar sacs, and it is noteworthy that an appreciable degree of branching asymmetry was observed among the alveolar sacs at the terminal end of the airway tree, even though the number of samples was still small. The series of efficient techniques developed and confirmed in this study, from sample preparation to quantification, is expected to contribute to a wider and more exact application of phase contrast synchrotron radiation computed tomography to a variety of studies. PMID:23704918
Feldman, H A; McKinlay, J B; Potter, D A; Freund, K M; Burns, R B; Moskowitz, M A; Kasten, L E
1997-01-01
OBJECTIVE: To study nonmedical influences on the doctor-patient interaction. A technique using simulated patients and "real" doctors is described. DATA SOURCES: A random sample of physicians, stratified on such characteristics as demographics, specialty, or experience, and selected from commercial and professional listings. STUDY DESIGN: A medical appointment is depicted on videotape by professional actors. The patient's presenting complaint (e.g., chest pain) allows a range of valid interpretation. Several alternative versions are taped, featuring the same script with patient-actors of different age, sex, race, or other characteristics. Fractional factorial design is used to select a balanced subset of patient characteristics, reducing costs without biasing the outcome. DATA COLLECTION: Each physician is shown one version of the videotape appointment and is asked to describe how he or she would diagnose or treat such a patient. PRINCIPAL FINDINGS: Two studies using this technique have been completed to date, one involving chest pain and dyspnea and the other involving breast cancer. The factorial design provided sufficient power, despite limited sample size, to demonstrate with statistical significance various influences of the experimental and stratification variables, including the patient's gender and age and the physician's experience. Persistent recruitment produced a high response rate, minimizing selection bias and enhancing validity. CONCLUSION: These techniques permit us to determine, with a degree of control unattainable in observational studies, whether medical decisions as described by actual physicians and drawn from a demographic or professional group of interest, are influenced by a prescribed set of nonmedical factors. PMID:9240285
Chander, Vishal; Chakravarti, Soumendu; Gupta, Vikas; Nandi, Sukdeb; Singh, Mithilesh; Badasara, Surendra Kumar; Sharma, Chhavi; Mittal, Mitesh; Dandapat, S; Gupta, V K
2016-12-01
Canine parvovirus-2 antigenic variants (CPV-2a, CPV-2b and CPV-2c), ubiquitously distributed in the canine population worldwide, cause severe, fatal gastroenteritis. Antigenic typing of CPV-2 remains a prime focus of research groups worldwide in understanding the disease epidemiology and virus evolution. The present study was thus envisioned to provide a simple sequencing-independent, rapid, robust, specific, user-friendly technique for detecting and typing the presently circulating CPV-2 antigenic variants. An ARMS-PCR strategy was employed using specific primers for CPV-2a, CPV-2b and CPV-2c to differentiate these antigenic types. ARMS-PCR was initially optimized with reference positive controls in two steps, where the first reaction was used to differentiate CPV-2a from CPV-2b/CPV-2c. The second reaction was carried out with CPV-2c specific primers to confirm the presence of CPV-2c. Initial validation of the ARMS-PCR was carried out with 24 sequenced samples and the results were matched with the sequencing results. The ARMS-PCR technique was further used to screen and type 90 suspected clinical samples. Fifteen randomly selected suspected clinical samples that were typed with this technique were sequenced. The results of ARMS-PCR and the sequencing matched exactly with each other. The developed technique has the potential to become a sequencing-independent method for simultaneous detection and typing of CPV-2 antigenic variants in veterinary disease diagnostic laboratories globally. Copyright © 2016 Elsevier B.V. All rights reserved.
The implementation of liquid-based cytology for lung and pleural-based diseases.
Michael, Claire W; Bedrossian, Carlos C W M
2014-01-01
First introduced for the processing of cervico-vaginal samples, liquid-based cytology (LBC) soon found application in nongynecological specimens, including bronchoscopic brushings, washings and transcutaneous and transbronchial aspiration biopsy of the lung as well as pleural effusions. This article reviews the existing literature related to these specimens along with the authors' own experience. A literature review was conducted through Ovid MEDLINE and PubMed search engines using several key words. Most of the literature is based on data collected through the use of split samples. The data confirms that the use of LBC is an acceptable, and sometimes superior, alternative to the conventional preparations (CP). LBC offers several advantages, including the ability to transport in a stable collecting media, elimination of obscuring elements, ease of screening, excellent preservation, random representative sample, and application of ancillary techniques on additional preparations. Some diagnostic pitfalls related to the introduced artifacts were reported. The utilization of LBC offers many advantages over CP and has a diagnostic accuracy that is equal to or surpasses that of CP. LBC affords a bridge to the future application of molecular and other ancillary techniques to cytology. Knowledge of the morphological artifacts is useful at the early stages of implementation.
A random spatial sampling method in a rural developing nation
Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas
2014-01-01
Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...
Barnestein-Fonseca, Pilar; Leiva-Fernández, José; Vidal-España, Francisca; García-Ruiz, Antonio; Prados-Torres, Daniel; Leiva-Fernández, Francisca
2011-02-14
Low therapeutic adherence to medication is very common. Clinical effectiveness is related to dose rate and route of administration and so poor therapeutic adherence can reduce the clinical benefit of treatment. The therapeutic adherence of patients with chronic obstructive pulmonary disease (COPD) is extremely poor according to most studies. The research about COPD adherence has mainly focussed on quantifying its effect, and few studies have researched factors that affect non-adherence. Our study will evaluate the effectiveness of a multifactor intervention to improve the therapeutic adherence of COPD patients. A randomized controlled clinical trial with 140 COPD-diagnosed patients selected by a non-probabilistic sampling method. Subjects will be randomly allocated into two groups, using the block randomization technique. Every patient in each group will be visited four times during the year of the study. Motivational aspects related to adherence (beliefs and behaviour): group and individual interviews; cognitive aspects: information about the illness; skills: inhaled technique training. Reinforcement of the cognitive-emotional aspects and inhaled technique training will be carried out in all visits of the intervention group. Adherence to a prescribed treatment involves a behavioural change. Cognitive, emotional and motivational aspects influence this change and so we consider the best intervention procedure to improve adherence would be a cognitive and emotional strategy which could be applied in daily clinical practice. Our hypothesis is that the application of a multifactor intervention (COPD information, dose reminders and reinforcing audiovisual material, motivational aspects and inhalation technique training) to COPD patients taking inhaled treatment will give a 25% increase in the number of patients showing therapeutic adherence in this group compared to the control group. We will evaluate the effectiveness of this multifactor intervention on patient adherence to inhaled drugs, on the assumption that it is appropriate and feasible in the context of daily clinical practice. Current Controlled Trials ISRCTN18841601.
NASA Technical Reports Server (NTRS)
Ricks, W. R.
1994-01-01
PWC is used for pair-wise comparisons in both psychometric scaling techniques and cognitive research. The cognitive tasks and processes of a human operator of automated systems are now prominent considerations when defining system requirements. Recent developments in cognitive research have emphasized the potential utility of psychometric scaling techniques, such as multidimensional scaling, for representing human knowledge and cognitive processing structures. Such techniques involve collecting measurements of stimulus-relatedness from human observers. When data are analyzed using this scaling approach, an n-dimensional representation of the stimuli is produced. This resulting representation is said to describe the subject's cognitive or perceptual view of the stimuli. PWC applies one of the many techniques commonly used to acquire the data necessary for these types of analyses: pair-wise comparisons. PWC administers the task, collects the data from the test subject, and formats the data for analysis. It therefore addresses many of the limitations of the traditional "pen-and-paper" methods. By automating the data collection process, subjects are prevented from going back to check previous responses, the possibility of erroneous data transfer is eliminated, and the burden of the administration and taking of the test is eased. By using randomization, PWC ensures that subjects see the stimuli pairs presented in random order, and that each subject sees pairs in a different random order. PWC is written in Turbo Pascal v6.0 for IBM PC compatible computers running MS-DOS. The program has also been successfully compiled with Turbo Pascal v7.0. A sample executable is provided. PWC requires 30K of RAM for execution. The standard distribution medium for this program is a 5.25 inch 360K MS-DOS format diskette. Two electronic versions of the documentation are included on the diskette: one in ASCII format and one in MS Word for Windows format. PWC was developed in 1993.
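The pair-randomization idea that PWC implements can be sketched as follows (in Python rather than the original Turbo Pascal): every unordered pair of stimuli is generated, the within-pair presentation order is randomized, and the order of the pairs is reshuffled independently for each subject. Function and stimulus names are illustrative.

```python
import random
from itertools import combinations

def pairwise_trials(stimuli, subject_seed):
    """Return every unordered pair of stimuli in a subject-specific random
    order, also randomizing which member of each pair is shown first."""
    rng = random.Random(subject_seed)
    pairs = [list(p) for p in combinations(stimuli, 2)]
    for p in pairs:
        rng.shuffle(p)          # randomize first/second position within the pair
    rng.shuffle(pairs)          # randomize the order of the pairs themselves
    return pairs

stimuli = ["A", "B", "C", "D", "E"]
for subject in (1, 2):
    print(f"subject {subject}:", pairwise_trials(stimuli, subject_seed=subject))
```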
Ma, Li; Fan, Suohai
2017-03-14
The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness for avoiding overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that the combination of Clustering Using Representatives (CURE) enhances the original synthetic minority oversampling technique (SMOTE) algorithms effectively compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, the hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out of bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms, hybrid genetic-random forests algorithm, hybrid particle swarm-random forests algorithm and hybrid fish swarm-random forests algorithm can achieve the minimum OOB error and show the best generalization ability. The training set produced from the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithm's F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, this hybrid algorithm provides a new way to perform feature selection and parameter optimization.
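As a point of reference for the imbalanced-data part of the paper, a baseline of plain SMOTE feeding a random forest can be sketched with imbalanced-learn and scikit-learn. This is the standard SMOTE baseline only, not the authors' CURE-SMOTE or hybrid optimization code.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Imbalanced toy data: roughly 5% minority class.
X, y = make_classification(n_samples=5000, weights=[0.95], flip_y=0.01, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training fold, then fit the forest.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)

print("minority-class F1:", f1_score(y_te, clf.predict(X_te)))
```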
Saxton, Michael J
2007-01-01
Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
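A minimal sketch of the kind of obstructed lattice random walk described above: tracers take nearest-neighbour steps on a square lattice, moves into randomly placed immobile obstacles are rejected, and the mean-squared displacement is averaged over tracers. The lattice size, obstacle fraction, and step counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
L, obstacle_fraction, n_walkers, n_steps = 200, 0.3, 500, 1000

obstacles = rng.random((L, L)) < obstacle_fraction       # immobile point obstacles
obstacles[L // 2, L // 2] = False                        # keep the start site free
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

pos = np.full((n_walkers, 2), L // 2)                    # unwrapped positions
start = pos.copy()
msd = np.zeros(n_steps)

for t in range(n_steps):
    trial = pos + moves[rng.integers(0, 4, n_walkers)]
    blocked = obstacles[trial[:, 0] % L, trial[:, 1] % L]  # obstacles tile periodically
    pos = np.where(blocked[:, None], pos, trial)           # reject blocked moves
    disp = pos - start
    msd[t] = np.mean(np.sum(disp**2, axis=1))

print("MSD after", n_steps, "steps:", round(msd[-1], 1),
      "(a free 2-D lattice walk gives MSD ~", n_steps, ")")
```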
Belay, Abera; Solomon, W K; Bultossa, Geremew; Adgaba, Nuru; Melaku, Samuel
2015-01-15
In this study, the Harenna forest honey samples were investigated with respect to their botanical origin, granulation, colour and sensory properties. Sixteen honey samples were collected from two representative sites (Chiri, C, and Wabero, W) using random sampling techniques. Botanical origin was investigated using qualitative pollen analysis by counting 500 pollen grains using harmonised methods of melissopalynology. Granulation, colour, and sensory properties of honey were determined by visual observation, a Pfund grader, and acceptability and preference tests, respectively. Honey obtained from Wabero originated predominantly from Syzygium guineense, while Chiri honey was multifloral. The colour of the honey ranged from 34 to 85 on the Pfund scale, corresponding to light amber and extra light amber. The honey samples were free from tetracycline residue and formed coarse granules slowly. No significant variation (p > 0.05) in sensory preference and acceptability tests was observed due to hive types or locations. Copyright © 2014 Elsevier Ltd. All rights reserved.
Simulation of RBS spectra with known 3D sample surface roughness
NASA Astrophysics Data System (ADS)
Malinský, Petr; Siegel, Jakub; Hnatowicz, Vladimir; Macková, Anna; Švorčík, Václav
2017-09-01
Rutherford Backscattering Spectrometry (RBS) is a technique for elemental depth profiling with nanometer depth resolution. Surface roughness of the analysed samples can degrade the RBS spectra and make their interpretation more difficult and ambiguous. This work describes the simulation of RBS spectra which takes into account the real 3D morphology of the sample surface obtained by the AFM method. The RBS spectrum is calculated as a sum of many particular spectra obtained for randomly chosen particle trajectories over the sample's 3D landscape. The spectra, simulated for different ion beam incidence angles, are compared to the experimental ones measured with 2.0 MeV 4He+ ions. The main aim of this work is to obtain more definite information on how a particular surface morphology and measuring geometry affect the RBS spectra and derived elemental depth profiles. A reasonable agreement between the measured and simulated spectra was found, and the results indicate that the AFM data on the sample surface can be used for the simulation of RBS spectra.
Effectiveness Of Horizontal Peer-Assisted Learning In Physical Examination Performance.
Shah, Inamullah; Mahboob, Usman; Shah, Sajida
2017-01-01
Not all students can be individually trained in physical examination skills due to faculty and time limitations. Peer-assisted learning (PAL) can solve this dilemma if it is used in the undergraduate curriculum. The empirical effectiveness of the horizontal peer-assisted learning model has not been reported previously. The objective of this study was to compare horizontal peer-assisted learning (PAL) with expert-assisted learning (EAL) in the teaching of physical examination skills. This is a randomized controlled study (Solomon four group design) carried out at a medical school. A total of 120 undergraduate year 5 students were randomized into two groups to undergo training in four areas of physical examination. A stratified random sampling technique was used. Group 1 was trained by EAL, while Group 2 was trained by PAL. Half of the students from both groups were given a pre-test to assess the testing effect. Both groups were given a post-test in the form of an OSCE. Independent samples t-test and paired samples t-test were used as tests of significance. Group 2 scored significantly higher than Group 1. There was a significant difference (p = .000) in the mean post-test scores of Group 1 (69.98 ± 5.6) and Group 2 (85.27 ± 5.6). The difference in mean scores was not significant (p = .977) between students who had taken the pre-test and those who had not. This study has implications for curriculum development as it provides quantitative evidence indicating that horizontal PAL as a learning strategy can actually replace, rather than augment, expert-assisted learning in teaching clinical skills to undergraduate students.
Non-randomized response model for sensitive survey with noncompliance.
Wu, Qin; Tang, Man-Lai
2016-12-01
Collecting representative data on sensitive issues has long been problematic and challenging in public health prevalence investigation (e.g. non-suicidal self-injury), medical research (e.g. drug habits), social issue studies (e.g. history of child abuse), and their interdisciplinary studies (e.g. premarital sexual intercourse). Alternative data collection techniques that can be adopted to study sensitive questions validly therefore become more important and necessary. As an alternative to the famous Warner randomized response model, the non-randomized response triangular model has recently been developed to encourage participants to provide truthful responses in surveys involving sensitive questions. Unfortunately, both randomized and non-randomized response models could underestimate the proportion of subjects with the sensitive characteristic, as some respondents do not believe that these techniques can protect their anonymity. As a result, some authors hypothesized that lack of trust and noncompliance should be highest among those who have the most to lose and the least to gain from the anonymity provided by these techniques. Some researchers noticed the existence of noncompliance and proposed new models to measure noncompliance in order to get reliable information. However, all proposed methods were based on randomized response models, which require randomizing devices, restrict the survey to face-to-face interviews only, and lack reproducibility. Taking noncompliance into consideration, we introduce new non-randomized response techniques in which no covariate is required. Asymptotic properties of the proposed estimates for the sensitive characteristic as well as the noncompliance probabilities are developed. Our proposed techniques are empirically shown to yield accurate estimates for both sensitive and noncompliance probabilities. A real example about premarital sex among university students is used to demonstrate our methodologies. © The Author(s) 2014.
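For orientation, the basic (compliance-free) triangular model that such techniques extend recovers the sensitive proportion from the observed category counts and the known prevalence of an innocuous condition. The sketch below shows only that baseline estimator with a delta-method standard error; it is not the authors' noncompliance-adjusted method, and the numbers are invented.

```python
import numpy as np

def triangular_estimate(n_circle, n_total, p_innocuous):
    """Basic triangular-model estimator (no noncompliance adjustment).

    Assumption: respondents tick the 'circle' only if they have neither the
    innocuous condition (known prevalence p_innocuous) nor the sensitive
    trait, so P(circle) = (1 - p_innocuous) * (1 - pi).  Solve for pi.
    """
    lam = n_circle / n_total
    pi_hat = 1.0 - lam / (1.0 - p_innocuous)
    # Delta-method standard error of pi_hat.
    se = np.sqrt(lam * (1.0 - lam) / n_total) / (1.0 - p_innocuous)
    return pi_hat, se

# Example: 1000 respondents, 540 circles, innocuous condition prevalence 0.25
# (e.g., birthday in the first quarter of the year).
pi_hat, se = triangular_estimate(n_circle=540, n_total=1000, p_innocuous=0.25)
print(f"estimated sensitive proportion: {pi_hat:.3f} +/- {1.96 * se:.3f}")
```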
Midstream clean-catch urine collection in newborns: a randomized controlled study.
Altuntas, Nilgun; Tayfur, Asli Celebi; Kocak, Mesut; Razi, Hasan Cem; Akkurt, Serpil
2015-05-01
We aimed to evaluate a recently defined technique based on bladder stimulation and paravertebral lumbar massage maneuvers in collecting a midstream clean-catch urine sample in newborns. A total of 127 term newborns were randomly assigned either to the experimental group or the control group. Twenty-five minutes after feeding, the genital and perineal areas of the babies were cleaned. The babies were held under the armpits with legs dangling. Bladder stimulation and lumbar paravertebral massage maneuvers were only applied to the babies in the experimental group. Success was defined as collection of a urine sample within 5 min of starting the stimulation maneuvers in the experimental group and of holding under the armpits in the control group. The success rate of urine collection was significantly higher in the experimental group (78%) than in the control group (33%; p < 0.001). The median time (interquartile range) for sample collection was 60 s (64.5 s) in the experimental group and 300 s (95 s) in the control group (p < 0.0001). Contamination rates were similar in both groups (p = 0.770). We suggest that bladder stimulation and lumbar paravertebral massage is a safe, quick, and effective way of collecting midstream clean-catch urine in newborns.
Taggart, S. James; Andrews, A.G.; Mondragon, Jennifer; Mathews, E.A.
2005-01-01
We present evidence that Pacific sleeper sharks Somniosus pacificus co-occur with harbor seals Phoca vitulina in Glacier Bay, Alaska, and that these sharks scavenge or prey on marine mammals. In 2002, 415 stations were fished throughout Glacier Bay on a systematic sampling grid. Pacific sleeper sharks were caught at 3 of the 415 stations, and at one station a Pacific halibut Hippoglossus stenolepis was caught with a fresh bite, identified as the bite of a sleeper shark. All 3 sharks and the shark-bitten halibut were caught at stations near the mouth of Johns Hopkins Inlet, a glacial fjord with the highest concentration of seals in Glacier Bay. Using a bootstrap technique, we estimated the probability of sampling the sharks (and the shark-bitten halibut) in the vicinity of Johns Hopkins Inlet. If sharks were randomly distributed in Glacier Bay, the probability of sampling all 4 pots at the mouth of Johns Hopkins Inlet was very low (P = 0.00002). The highly non-random distribution of the sleeper sharks located near the largest harbor seal pupping and breeding colony in Glacier Bay suggests that these 2 species co-occur and may interact ecologically in or near Johns Hopkins Inlet.
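The resampling logic behind that probability can be sketched as follows: if the four events (three sharks and one shark-bitten halibut) had been assigned to the 415 stations at random, how often would all of them fall among the stations near the mouth of Johns Hopkins Inlet? The number of inlet-mouth stations is not stated in the abstract, so the value below is a placeholder, and the sketch is not the authors' bootstrap code.

```python
import numpy as np

rng = np.random.default_rng(6)

n_stations = 415
n_inlet_stations = 30      # placeholder: stations near the mouth of Johns Hopkins Inlet
n_events = 4               # 3 sharks + 1 shark-bitten halibut
n_boot = 1_000_000

# Resample: assign the 4 events to stations at random and count how often
# every one of them falls in the inlet-mouth subset.
draws = rng.integers(0, n_stations, size=(n_boot, n_events))
all_in_inlet = np.all(draws < n_inlet_stations, axis=1)
print("Monte Carlo p-value      :", all_in_inlet.mean())
print("analytic (with replacement):", (n_inlet_stations / n_stations) ** n_events)
```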
Censoring approach to the detection limits in X-ray fluorescence analysis
NASA Astrophysics Data System (ADS)
Pajek, M.; Kubala-Kukuś, A.
2004-10-01
We demonstrate that the effect of detection limits in X-ray fluorescence analysis (XRF), which limits the determination of very low concentrations of trace elements and results in the appearance of so-called "nondetects", can be accounted for using the statistical concept of censoring. More precisely, the results of such measurements can be viewed as left randomly censored data, which can further be analyzed using the Kaplan-Meier method correcting the data for the presence of nondetects. Using this approach, the results of measured, detection-limit-censored concentrations can be interpreted in a nonparametric manner including the correction for the nondetects, i.e. the measurements in which the concentrations were found to be below the actual detection limits. Moreover, using the Monte Carlo simulation technique we show that by using the Kaplan-Meier approach the corrected mean concentrations for a population of the samples can be estimated within a few percent uncertainty with respect to the simulated, uncensored data. This practically means that the final uncertainties of estimated mean values are limited in fact by the number of studied samples and not by the correction procedure itself. The discussed random left-censoring approach was applied to analyze the XRF detection-limit-censored concentration measurements of trace elements in biomedical samples.
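A minimal sketch of the Kaplan-Meier correction for nondetects: left-censored concentrations can be handled with the ordinary right-censoring product-limit estimator by "flipping" the data around a constant larger than every value and flipping the resulting mean back. The implementation and data below are illustrative, not the authors' code.

```python
import numpy as np

def km_right(times, observed):
    """Basic Kaplan-Meier product-limit estimator for right-censored data.
    Returns the unique event times and the survival curve S(t) at those times."""
    order = np.argsort(times)
    times, observed = times[order], observed[order]
    uniq = np.unique(times[observed])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(times >= t)
        events = np.sum((times == t) & observed)
        s *= 1.0 - events / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def mean_with_nondetects(conc, detected):
    """Kaplan-Meier mean of left-censored concentrations: flip values around a
    constant M so nondetects become right-censored, fit KM, flip the mean back."""
    conc, detected = np.asarray(conc, float), np.asarray(detected, bool)
    M = conc.max() + 1.0
    t_flip = M - conc                        # nondetects (< detection limit) -> right-censored
    uniq, surv = km_right(t_flip, detected)
    # Restricted mean of the flipped variable = area under the survival curve.
    grid = np.concatenate(([0.0], uniq))
    s_prev = np.concatenate(([1.0], surv[:-1]))
    mean_flip = np.sum(np.diff(grid) * s_prev)
    return M - mean_flip                     # back on the concentration scale

# Detected concentrations plus three nondetects recorded at their detection limits.
conc     = np.array([5.2, 3.1, 7.4, 0.8, 0.5, 2.2, 6.0, 0.3])
detected = np.array([True, True, True, False, False, True, True, False])
print("KM-corrected mean concentration:", mean_with_nondetects(conc, detected))
```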
Annealing of Co-Cr dental alloy: effects on nanostructure and Rockwell hardness
Soylu, Elif Hilal; İde, Semra; Kılıç, Selim; Sipahi, Cumhur; Pişkin, Bulent; Gökçe, Hasan Suat
2013-01-01
PURPOSE The aim of the study was to evaluate the effect of annealing on the nanostructure and hardness of Co-Cr metal ceramic samples that were fabricated with a direct metal laser sintering (DMLS) technique. MATERIALS AND METHODS Five groups of Co-Cr dental alloy samples were manufactured in a rectangular form measuring 4 × 2 × 2 mm. Samples fabricated by a conventional casting technique (Group I) and prefabricated milling blanks (Group II) were examined as conventional technique groups. The DMLS samples were randomly divided into three groups as not annealed (Group III), annealed in argon atmosphere (Group IV), or annealed in oxygen atmosphere (Group V). The nanostructure was examined with the small-angle X-ray scattering method. The Rockwell hardness test was used to measure the hardness changes in each group, and the means and standard deviations were statistically analyzed by one-way ANOVA for comparison of continuous variables, and Tukey's HSD test was used for post hoc analysis. P values of <.05 were accepted as statistically significant. RESULTS The general nanostructures of the samples were composed of small spherical entities stacked atop one another in dendritic form. All groups also displayed different hardness values depending on the manufacturing technique. The annealing procedure and environment directly affected both the nanostructure and hardness of the Co-Cr alloy. Group III exhibited a non-homogeneous structure and increased hardness (48.16 ± 3.02 HRC) because the annealing process was incomplete and the inner stress was not relieved. Annealing Group IV in an argon atmosphere not only relieved the inner stresses but also decreased the hardness (27.40 ± 3.98 HRC). The fitting results showed that Group IV was the most homogeneous product, as the minimum bilayer thickness (7.11 Å) was measured in this group. CONCLUSION After manufacturing with the DMLS technique, annealing in an argon atmosphere is an essential process for Co-Cr metal ceramic substructures. Dentists should be familiar with the materials used clinically for prosthodontic treatments. PMID:24353888
Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review
Miao, Yinglong; McCammon, J. Andrew
2016-01-01
Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem as encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it is overwhelming to describe all details of each method, we provide a summary of the methods along with the applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations. PMID:27453631
Endoscopic ultrasound comes of age: Mature, established, creative and here to stay!
Bhutani, Manoop S.
2014-01-01
Research in endoscopic ultrasound (EUS) is alive and kicking! This paper will present recent interesting developments in EUS based on research presented at the Digestive Disease Week (DDW) held in Chicago in 2014. Endosonographers are looking at various techniques to improve the yield of fine-needle aspiration and core biopsies, assess circulating tumor cells, apply EUS for personalized medicine and develop devices to ensure the adequacy of sampling. EUS may open new vistas in the understanding of neurogastroenterology and gastrointestinal motility disorders as discussed in this paper. EUS-guided drainage of pancreatic fluid collections, bile duct and gallbladder is feasible, and many randomized trials are being done to compare different techniques. EUS-guided delivery of fiducials, drugs, coils or chemo-loaded beads is possible. EUS has come of age, has matured, and is here to stay! DDW 2014 in Chicago was a very active meeting for EUS. There were numerous papers on different aspects of EUS, some perfecting and improving old techniques, others dealing with randomized trials and many with novel concepts. In this paper, I will highlight some of the papers that were presented. It is not possible to discuss all the abstracts in detail. I have, therefore, chosen selected papers in different aspects of EUS to give the readers a flavor of the kind of research that was presented at DDW. PMID:25184120
Revisiting sample size: are big trials the answer?
Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J
2012-07-18
The superiority of the evidence generated in randomized controlled trials over observational data is not only conditional to randomization. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial to detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.
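The dependence of power on sample size can be made concrete with the standard normal-approximation formula for comparing two proportions; the sketch below is a generic illustration, not taken from the paper.

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided test comparing two
    proportions (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    num = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return num / (p1 - p2) ** 2

# Detecting a drop in event rate from 10% to 7% needs far more patients
# than detecting a drop from 10% to 5%.
print("10% vs 7% :", round(n_per_group(0.10, 0.07)), "per arm")
print("10% vs 5% :", round(n_per_group(0.10, 0.05)), "per arm")
```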
Comparison of Methods for Estimating Low Flow Characteristics of Streams
Tasker, Gary D.
1987-01-01
Four methods for estimating the 7-day, 10-year and 7-day, 20-year low flows for streams are compared by the bootstrap method. The bootstrap method is a Monte Carlo technique in which random samples are drawn from an unspecified sampling distribution defined from observed data. The nonparametric nature of the bootstrap makes it suitable for comparing methods based on a flow series for which the true distribution is unknown. Results show that the two methods based on hypothetical distribution (Log-Pearson III and Weibull) had lower mean square errors than did the G. E. P. Box-D. R. Cox transformation method or the Log-W. C. Boughton method which is based on a fit of plotting positions.
State estimator for multisensor systems with irregular sampling and time-varying delays
NASA Astrophysics Data System (ADS)
Peñarrocha, I.; Sanchis, R.; Romero, J. A.
2012-08-01
This article addresses the state estimation in linear time-varying systems with several sensors with different availability, randomly sampled in time and whose measurements have a time-varying delay. The approach is based on a modification of the Kalman filter with the negative-time measurement update strategy, avoiding running back the full standard Kalman filter, the use of full augmented order models or the use of reorganisation techniques, leading to a lower implementation cost algorithm. The update equations are run every time a new measurement is available, independently of the time when it was taken. The approach is useful for networked control systems, systems with long delays and scarce measurements and for out-of-sequence measurements.
Teaching materials of algebraic equation
NASA Astrophysics Data System (ADS)
Widodo, S. A.; Prahmana, R. C. I.; Purnami, A. S.; Turmudi
2017-12-01
The purpose of this paper is to determine the effectiveness of teaching materials for algebraic equations. The research used an experimental method. The population in this study is all mathematics education students taking the numerical methods course at Sarjanawiyata Tamansiswa University; the sample was taken using cluster random sampling. The instruments used in this research are a test and a questionnaire. The test is used to assess problem-solving ability and achievement, while the questionnaire is used to capture the students' response to the teaching materials. Quantitative data were analyzed with the Wilcoxon test, while qualitative data were analyzed using grounded theory. Based on the test results, it can be concluded that the developed teaching materials can improve problem-solving ability and achievement.
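The quantitative analysis step mentioned above, a Wilcoxon test on paired scores, can be sketched with SciPy; the scores below are invented for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired problem-solving scores before and after using the materials.
pre  = np.array([55, 60, 48, 70, 62, 58, 65, 50, 61, 57])
post = np.array([68, 72, 55, 78, 70, 66, 75, 58, 73, 64])

stat, p_value = wilcoxon(pre, post)
print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")
```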
Progressive Stochastic Reconstruction Technique (PSRT) for cryo electron tomography.
Turoňová, Beata; Marsalek, Lukas; Davidovič, Tomáš; Slusallek, Philipp
2015-03-01
Cryo Electron Tomography (cryoET) plays an essential role in Structural Biology, as it is the only technique that allows studying the structure of large macromolecular complexes in their close-to-native environment in situ. The reconstruction methods currently in use, such as Weighted Back Projection (WBP) or Simultaneous Iterative Reconstruction Technique (SIRT), deliver noisy and low-contrast reconstructions, which complicates the application of high-resolution protocols, such as Subtomogram Averaging (SA). We propose a Progressive Stochastic Reconstruction Technique (PSRT) - a novel iterative approach to tomographic reconstruction in cryoET based on Monte Carlo random walks guided by a Metropolis-Hastings sampling strategy. We design a progressive reconstruction scheme to suit the conditions present in cryoET and apply it successfully to reconstructions of macromolecular complexes from both synthetic and experimental datasets. We show how to integrate PSRT into SA, where it provides an elegant solution to the region-of-interest problem and delivers high-contrast reconstructions that significantly improve template-based localization without any loss of high-resolution structural information. Furthermore, the locality of SA is exploited to design an importance sampling scheme which significantly speeds up the otherwise slow Monte Carlo approach. Finally, we design a new memory-efficient solution for the specimen-level interior problem of cryoET, removing all associated artifacts. Copyright © 2015 Elsevier Inc. All rights reserved.
van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W
2014-12-22
Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (with 46.9% 5-year survival), 1731 patients with traumatic brain injury (22.3% 6-month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques: support vector machines (SVM), neural nets (NN), and random forests (RF), and two classical techniques: logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20-fold, 10-fold and 6-fold replication of subjects, where we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of the AUC and small optimism (difference between the mean apparent AUC and the mean validated AUC <0.01). We found that a stable AUC was reached by LR at approximately 20 to 50 events per variable, followed by CART, SVM, NN and RF models. Optimism decreased with increasing sample sizes and the same ranking of techniques. The RF, SVM and NN models showed instability and high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable as classical modelling techniques such as LR to achieve a stable AUC and a small optimism. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.
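The "data hungriness" comparison can be mimicked on simulated data. The sketch below is illustrative only; the simulated outcome, class balance and development-set sizes are assumptions, not the study's cohorts. It fits logistic regression and a random forest on increasingly large development sets and reports apparent versus validated AUC, whose difference is the optimism discussed above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Simulated binary outcome; the second half of the data is held out for validation.
X, y = make_classification(n_samples=20000, n_features=20, n_informative=8,
                           weights=[0.8, 0.2], random_state=1)
X_val, y_val = X[10000:], y[10000:]

for n in (200, 1000, 5000, 10000):          # increasingly large development parts
    Xd, yd = X[:n], y[:n]
    for name, model in (("LR", LogisticRegression(max_iter=1000)),
                        ("RF", RandomForestClassifier(n_estimators=200, random_state=1))):
        model.fit(Xd, yd)
        apparent = roc_auc_score(yd, model.predict_proba(Xd)[:, 1])
        validated = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"n={n:5d} {name}: apparent={apparent:.3f} "
              f"validated={validated:.3f} optimism={apparent - validated:.3f}")
```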
Grover, Sandeep; Del Greco M, Fabiola; Stein, Catherine M; Ziegler, Andreas
2017-01-01
Confounding and reverse causality have prevented us from drawing meaningful clinical interpretations even in well-powered observational studies. Confounding may be attributed to our inability to randomize the exposure variable in observational studies. Mendelian randomization (MR) is one approach to overcome confounding. It utilizes one or more genetic polymorphisms as a proxy for the exposure variable of interest. Polymorphisms are randomly distributed in a population and are static throughout an individual's lifetime, and may thus help in inferring directionality in exposure-outcome associations. Genome-wide association studies (GWAS) or meta-analyses of GWAS are characterized by large sample sizes and the availability of many single nucleotide polymorphisms (SNPs), making GWAS-based MR an attractive approach. GWAS-based MR comes with specific challenges, including multiple causality. Despite its shortcomings, it remains one of the most powerful techniques for inferring causality. With MR still an evolving concept with complex statistical challenges, the literature is relatively scarce in terms of providing working examples incorporating real datasets. In this chapter, we provide a step-by-step guide for causal inference based on the principles of MR with a real dataset using both individual and summary data from unrelated individuals. We suggest best possible practices and give recommendations based on the current literature.
NASA Astrophysics Data System (ADS)
Tlhoaele, Malefyane; Suhre, Cor; Hofman, Adriaan
2016-05-01
Cooperative learning may improve students' motivation, understanding of course concepts, and academic performance. This study therefore enhanced a cooperative, group-project learning technique with technology resources to determine whether doing so improved students' deep learning and performance. A sample of 118 engineering students, randomly divided into two groups, participated in this study and provided data through questionnaires issued before and after the experiment. The results, obtained through analyses of variance and structural equation modelling, reveal that technology-enhanced, cooperative, group-project learning improves students' comprehension and academic performance.
Health status of hotel workers with special reference to high risk practices and STDs.
Pawar, A T; Kakrani, V A
2007-01-01
A cross-sectional study was conducted on the health status of hotel workers in Pune city. Out of an estimated 1000 hotel workers, 516 were selected by a stratified random sampling technique. The study revealed that 71.5% of hotel workers were suffering from one or another type of morbid condition. Anemia was the commonest morbidity, with a prevalence of 40.3%. A total of 187 (36.2%) hotel workers had extramarital sexual relations, and 77 (14.9%) had STDs at the time of the study.
[Evaluation of the Cleaning Effect of Medical Instruments Using a Fluorescence Detection Technique].
Sheng, Nan; Shen, Yue; Li, Zhen; Li, Huijuan; Zhou, Chaoqun
2016-01-01
To compare the cleaning effect of an automatic cleaning machine and manual cleaning on coupling-type surgical instruments. A total of 32 cleaned medical instruments were randomly sampled from the disinfection supply centers of medical institutions in Putuo District. The Hygiena System SUREII ATP was used to monitor the ATP value, and the cleaning effect was evaluated. The surface ATP values of manually cleaned instruments were higher than those of instruments cleaned by the automatic cleaning machine. The automatic cleaning machine achieves a better cleaning effect on coupling-type surgical instruments before disinfection, and its application is recommended.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem that is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seo, Seong-Moon, E-mail: castme@kims.re.kr; Jeong, Hi-Won; Ahn, Young-Keun
Quantitative microsegregation analyses were systematically carried out during the solidification of the Ni-base superalloy CMSX-10 to clarify the methodological effect on the quantification of microsegregation and to fully understand the solidification microstructure. Three experimental techniques, namely, mushy zone quenching (MZQ), planar directional solidification followed by quenching (PDSQ), and random sampling (RS), were implemented for the analysis of microsegregation tendency and the magnitude of solute elements by electron probe microanalysis. The microprobe data and the calculation results of the diffusion field ahead of the solid/liquid (S/L) interface of PDSQ samples revealed that the liquid composition at the S/L interface is significantly influenced by quenching. By applying the PDSQ technique, it was also found that the partition coefficients of all solute elements do not change appreciably during the solidification of primary γ. All three techniques could reasonably predict the segregation behavior of most solute elements. Nevertheless, the RS approach has a tendency to overestimate the magnitude of segregation for most solute elements when compared to the MZQ and PDSQ techniques. Moreover, the segregation direction of Cr and Mo predicted by the RS approach was found to be opposite from the results obtained by the MZQ and PDSQ techniques. This conflicting segregation behavior of Cr and Mo was discussed intensively. It was shown that the formation of Cr-rich areas near the γ/γ′ eutectic in various Ni-base superalloys, including the CMSX-10 alloy, could be successfully explained by the results of microprobe analysis performed on a sample quenched during the planar directional solidification of γ/γ′ eutectic. - Highlights: • Methodological effect on the quantification of microsegregation was clarified. • The liquid composition at the S/L interface was influenced by quenching. • The segregation direction of Cr varied depending on the experimental techniques. • Cr and Mo segregation in Ni-base superalloys was fully understood.
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m quadrat in the Chayegang marshland near Henghu farm in the Poyang Lake region was selected as the experimental field, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.
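The precision gain from stratifying on altitude can be illustrated with a small simulation. In the Python sketch below the field, the snail densities and the stratum definitions are invented for illustration; it simply compares the sampling error of simple random sampling with that of stratified random sampling for a fixed sample size of 225.

```python
import numpy as np

# Synthetic 50 x 50 grid of snail counts with a density gradient along "altitude";
# compare simple random sampling with stratified random sampling (altitude bands).
rng = np.random.default_rng(42)
rows, cols, n = 50, 50, 225
altitude = np.linspace(0, 1, rows)[:, None] * np.ones((1, cols))
field = rng.poisson(lam=1 + 4 * altitude)          # denser snails at higher altitude
true_mean = field.mean()

def simple_random(field, n):
    idx = rng.choice(field.size, size=n, replace=False)
    return field.ravel()[idx].mean()

def stratified(field, n, n_strata=5):
    bands = np.array_split(field, n_strata, axis=0)   # equal-size altitude bands
    per = n // n_strata
    means = [rng.choice(b.ravel(), size=per, replace=False).mean() for b in bands]
    return np.mean(means)    # equal strata, so the simple average is unbiased

srs = [simple_random(field, n) for _ in range(2000)]
strat = [stratified(field, n) for _ in range(2000)]
print("true mean              %.3f" % true_mean)
print("SRS        mean %.3f  SE %.3f" % (np.mean(srs), np.std(srs)))
print("stratified mean %.3f  SE %.3f" % (np.mean(strat), np.std(strat)))
```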
Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification
NASA Astrophysics Data System (ADS)
Zhang, W.; Li, X.; Xiao, W.
2018-05-01
The increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades. Increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Because of inconsistent training sites and training samples, traditional pixel-based image classification methods cannot achieve comparable results across different organizations. Object-oriented image classification techniques show great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used to meet this requirement. Firstly, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Secondly, we performed the multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to obtain the appropriate parameters; using the region-merge algorithm starting from the single-pixel level, the optimal texture segmentation scale for different types of features was confirmed. The segmented objects were then used as classification units to calculate spectral information such as the mean, maximum, minimum, brightness and normalized values. Spatial features of the image objects, such as area, length, tightness and shape rule, and texture features, such as the mean, variance and entropy of the image objects, were used as classification features of the training samples. Based on the reference images and field-survey sampling points, typical training samples were selected uniformly and randomly for each type of ground object. The value ranges of the spectral, texture and spatial characteristics of each feature type in each feature layer were used to create the decision tree repository. Finally, with the help of high-resolution reference images, a random sampling method was used to conduct the field investigation, achieving an overall accuracy of 90.31% with a Kappa coefficient of 0.88. The classification method based on the decision tree threshold values and rule set developed from the repository outperforms the results obtained with the traditional methodology. Our decision tree repository and rule-set-based object-oriented classification technique is an effective method for producing comparable and consistent wetland data sets.
NASA Astrophysics Data System (ADS)
Ahmed, Oumer S.; Franklin, Steven E.; Wulder, Michael A.; White, Joanne C.
2015-03-01
Many forest management activities, including the development of forest inventories, require spatially detailed forest canopy cover and height data. Among the various remote sensing technologies, LiDAR (Light Detection and Ranging) offers the most accurate and consistent means for obtaining reliable canopy structure measurements. A potential solution to reduce the cost of LiDAR data is to integrate transects (samples) of LiDAR data with frequently acquired and spatially comprehensive optical remotely sensed data. Although multiple regression is commonly used for such modeling, often it does not fully capture the complex relationships between forest structure variables. This study investigates the potential of Random Forest (RF), a machine learning technique, to estimate LiDAR-measured canopy structure using a time series of Landsat imagery. The study is implemented over a 2600 ha area of industrially managed coastal temperate forests on Vancouver Island, British Columbia, Canada. We implemented a trajectory-based approach to time series analysis that generates time since disturbance (TSD) and disturbance intensity information for each pixel, and we used this information to stratify the forest land base into two strata: mature forests and young forests. Canopy cover and height for three forest classes (i.e. mature, young, and mature and young combined) were modeled separately using multiple regression and Random Forest (RF) techniques. For all forest classes, the RF models provided improved estimates relative to the multiple regression models. The lowest validation error was obtained for the mature forest stratum in an RF model (R2 = 0.88, RMSE = 2.39 m and bias = -0.16 for canopy height; R2 = 0.72, RMSE = 0.068% and bias = -0.0049 for canopy cover). This study demonstrates the value of using disturbance and successional history to inform estimates of canopy structure and obtain improved estimates of forest canopy cover and height using the RF algorithm.
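A minimal version of this model comparison, with synthetic data standing in for the Landsat predictors and LiDAR canopy heights (all variables, coefficients and parameter values below are assumptions, not the study's data), might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic predictors: six "band reflectances" and a time-since-disturbance column.
rng = np.random.default_rng(7)
n = 3000
bands = rng.uniform(0, 1, size=(n, 6))
tsd = rng.uniform(0, 60, size=(n, 1))              # time since disturbance, years
X = np.hstack([bands, tsd])
# nonlinear growth response plus band effects plus noise
height = (30 * (1 - np.exp(-tsd[:, 0] / 25))
          + 5 * bands[:, 3] - 4 * bands[:, 2]
          + rng.normal(0, 2, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, height, test_size=0.3, random_state=7)
for name, model in (("multiple regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=300,
                                                            random_state=7))):
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name:>19}: R2={r2_score(y_te, pred):.2f} RMSE={rmse:.2f} m")
```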
NASA Astrophysics Data System (ADS)
Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen
2017-12-01
We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillation (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in the Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which would necessarily introduce fiducial signals of fluctuations into the random samples, weakening the BAO signals, if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals has been improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, although the absolute values do not change significantly. Given the precision of current cosmological parameter measurements, such improvements will matter for future measurements of galaxy clustering.
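The proposed remedy, drawing random-catalogue redshifts from a smooth fit to the measured n(z) rather than from the data redshifts themselves, can be sketched as follows. The smoothing method here (a simple moving average) and all numbers are placeholders for illustration; the authors use their own smooth fitting function.

```python
import numpy as np

# Stand-in for survey redshifts: the "data" n(z) carries fluctuations that we do
# not want to imprint on the random catalogue.
rng = np.random.default_rng(3)
z_data = rng.normal(0.5, 0.1, size=100000)
z_data = z_data[(z_data > 0.2) & (z_data < 0.8)]

counts, edges = np.histogram(z_data, bins=60)
centers = 0.5 * (edges[:-1] + edges[1:])

# crude smoothing of n(z); a spline or parametric fit would be used in practice
kernel = np.ones(7) / 7.0
smooth = np.convolve(counts, kernel, mode="same")

# inverse-CDF sampling of random-catalogue redshifts from the smoothed n(z)
cdf = np.cumsum(smooth) / np.sum(smooth)
u = rng.random(500000)
z_random = np.interp(u, cdf, centers)

print("data   mean z = %.4f" % z_data.mean())
print("random mean z = %.4f" % z_random.mean())
```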
Gibson, Scott M; Ficklin, Stephen P; Isaacson, Sven; Luo, Feng; Feltus, Frank A; Smith, Melissa C
2013-01-01
The study of gene relationships and their effect on biological function and phenotype is a focal point in systems biology. Gene co-expression networks built using microarray expression profiles are one technique for discovering and interpreting gene relationships. A knowledge-independent thresholding technique, such as Random Matrix Theory (RMT), is useful for identifying meaningful relationships. Highly connected genes in the thresholded network are then grouped into modules that provide insight into their collective functionality. While it has been shown that co-expression networks are biologically relevant, it has not been determined to what extent any given network is functionally robust given perturbations in the input sample set. For such a test, hundreds of networks are needed and hence a tool to rapidly construct these networks. To examine functional robustness of networks with varying input, we enhanced an existing RMT implementation for improved scalability and tested functional robustness of human (Homo sapiens), rice (Oryza sativa) and budding yeast (Saccharomyces cerevisiae). We demonstrate dramatic decrease in network construction time and computational requirements and show that despite some variation in global properties between networks, functional similarity remains high. Moreover, the biological function captured by co-expression networks thresholded by RMT is highly robust.
Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo
2017-05-16
Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent data analysis workflow more lucid. However, the normalization process still perplexes researchers, and its effects are always ignored. In order to reveal the influence of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared in three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiency of the normalization process. Furthermore, the potential biomarkers screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta in the different normalized data sets were compared. The results indicated that determining the normalization method was difficult because the commonly accepted rules were easy to fulfill but different normalization methods had unforeseen influences on both the kind and number of potential biomarkers. Lastly, an integrated strategy for normalization method selection was recommended.
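As one concrete example of the methods compared, probabilistic quotient normalization can be written in a few lines. The sketch below is a generic PQN implementation applied to a toy matrix, not the authors' pipeline, and assumes strictly positive intensities and a median reference spectrum.

```python
import numpy as np

def pqn_normalize(X, reference=None):
    """Probabilistic quotient normalization sketch.

    X : samples x features intensity matrix (strictly positive values assumed).
    Each sample is divided by the median of its feature-wise quotients
    against a reference spectrum (here the median spectrum)."""
    if reference is None:
        reference = np.median(X, axis=0)
    quotients = X / reference
    factors = np.median(quotients, axis=1)      # one dilution factor per sample
    return X / factors[:, None], factors

# toy example: three samples, the second one diluted two-fold
rng = np.random.default_rng(0)
base = rng.uniform(1, 10, size=(1, 50))
X = np.vstack([base, 0.5 * base, 1.5 * base]) * rng.normal(1.0, 0.02, size=(3, 50))
X_norm, f = pqn_normalize(X)
print("estimated dilution factors:", np.round(f, 2))
```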
Probability techniques for reliability analysis of composite materials
NASA Technical Reports Server (NTRS)
Wetherhold, Robert C.; Ucci, Anthony M.
1994-01-01
Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report examines and compares methods used to evaluate the reliability of composite laminae. The two types of methods evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods evaluated are: first-order, second-moment FPI methods; second-order, second-moment FPI methods; simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
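The contrast between crude Monte Carlo and importance sampling can be shown on a toy linear limit state g = R - S, failure occurring when strength R is less than stress S, with both normal. The distribution parameters and the shifted sampling densities below are assumed for illustration; the report's composite failure criteria are more involved.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu_R, sd_R, mu_S, sd_S = 10.0, 1.0, 6.0, 1.0
# exact failure probability for this linear limit state, for reference
beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
print("exact      Pf = %.3e" % stats.norm.cdf(-beta))

N = 200_000
# crude (simple) Monte Carlo
R = rng.normal(mu_R, sd_R, N)
S = rng.normal(mu_S, sd_S, N)
print("crude MC   Pf = %.3e" % np.mean(R - S < 0))

# importance sampling: draw R and S from densities shifted toward the failure
# region and reweight each sample by the likelihood ratio p(x)/q(x)
mu_Rq, mu_Sq = 8.0, 8.0
Rq = rng.normal(mu_Rq, sd_R, N)
Sq = rng.normal(mu_Sq, sd_S, N)
w = (stats.norm.pdf(Rq, mu_R, sd_R) * stats.norm.pdf(Sq, mu_S, sd_S)) / \
    (stats.norm.pdf(Rq, mu_Rq, sd_R) * stats.norm.pdf(Sq, mu_Sq, sd_S))
print("IS MC      Pf = %.3e" % np.mean(w * (Rq - Sq < 0)))
```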
Larkin, J D; Publicover, N G; Sutko, J L
2011-01-01
In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
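The core difference from pixel binning, summing a per-photon uncertainty footprint instead of counting photons in bins, is easy to sketch. The emitter position, photon count and footprint width below are arbitrary illustrative values, not the paper's experimental parameters.

```python
import numpy as np

# Simulated photon positions scattered around a single emitter.
rng = np.random.default_rng(5)
true_center = np.array([0.0, 0.0])
photons = rng.normal(true_center, 0.05, size=(200, 2))   # ~200 detected photons

grid = np.linspace(-0.25, 0.25, 101)
xx, yy = np.meshgrid(grid, grid)
sigma = 0.02                                             # assumed position uncertainty

# photon event distribution sampling style image: sum one Gaussian per photon
image = np.zeros_like(xx)
for px, py in photons:
    image += np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2 * sigma ** 2))

# localization: the maximum-likelihood emitter position under a normal model
# is simply the mean photon position
print("estimated position:", photons.mean(axis=0))
print("brightest grid point:", xx.flat[image.argmax()], yy.flat[image.argmax()])
```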
Horowitz, Arthur J.; Clarke, Robin T.; Merten, Gustavo Henrique
2015-01-01
Since the 1970s, there has been both continuing and growing interest in developing accurate estimates of the annual fluvial transport (fluxes and loads) of suspended sediment and sediment-associated chemical constituents. This study provides an evaluation of the effects of manual sample numbers (from 4 to 12 per year) and sample scheduling (random-based, calendar-based and hydrology-based) on the precision, bias and accuracy of annual suspended sediment flux estimates. The evaluation is based on data from selected US Geological Survey daily suspended sediment stations in the USA and covers basins ranging in area from just over 900 km² to nearly 2 million km² and annual suspended sediment fluxes ranging from about 4 kt year⁻¹ to about 200 Mt year⁻¹. The results appear to indicate that there is a scale effect for random-based and calendar-based sampling schemes, with larger sample numbers required as basin size decreases. All the sampling schemes evaluated display some level of positive (overestimates) or negative (underestimates) bias. The study further indicates that hydrology-based sampling schemes are likely to generate the most accurate annual suspended sediment flux estimates with the fewest number of samples, regardless of basin size. This type of scheme seems most appropriate when the determination of suspended sediment concentrations, sediment-associated chemical concentrations, annual suspended sediment and annual suspended sediment-associated chemical fluxes only represent a few of the parameters of interest in multidisciplinary, multiparameter monitoring programmes. The results are just as applicable to the calibration of autosamplers/suspended sediment surrogates currently used to measure/estimate suspended sediment concentrations and, ultimately, annual suspended sediment fluxes, because manual samples are required to adjust the sample data/measurements generated by these techniques so that they provide depth-integrated and cross-sectionally representative data.
Thomas, Mary Laudon; Elliott, Janette E; Rao, Stephen M; Fahey, Kathleen F; Paul, Steven M; Miaskowski, Christine
2012-01-01
To test the effectiveness of two interventions compared to usual care in decreasing attitudinal barriers to cancer pain management, decreasing pain intensity, and improving functional status and quality of life (QOL). Randomized clinical trial. Setting: six outpatient oncology clinics (three Veterans Affairs [VA] facilities, one county hospital, and one community-based practice in California, and one VA clinic in New Jersey). Sample: 318 adults with various types of cancer-related pain. Patients were randomly assigned to one of three groups: control, standardized education, or coaching. Patients in the education and coaching groups viewed a video and received a pamphlet on managing cancer pain. In addition, patients in the coaching group participated in four telephone sessions with an advanced practice nurse interventionist using motivational interviewing techniques to decrease attitudinal barriers to cancer pain management. Questionnaires were completed at baseline and six weeks after the final telephone calls. Analysis of covariance was used to evaluate differences in study outcomes among the three groups. Outcome measures: pain intensity, pain relief, pain interference, attitudinal barriers, functional status, and QOL. Attitudinal barrier scores did not change over time among groups. Patients randomized to the coaching group reported significant improvement in their ratings of pain-related interference with function, as well as general health, vitality, and mental health. Although additional evaluation is needed, coaching may be a useful strategy to help patients decrease attitudinal barriers toward cancer pain management and to better manage their cancer pain. By using motivational interviewing techniques, advanced practice oncology nurses can help patients develop an appropriate plan of care to decrease pain and other symptoms.
Rochefort, Christian M; Buckeridge, David L; Tanguay, Andréanne; Biron, Alain; D'Aragon, Frédérick; Wang, Shengrui; Gallix, Benoit; Valiquette, Louis; Audet, Li-Anne; Lee, Todd C; Jayaraman, Dev; Petrucci, Bruno; Lefebvre, Patricia
2017-02-16
Adverse events (AEs) in acute care hospitals are frequent and associated with significant morbidity, mortality, and costs. Measuring AEs is necessary for quality improvement and benchmarking purposes, but current detection methods lack accuracy, efficiency, and generalizability. The growing availability of electronic health records (EHR) and the development of natural language processing techniques for encoding narrative data offer an opportunity to develop potentially better methods. The purpose of this study is to determine the accuracy and generalizability of using automated methods for detecting three high-incidence and high-impact AEs from EHR data: a) hospital-acquired pneumonia, b) ventilator-associated event, and c) central line-associated bloodstream infection. This validation study will be conducted among medical, surgical and ICU patients admitted between 2013 and 2016 to the Centre hospitalier universitaire de Sherbrooke (CHUS) and the McGill University Health Centre (MUHC), which has both French and English sites. A random 60% sample of CHUS patients will be used for model development purposes (cohort 1, development set). Using a random sample of these patients, a reference standard assessment of their medical charts will be performed. Multivariate logistic regression and the area under the curve (AUC) will be employed to iteratively develop and optimize three automated AE detection models (i.e., one per AE of interest) using EHR data from the CHUS. These models will then be validated on a random sample of the remaining 40% of CHUS patients (cohort 1, internal validation set), using chart review to assess accuracy. The most accurate models developed and validated at the CHUS will then be applied to EHR data from a random sample of patients admitted to the MUHC French site (cohort 2) and English site (cohort 3), a critical requirement given the use of narrative data, and accuracy will be assessed using chart review. Generalizability will be determined by comparing AUCs from cohorts 2 and 3 to those from cohort 1. This study will likely produce more accurate and efficient measures of AEs. These measures could be used to assess the incidence rates of AEs, evaluate the success of preventive interventions, or benchmark performance across hospitals.
Sampling methods to the statistical control of the production of blood components.
Pereira, Paulo; Seghatchian, Jerard; Caldeira, Beatriz; Santos, Paula; Castro, Rosa; Fernandes, Teresa; Xavier, Sandra; de Sousa, Gracinda; de Almeida E Sousa, João Paulo
2017-12-01
The control of blood component specifications is a requirement generalized across Europe by the European Commission Directives and in the US by the AABB standards. The use of a statistical process control methodology is recommended in the related literature, including the EDQM guideline. The reliability of this control depends on the sampling. However, a correct sampling methodology does not seem to be systematically applied. Commonly, sampling is intended only to comply with the 1% specification for the produced blood components. Nevertheless, from a purely statistical viewpoint, this model is arguably not a consistent sampling technique. This could severely limit the detection of abnormal patterns and the assurance that production has a non-significant probability of yielding nonconforming components. This article discusses what is happening in blood establishments. Three statistical methodologies are proposed: simple random sampling, sampling based on the proportion of a finite population, and sampling based on the inspection level. The empirical results demonstrate that these models are practicable in blood establishments, contributing to the robustness of sampling and of the related statistical process control decisions for the purpose they are suggested for. Copyright © 2017 Elsevier Ltd. All rights reserved.
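Of the three proposed schemes, sampling based on the proportion of a finite population is the easiest to illustrate: a Cochran-type formula with a finite-population correction. The sketch below uses assumed values for the nonconformity proportion, margin of error and lot sizes, not figures from the article.

```python
import math

def sample_size_finite(N, p=0.01, margin=0.01, z=1.96):
    """Cochran-style sample size for estimating a proportion p in a finite lot
    of N blood components, with the given absolute margin of error at ~95%
    confidence. Values here are illustrative, not the guideline's own numbers."""
    n0 = z ** 2 * p * (1 - p) / margin ** 2         # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))       # finite-population correction

for N in (500, 2000, 10000):
    print(N, "components ->", sample_size_finite(N), "samples")
```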
Metadynamics for training neural network model chemistries: A competitive assessment
NASA Astrophysics Data System (ADS)
Herr, John E.; Yao, Kun; McIntyre, Ryker; Toth, David W.; Parkhill, John
2018-06-01
Neural network model chemistries (NNMCs) promise to facilitate the accurate exploration of chemical space and simulation of large reactive systems. One important path to improving these models is to add layers of physical detail, especially long-range forces. At short range, however, these models are data driven and data limited. Little is systematically known about how data should be sampled, and "test data" chosen randomly from some sampling techniques can provide poor information about generality. If the sampling method is narrow, "test error" can appear encouragingly tiny while the model fails catastrophically elsewhere. In this manuscript, we competitively evaluate two common sampling methods, molecular dynamics (MD) and normal-mode sampling, and one uncommon alternative, Metadynamics (MetaMD), for preparing training geometries. We show that MD is an inefficient sampling method in the sense that additional samples do not improve generality. We also show that MetaMD is easily implemented in any NNMC software package with a cost that scales linearly with the number of atoms in a sample molecule. MetaMD is a black-box way to ensure samples always reach out to new regions of chemical space, while remaining relevant to chemistry near kBT. It is a cheap tool to address the issue of generalization.
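The mechanism by which MetaMD keeps reaching new regions, depositing repulsive Gaussian hills at already-visited configurations, can be demonstrated on a one-dimensional double-well toy model. All parameters below (potential, hill height and width, temperature) are illustrative; real metadynamics acts on molecular collective variables rather than a single coordinate.

```python
import numpy as np

# Toy metadynamics on V(x) = (x^2 - 1)^2: Gaussian bias hills are deposited at
# visited positions so the walker is pushed out of wells it has already explored.
rng = np.random.default_rng(2)
V = lambda x: (x ** 2 - 1.0) ** 2
hills = []                                      # centres of deposited Gaussians
w, s, kT = 0.05, 0.2, 0.1                       # hill height, width, temperature

def bias(x):
    if not hills:
        return 0.0
    c = np.array(hills)
    return np.sum(w * np.exp(-(x - c) ** 2 / (2 * s ** 2)))

x, visited = -1.0, []
for step in range(20000):
    x_new = x + rng.normal(scale=0.1)
    dE = (V(x_new) + bias(x_new)) - (V(x) + bias(x))
    if dE <= 0 or rng.random() < np.exp(-dE / kT):   # Metropolis acceptance
        x = x_new
    if step % 50 == 0:
        hills.append(x)                              # deposit a new hill
    visited.append(x)

print("fraction of time in the right-hand well: %.2f" %
      np.mean(np.array(visited) > 0))
```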
Moving Target Techniques: Cyber Resilience through Randomization, Diversity, and Dynamism
2017-03-03
Moving Target Techniques: Cyber Resilience through Randomization, Diversity, and Dynamism. Hamed Okhravi and Howard Shrobe. Overview: The static... nature of computer systems makes them vulnerable to cyber attacks. Consider a situation where an attacker wants to compromise a remote system running... cyber resilience that attempts to rebalance the cyber landscape is known as cyber moving target (MT) (or just moving target) techniques. Moving target
Emission Properties from ZnO Quantum Dots Dispersed in SiO2 Matrix
NASA Astrophysics Data System (ADS)
Panigrahi, Shrabani; Basak, Durga
2011-07-01
Dispersion of ZnO quantum dots in a SiO2 matrix has been achieved by two techniques based on the Stöber method to form ZnO QDs-SiO2 nanocomposites. Sample A is formed with random dispersion by adding tetraethyl orthosilicate (TEOS) to an ethanolic solution of ZnO nanoparticles, and sample B is formed with a chain-like ordered dispersion by adding ZnO nanoparticles to an already hydrolyzed ethanolic TEOS solution. The photoluminescence spectra of the as-grown nanocomposites show strong emission in the ultraviolet region. When annealed at higher temperature, depending on the sample type, they show strong red or white emission. Interestingly, when the excitation is removed, the orderly dispersed ZnO QDs-SiO2 composite shows a very bright blue fluorescence, visible to the naked eye for a few seconds, indicating its promise for display applications.
NASA Technical Reports Server (NTRS)
Aldrich, R. C.; Dana, R. W.; Roberts, E. H. (Principal Investigator)
1977-01-01
The author has identified the following significant results. A stratified random sample using LANDSAT band 5 and 7 panchromatic prints resulted in estimates of water in counties with sampling errors less than ±9% (67% probability level). A forest inventory using a four-band LANDSAT color composite resulted in estimates of forest area by counties that were within ±6.7% and ±3.7%, respectively (67% probability level). Estimates of forest area for counties by computer-assisted techniques were within ±21% of operational forest survey figures, and for all counties the difference was only one percent. Correlations of airborne terrain reflectance measurements with LANDSAT radiance verified a linear atmospheric model with an additive (path radiance) term and a multiplicative (transmittance) term. Coefficients of determination for 28 of the 32 modeling attempts, those not adversely affected by rain showers occurring between the times of LANDSAT passage and aircraft overflights, exceeded 0.83.
Lam, Tommy Tsan-Yuk; Ip, Hon S.; Ghedin, Elodie; Wentworth, David E.; Halpin, Rebecca A.; Stockwell, Timothy B.; Spiro, David J.; Dusek, Robert J.; Bortner, James B.; Hoskins, Jenny; Bales, Bradley D.; Yparraguirre, Dan R.; Holmes, Edward C.
2012-01-01
Despite the importance of migratory birds in the ecology and evolution of avian influenza virus (AIV), there is a lack of information on the patterns of AIV spread at the intra-continental scale. We applied a variety of statistical phylogeographic techniques to a plethora of viral genome sequence data to determine the strength, pattern and determinants of gene flow in AIV sampled from wild birds in North America. These analyses revealed a clear isolation-by-distance of AIV among sampling localities. In addition, we show that phylogeographic models incorporating information on the avian flyway of sampling proved a better fit to the observed sequence data than those specifying homogeneous or random rates of gene flow among localities. In sum, these data strongly suggest that the intra-continental spread of AIV by migratory birds is subject to major ecological barriers, including spatial distance and avian flyway.
Weatherred, Jane Long
2017-01-01
The way in which the news media frame child sexual abuse can influence public perception. This content analysis of the child sexual abuse coverage of eight national news organizations in the United States from 2002 to 2012 includes the two dominant events of the Catholic Church and Pennsylvania State University child sexual abuse scandals. Census and systematic stratified sampling techniques were applied to articles obtained from the Lexis/Nexis Academic database, resulting in a sample of 503 articles. Intercoder reliability was ensured by double coding a randomly selected sample. Study findings indicate a shift in the attribution of responsibility of child sexual abuse among news organizations over the past decade from an individual-level problem with individual-level solutions to a societal-level problem with institutional culpability. Nevertheless, individual-level solutions continue to be framed as the best possible solution.
Wang, Chunxiao; García-Fernández, David; Mas, Albert; Esteve-Zarzoso, Braulio
2015-01-01
The diversity of fungi in grape must and during wine fermentation was investigated in this study by culture-dependent and culture-independent techniques. Carignan and Grenache grapes were harvested from three vineyards in the Priorat region (Spain) in 2012, and nine samples were selected from the grape must after crushing and during wine fermentation. Using culture-dependent techniques, 362 isolates were randomly selected and identified by 5.8S-ITS-RFLP and 26S-D1/D2 sequencing. Meanwhile, genomic DNA was extracted directly from the nine samples and analyzed by qPCR, DGGE and massive sequencing. The results indicated that grape must after crushing harbored a high species richness of fungi, with Aspergillus tubingensis, Aureobasidium pullulans, or Starmerella bacillaris as the dominant species. As fermentation proceeded, the species richness decreased, and yeasts such as Hanseniaspora uvarum, Starmerella bacillaris and Saccharomyces cerevisiae successively occupied the must samples. The “terroir” characteristics of the fungal population are more related to the location of the vineyard than to the grape variety. Sulfur dioxide treatment had only a small effect on yeast diversity according to the similarity analysis. Because of the large population of fungi on grape berries, massive sequencing was more appropriate for understanding the fungal community in grape must after crushing than the other techniques used in this study. Suitable target sequences and databases were necessary for accurate evaluation of the community and the identification of species by the 454 pyrosequencing of amplicons. PMID:26557110
Shah, R; Worner, S P; Chapman, R B
2012-10-01
Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to get an estimate of the entire population (≥90%) of the resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans, while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, the sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, the sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at frequencies of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes of 3000 and 1500, respectively, were necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
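For the random-dispersion case, the detection sample size has a simple closed form: the smallest n such that 1 - (1 - f)^n ≥ 0.95, where f is the resistance frequency. The sketch below gives 299 for a 1% frequency, essentially the 300 reported above; the study's own figures come from simulation and differ somewhat at higher frequencies and under patchiness.

```python
import math

def detection_sample_size(freq, confidence=0.95):
    """Smallest n such that a simple random sample contains at least one
    resistant individual with the given probability, assuming resistant
    individuals are randomly (binomially) distributed."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - freq))

for f in (0.01, 0.10, 0.20):
    print(f"resistance frequency {f:.0%}: n >= {detection_sample_size(f)}")
```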
Investigating the Randomness of Numbers
ERIC Educational Resources Information Center
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
NASA Astrophysics Data System (ADS)
Pickler, A.; Mota, C. L.; Mantuano, A.; Salata, C.; Nogueira, L. P.; Almeida, A. P.; Alessio, R.; Sena, G.; Braz, D.; de Almeida, C. E. V.; Barroso, R. C.
2015-11-01
Recently, developments in a large number of investigative techniques have been made with the objective of imaging elemental concentrations at micrometer spatial resolution. X-ray microfluorescence analysis (μXRF) is one of those techniques; it is based on the localized excitation of a small area on the surface of a sample, providing information on all elements contained in the material under study. Breast cancer is the most common malignancy in Brazilian women. The main treatment strategies for breast cancer are surgery and chemotherapy. As bone loss is one of the possible side effects of chemotherapy, in this work the μXRF technique was used on femoral head samples of female Wistar rats to evaluate Ca, Fe and Zn concentrations in order to investigate possible elemental changes in bone caused by the chemotherapy. Fifteen female rats were divided randomly into groups (five rats each). The G1 group received doses of doxorubicin/cyclophosphamide drugs and the G2 group was treated with docetaxel/cyclophosphamide drugs. μXRF measurements were carried out at the XRF beamline of the Brazilian Synchrotron Light Laboratory. The results showed a significant decrease, especially in Ca concentrations, when comparing the treated groups with the control group.
Doppler Global Velocimeter Development for the Large Wind Tunnels at Ames Research Center
NASA Technical Reports Server (NTRS)
Reinath, Michael S.
1997-01-01
Development of an optical, laser-based flow-field measurement technique for large wind tunnels is described. The technique uses laser sheet illumination and charged coupled device detectors to rapidly measure flow-field velocity distributions over large planar regions of the flow. Sample measurements are presented that illustrate the capability of the technique. An analysis of measurement uncertainty, which focuses on the random component of uncertainty, shows that precision uncertainty is not dependent on the measured velocity magnitude. For a single-image measurement, the analysis predicts a precision uncertainty of +/-5 m/s. When multiple images are averaged, this uncertainty is shown to decrease. For an average of 100 images, for example, the analysis shows that a precision uncertainty of +/-0.5 m/s can be expected. Sample applications show that vectors aligned with an orthogonal coordinate system are difficult to measure directly. An algebraic transformation is presented which converts measured vectors to the desired orthogonal components. Uncertainty propagation is then used to show how the uncertainty propagates from the direct measurements to the orthogonal components. For a typical forward-scatter viewing geometry, the propagation analysis predicts precision uncertainties of +/-4, +/-7, and +/-6 m/s, respectively, for the U, V, and W components at 68% confidence.
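The quoted reduction from ±5 m/s for a single image to ±0.5 m/s for a 100-image average is the usual 1/√N behaviour for independent random errors, as a quick simulation confirms. The error model below is an assumption for illustration, not the paper's uncertainty analysis.

```python
import numpy as np

# Independent, zero-mean random errors of +/-5 m/s (1 sigma) per single image;
# averaging N images shrinks the precision uncertainty by sqrt(N).
rng = np.random.default_rng(0)
sigma_single = 5.0
for n_images in (1, 25, 100):
    means = rng.normal(0.0, sigma_single, size=(20000, n_images)).mean(axis=1)
    print(f"{n_images:3d} images: precision ~ +/-{means.std():.2f} m/s "
          f"(theory {sigma_single / np.sqrt(n_images):.2f})")
```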
RECAL: A Computer Program for Selecting Sample Days for Recreation Use Estimation
D.L. Erickson; C.J. Liu; H. Ken Cordell; W.L. Chen
1980-01-01
Recreation Calendar (RECAL) is a computer program in PL/I for drawing a sample of days for estimating recreation use. With RECAL, a sampling period of any length may be chosen; simple random, stratified random, and factorial designs can be accommodated. The program randomly allocates days to strata and locations.
Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling
ERIC Educational Resources Information Center
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah
2014-01-01
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
Factors associated with social interaction anxiety among Chinese adolescents.
Peng, Z W; Lam, L T; Jin, J
2011-12-01
To investigate potential risk factors for social anxiety, particularly social interaction anxiety, among Chinese adolescents. A cross-sectional health survey was conducted in Guangzhou city of Guangdong Province, where high school students aged 13 to 18 years were recruited. The sample was selected from all high schools in the city using a 2-stage random cluster sampling technique. Social interaction anxiety was assessed using the Social Interaction Anxiety Scale. Information collected in the survey included: demographics, self-perception of school performance, relationship with teachers and peers, satisfaction with self-image, achievements, and the parenting style of the mother. The parent-child relationship, specifically the relationship between respondents and their mothers, was assessed using the mother attachment subscale of the Inventory of Parent and Peer Attachment. Self-esteem was assessed using the Rosenberg Self-Esteem Scale. The multiple linear regression technique was applied to investigate associations between selected potential risk factors and social interaction anxiety, with adjustments for cluster sampling. Lower family income, lower self-esteem, and hostility were significantly associated with social interaction anxiety among adolescents. Variables identified as risk factors for anxiety disorder in the literature, such as gender, were not associated with social interaction anxiety in this sample. These results were consistent with those of other studies conducted mainly in the United States and Europe. The non-significant results related to gender need to be viewed in the context of the parenting styles of Chinese mothers.
IS GUTTACORE MORE EASILY REMOVED FROM THE ROOT CANAL THAN THERMAFIL? AN EX-VIVO STUDY.
Nevares, Giselle; de Albuquerque, Diana Santana; Bueno, Carlos Eduardo da Silveira; Cunha, Rodrigo Sanches
2015-01-01
GuttaCore is a new cross-linked gutta-percha carrier. Its handling time and ease of removal were compared with those of a plastic carrier (Thermafil) and the continuous wave of condensation technique (control). Forty-five maxillary central incisors were randomly divided into 3 groups according to filling technique, and retreatment was carried out in all samples with NiTi rotary files, hand files and ultrasonic inserts. The time required for filling removal was recorded. Roots were then split longitudinally and photographed under 5x magnification, and residual filling material was quantified. Removal time was significantly longer for Thermafil (7.10 minutes) than for GuttaCore (2.91 minutes) and the control group (1.93 minutes) (p < 0.001). The amount of residual filling material did not differ among the groups: Thermafil 8.31%, GuttaCore 6.27% and control 8.68% (p > 0.05). In conclusion, replacing the plastic core with cross-linked gutta-percha allows easier removal of the carrier from the root canal. The remnants of filling material in all samples illustrate that retreatment remains a challenge in endodontics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando
2015-07-27
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components
NASA Technical Reports Server (NTRS)
Abernethy, K.
1986-01-01
The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo computer simulation study to evaluate the usefulness of the Weibull methods, using samples with a very small number of failures and extensive censoring, are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs that are used in the SSME Weibull analysis are described. The previously documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, along with the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
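The random-censoring set-up described here, uniform censoring times applied to Weibull failure times followed by maximum-likelihood fitting of the censored sample, can be sketched directly. The parameter values and tiny sample size below are illustrative only, not the SSME data or the report's program.

```python
import numpy as np
from scipy.optimize import minimize

# Weibull failure times with uniform random censoring, then censored MLE.
rng = np.random.default_rng(4)
shape_true, scale_true, n = 1.5, 100.0, 15
t_fail = scale_true * rng.weibull(shape_true, n)     # latent failure times
t_cens = rng.uniform(0, 150.0, n)                    # random censoring times
t = np.minimum(t_fail, t_cens)
failed = t_fail <= t_cens                            # True where an actual failure

def neg_loglik(params):
    k, lam = np.exp(params)                          # enforce positivity
    z = (t / lam) ** k
    ll_fail = np.log(k / lam) + (k - 1) * np.log(t / lam) - z   # log pdf
    ll_cens = -z                                     # log survival for censored units
    return -(np.sum(ll_fail[failed]) + np.sum(ll_cens[~failed]))

res = minimize(neg_loglik, x0=np.log([1.0, np.mean(t)]), method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
print(f"{failed.sum()} failures of {n}; shape ~ {k_hat:.2f}, scale ~ {lam_hat:.1f}")
```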
Effect of surgical hand scrub time on subsequent bacterial growth.
Wheelock, S M; Lookinland, S
1997-06-01
In this experimental study, the researchers evaluated the effect of surgical hand scrub time on subsequent bacterial growth and assessed the effectiveness of the glove juice technique in a clinical setting. In a randomized crossover design, 25 perioperative staff members scrubbed for two or three minutes in the first trial and vice versa in the second trial, after which they wore sterile surgical gloves for one hour under clinical conditions. The researchers then sampled the subjects' nondominant hands for bacterial growth, cultured aliquots from the sampling solution, and counted microorganisms. Scrubbing for three minutes produced lower mean log bacterial counts than scrubbing for two minutes. Although the mean bacterial count differed significantly (P = .02) between the two-minute and three-minute surgical hand scrub times, the difference fell below 0.5 log, which is the threshold for practical and clinical significance. This finding suggests that a two-minute surgical hand scrub is clinically as effective as a three-minute surgical hand scrub. The glove juice technique demonstrated sensitivity and reliability in enumerating bacteria on the hands of perioperative staff members in a clinical setting.
Burley, Lisa M; Fell, Richard D; Saacke, Richard G
2008-08-01
We conducted research to examine the potential impacts of coumaphos, fluvalinate, and Apilife VAR (Thymol) on drone honey bee, Apis mellifera L. (Hymenoptera: Apidae), sperm viability over time. Drones were reared in colonies that had been treated with each miticide using the dose recommended on the label. Drones from each miticide treatment were collected, and semen samples were pooled. The pooled samples from each treatment were subdivided and analyzed for periods of up to 6 wk. Random samples were taken from each treatment (n = 6 pools) over the 6-wk period. Sperm viability was measured using dual-fluorescent staining techniques. The exposure of drones to coumaphos during development and sexual maturation significantly reduced sperm viability for all 6 wk. Sperm viability significantly decreased from the initial sample to week 1 in control colonies, and a significant decrease in sperm viability was observed from week 5 to week 6 in all treatments and the control. The potential impacts of these results on queen performance and failure are discussed.
Thermal Conductivity of Polymer/Nano-filler Blends
NASA Technical Reports Server (NTRS)
Ghose, Sayata; Watson, Kent A.; Delozier, Donovan M.; Working, Dennis C.; Connell, John W.; Smith, Joseph G.; Sun, Y. P.; Lin, Y.
2006-01-01
To improve the thermal conductivity of an ethylene vinyl acetate copolymer, Elvax 260 was compounded with three carbon based nano-fillers. Multiwalled carbon nanotubes (MWCNT), vapor grown carbon nanofibers (CNF) and expanded graphite (EG) were investigated. In an attempt to improve compatibility between the Elvax and nanofillers, MWCNTs and EGs were modified through non covalent and covalent attachment of alkyl groups. Ribbons were extruded to form samples in which the nanofillers were aligned, and samples were also fabricated by compression molding in which the nano-fillers were randomly oriented. The thermal properties were evaluated by DSC and TGA, and mechanical properties of the aligned samples were determined by tensile testing. The degree of dispersion and alignment of the nanoparticles were investigated using high-resolution scanning electron microscopy. Thermal conductivity measurements were performed using a Nanoflash technique. The thermal conductivity of the samples was measured in both the direction of alignment as well as perpendicular to that direction. The results of this study will be presented.
Walther, Charles; Jeremiasen, Martin; Rissler, Pehr; Johansson, Jan L M; Larsson, Marie S; Walther, Bruno S C S
2016-12-01
Background Sampling of submucosal lesions in the gastrointestinal tract through a flexible endoscope is a well-recognized clinical problem. One technique often used is endoscopic ultrasound-guided fine-needle aspiration, but it does not provide solid tissue biopsies with preserved architecture for histopathological evaluation. To obtain solid tissue biopsies from submucosal lesions, we have constructed a new endoscopic biopsy tool and compared it in a crossover study with the standard double cupped forceps. Methods Ten patients with endoscopically verified submucosal lesions were sampled. The endoscopist selected the position for the biopsies and used the instrument selected by randomization. After a biopsy was harvested, the endoscopist chose the next site for a biopsy and again used the instrument picked by randomization. A total of 6 biopsies, 3 with the forceps and 3 with the drill instrument, were collected in every patient. Results The drill instrument resulted in larger total size biopsies (mm²; Mann-Whitney U test, P = .048) and a larger submucosal part (%) of the biopsies (Mann-Whitney U test, P = .003) than the forceps. Two patients were observed because of chest pain and suspicion of bleeding in 24 hours. No therapeutic measures were necessary. Conclusion The new drill instrument for flexible endoscopy can safely deliver submucosal tissue samples from submucosal lesions in the upper gastrointestinal tract. © The Author(s) 2016.
Milker, Yvonne; Weinkauf, Manuel F G; Titschack, Jürgen; Freiwald, Andre; Krüger, Stefan; Jorissen, Frans J; Schmiedl, Gerhard
2017-01-01
We present paleo-water depth reconstructions for the Pefka E section deposited on the island of Rhodes (Greece) during the early Pleistocene. For these reconstructions, a transfer function (TF) using modern benthic foraminifera surface samples from the Adriatic and Western Mediterranean Seas has been developed. The TF model gives an overall predictive accuracy of ~50 m over a water depth range of ~1200 m. Two separate TF models for shallower and deeper water depth ranges indicate a good predictive accuracy of 9 m for shallower water depths (0-200 m) but far less accuracy of 130 m for deeper water depths (200-1200 m) due to uneven sampling along the water depth gradient. To test the robustness of the TF, we randomly selected modern samples to develop random TFs, showing that the model is robust for water depths between 20 and 850 m while greater water depths are underestimated. We applied the TF to the Pefka E fossil data set. The goodness-of-fit statistics showed that most fossil samples have a poor to extremely poor fit to water depth. We interpret this as a consequence of a lack of modern analogues for the fossil samples and removed all samples with extremely poor fit. To test the robustness and significance of the reconstructions, we compared them to reconstructions from an alternative TF model based on the modern analogue technique and applied the randomization TF test. We found our estimates to be robust and significant at the 95% confidence level, but we also observed that our estimates are strongly overprinted by orbital, precession-driven changes in paleo-productivity and corrected our estimates by filtering out the precession-related component. We compared our corrected record to reconstructions based on a modified plankton/benthos (P/B) ratio, excluding infaunal species, and to stable oxygen isotope data from the same section, as well as to paleo-water depth estimates for the Lindos Bay Formation of other sediment sections of Rhodes. These comparisons indicate that our orbital-corrected reconstructions are reasonable and reflect major tectonic movements of Rhodes during the early Pleistocene.
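As a concrete illustration of the modern analogue technique used above as a cross-check, the following minimal sketch estimates paleo-water depth as the dissimilarity-weighted mean depth of the k closest modern surface samples. It is not the study's transfer-function code; the array layout, the squared chord distance, and k = 5 analogues are assumptions made only for illustration.

```python
import numpy as np

def mat_depths(fossil, modern, modern_depths, k=5):
    """Modern analogue technique: estimate paleo-water depth for each fossil
    assemblage as the dissimilarity-weighted mean depth of its k most similar
    modern surface samples (rows are samples, columns are taxon abundances)."""
    fossil, modern = np.asarray(fossil, float), np.asarray(modern, float)
    modern_depths = np.asarray(modern_depths, float)
    depths, min_dissim = [], []
    for assemblage in fossil:
        # squared chord distance, a common dissimilarity for assemblage data
        d = np.sum((np.sqrt(assemblage) - np.sqrt(modern)) ** 2, axis=1)
        nearest = np.argsort(d)[:k]
        weights = 1.0 / (d[nearest] + 1e-12)      # closer analogues weigh more
        depths.append(np.average(modern_depths[nearest], weights=weights))
        min_dissim.append(d[nearest].min())       # large value -> poor analogue fit
    return np.asarray(depths), np.asarray(min_dissim)
```

The second return value gives a crude goodness-of-fit measure: fossil samples whose nearest modern analogue is still very dissimilar would be flagged and, as in the study, could be excluded from the reconstruction.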
Oguz, Yuksel; Guler, Ismail; Erdem, Ahmet; Mutlu, Mehmet Firat; Gumuslu, Seyhan; Oktem, Mesut; Bozkurt, Nuray; Erdem, Mehmet
2018-03-23
To compare the effect of two different sperm preparation techniques, the swim-up and gradient methods, on sperm deoxyribonucleic acid (DNA) fragmentation status of semen samples from unexplained and mild male factor subfertile patients undergoing intrauterine insemination (IUI). A prospective randomized study was conducted in 65 subfertile patients (34 with unexplained and 31 with mild male factor infertility) to compare basal and post-procedure DNA fragmentation rates between the swim-up and gradient techniques. Sperm DNA fragmentation rates were evaluated by a sperm chromatin dispersion (SCD) test in two portions of each semen sample, prepared with either the swim-up or the gradient technique. Sperm motility and morphology were also assessed based on WHO 2010 criteria. The swim-up method, but not the gradient method, yielded a statistically significant reduction in the rate of DNA-fragmented sperm after preparation compared with basal rates, in the semen samples of both the unexplained (41.85 ± 22.04 vs. 28.58 ± 21.93, p < 0.001 for swim-up; and 41.85 ± 22.04 vs. 38.79 ± 22.30, p = 0.160 for gradient) and mild male factor (46.61 ± 19.38 vs. 30.32 ± 18.20, p < 0.001 for swim-up; and 46.61 ± 19.38 vs. 44.03 ± 20.87, p = 0.470 for gradient) subgroups. The swim-up method significantly reduces sperm DNA fragmentation rates and may have some prognostic value for intrauterine insemination in patients with decreased sperm DNA integrity.
Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representations.
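The report's B-spline and rational-spline formulations are not reproduced here, but the underlying idea, representing the quantile function with a smooth monotone curve and feeding uniform variates through it, can be sketched as follows. This is a hypothetical illustration using SciPy's PCHIP interpolator rather than the formulations in the report.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def make_quantile_sampler(data):
    """Fit a monotone cubic (PCHIP) spline to the empirical quantile function
    of `data` and return a sampler that generates new variates by plugging
    uniform random numbers into the fitted spline (inverse-transform sampling)."""
    x = np.sort(np.asarray(data, dtype=float))
    p = (np.arange(1, len(x) + 1) - 0.5) / len(x)   # plotting positions in (0, 1)
    quantile_spline = PchipInterpolator(p, x)        # monotone spline Q(p)

    def sample(size, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        return quantile_spline(rng.uniform(p[0], p[-1], size))

    return sample

# usage: fit the sampler to a skewed experimental sample, then simulate from it
rng = np.random.default_rng(0)
sampler = make_quantile_sampler(rng.lognormal(size=500))
simulated = sampler(10_000, rng)
```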
Ntozini, Robert; Marks, Sara J; Mangwadu, Goldberg; Mbuya, Mduduzi N N; Gerema, Grace; Mutasa, Batsirai; Julian, Timothy R; Schwab, Kellogg J; Humphrey, Jean H; Zungu, Lindiwe I
2015-12-15
Access to water and sanitation are important determinants of behavioral responses to hygiene and sanitation interventions. We estimated cluster-specific water access and sanitation coverage to inform a constrained randomization technique in the SHINE trial. Technicians and engineers inspected all public-access water sources to ascertain seasonality, function, and geospatial coordinates. Households and water sources were mapped using open-source geospatial software. The distance from each household to the nearest perennial, functional, protected water source was calculated; for each cluster, we derived the median distance and the proportions of households within 500 m and beyond 1500 m of such a water source. Cluster-specific sanitation coverage was ascertained using a random sample of 13 households per cluster. These parameters were included as covariates in randomization to optimize balance in water and sanitation access across treatment arms at the start of the trial. The observed high variability between clusters in both parameters suggests that constraining on these factors was needed to reduce the risk of bias. © The Author 2015. Published by Oxford University Press for the Infectious Diseases Society of America.
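A minimal sketch of the constrained randomization idea, not the SHINE trial's actual algorithm, is shown below; the candidate-sampling scheme, the balance tolerance, and the covariate standardization are assumptions made for illustration.

```python
import numpy as np

def constrained_randomization(covariates, n_arms=2, n_candidates=100_000,
                              tol=0.1, seed=1):
    """Constrained randomization sketch: generate many candidate cluster-to-arm
    allocations, keep those whose arm means of the balancing covariates (e.g.
    median distance to water, sanitation coverage) stay within `tol` standard
    deviations of the overall mean, then pick one acceptable allocation at random."""
    rng = np.random.default_rng(seed)
    X = np.asarray(covariates, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)            # standardize covariates
    n = len(X)
    acceptable = []
    for _ in range(n_candidates):
        arms = rng.permutation(np.arange(n) % n_arms)    # (roughly) equal-size arms
        means = np.array([X[arms == a].mean(axis=0) for a in range(n_arms)])
        if np.all(np.abs(means - means.mean(axis=0)) < tol):
            acceptable.append(arms)
    # raises if no allocation meets the constraint; relax tol in that case
    return acceptable[rng.integers(len(acceptable))]
```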
Hunt, J G; Watchman, C J; Bolch, W E
2007-01-01
Absorbed fraction (AF) calculations for the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D microCT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was Visual Monte Carlo (VMC), which simulates the emission of the alpha particles and their subsequent energy-deposition tracks. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternating, uniform sampling of their cumulative distribution functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques.
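The chord-based idea can be sketched as alternating inverse-CDF draws from the trabecular and marrow-cavity chord-length distributions. The sketch below is hypothetical and deliberately simplified: it ignores the different stopping powers of bone and marrow and any isotropy corrections, and all names are assumptions rather than the study's implementation.

```python
import numpy as np

def sample_chords(trab_cdf_x, trab_cdf_p, cavity_cdf_x, cavity_cdf_p,
                  alpha_range_um, rng=None):
    """Alternately draw trabecular-bone and marrow-cavity chord lengths by
    inverse-CDF (uniform) sampling until the alpha particle's residual range
    is used up; return the path length spent in each region (micrometres)."""
    rng = np.random.default_rng() if rng is None else rng

    def draw(x, p):
        return np.interp(rng.uniform(), p, x)     # invert the empirical CDF

    remaining, in_bone, in_marrow = alpha_range_um, 0.0, 0.0
    bone = True                                    # start inside a trabecula
    while remaining > 0:
        chord = draw(trab_cdf_x, trab_cdf_p) if bone else draw(cavity_cdf_x, cavity_cdf_p)
        step = min(chord, remaining)
        if bone:
            in_bone += step
        else:
            in_marrow += step
        remaining -= step
        bone = not bone                            # cross into the other region
    return in_bone, in_marrow
```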
Measurement of breast-tissue x-ray attenuation by spectral mammography: solid lesions
NASA Astrophysics Data System (ADS)
Fredenberg, Erik; Kilburn-Toppin, Fleur; Willsher, Paula; Moa, Elin; Danielsson, Mats; Dance, David R.; Young, Kenneth C.; Wallis, Matthew G.
2016-04-01
Knowledge of x-ray attenuation is essential for developing and evaluating x-ray imaging technologies. For instance, techniques to distinguish between cysts and solid tumours at mammography screening would be highly desirable to reduce recalls, but the development requires knowledge of the x-ray attenuation for cysts and tumours. We have previously measured the attenuation of cyst fluid using photon-counting spectral mammography. Data on x-ray attenuation for solid breast lesions are available in the literature, but cover a relatively wide range, likely caused by natural spread between samples, random measurement errors, and different experimental conditions. In this study, we have adapted a previously developed spectral method to measure the linear attenuation of solid breast lesions. A total of 56 malignant and 5 benign lesions were included in the study. The samples were placed in a holder that allowed for thickness measurement. Spectral (energy-resolved) images of the samples were acquired and the image signal was mapped to equivalent thicknesses of two known reference materials, which can be used to derive the x-ray attenuation as a function of energy. The spread in equivalent material thicknesses was relatively large between samples, which is likely to be caused mainly by natural variation and only to a minor extent by random measurement errors and sample inhomogeneity. No significant difference in attenuation was found between benign and malignant solid lesions. The separation between cyst-fluid and tumour attenuation was, however, significant, which suggests it may be possible to distinguish cystic from solid breast lesions, and the results lay the groundwork for a clinical trial. In addition, the study adds a relatively large sample set to the published data and may contribute to a reduction in the overall uncertainty in the literature.
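The mapping from spectral image signal to equivalent thicknesses of two known reference materials can be illustrated with the textbook two-bin linearization below. The attenuation coefficients are made-up placeholders, and the study's calibration-based mapping is more elaborate than this sketch.

```python
import numpy as np

# Hypothetical effective linear attenuation coefficients (1/cm) of the two
# reference materials in the detector's low- and high-energy bins.
MU = np.array([[0.80, 0.55],    # [mu_mat1(low),  mu_mat2(low)]
               [0.50, 0.30]])   # [mu_mat1(high), mu_mat2(high)]

def equivalent_thicknesses(I_low, I_high, I0_low, I0_high):
    """Map a two-bin spectral transmission measurement to equivalent thicknesses
    (cm) of two reference materials by solving, in each energy bin,
    -ln(I / I0) = mu1 * t1 + mu2 * t2."""
    log_att = np.array([-np.log(I_low / I0_low),
                        -np.log(I_high / I0_high)])
    t1, t2 = np.linalg.solve(MU, log_att)
    return t1, t2
```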
Development and application of the maximum entropy method and other spectral estimation techniques
NASA Astrophysics Data System (ADS)
King, W. R.
1980-09-01
This summary report is a collection of four separate progress reports prepared under three contracts, all sponsored by the Office of Naval Research in Arlington, Virginia. The report contains the results of investigations into the application of the maximum entropy method (MEM), a high-resolution frequency and wavenumber estimation technique, as well as a description, provided in the final report section, of two new, stable, high-resolution spectral estimation techniques. Many examples of wavenumber spectral patterns for all investigated techniques are included throughout the report. The maximum entropy method is also known as the maximum entropy spectral analysis (MESA) technique, and both names are used in the report. Many MEM wavenumber spectral patterns are demonstrated using both simulated and measured radar signal and noise data. Methods for obtaining stable MEM wavenumber spectra are discussed, broadband signal detection using the MEM prediction error transform (PET) is discussed, and Doppler radar narrowband signal detection is demonstrated using the MEM technique. It is also shown that MEM cannot be applied to randomly sampled data. The two new, stable, high-resolution spectral estimation techniques discussed in the final report section are named the Wiener-King and the Fourier spectral estimation techniques. They share a similar derivation based upon the Wiener prediction filter but are otherwise quite different. Further development of the techniques and measurement of their spectral characteristics are recommended for subsequent investigation.
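For readers unfamiliar with MEM/MESA, the following is a generic sketch of maximum entropy spectral estimation cast as autoregressive modeling via Burg's lattice recursion. It is not code from the report, and the model order and FFT length are arbitrary choices.

```python
import numpy as np

def burg_psd(x, order, nfft=1024):
    """Maximum entropy (Burg) spectral estimate: fit an AR(p) model with the
    Burg lattice recursion, then evaluate the AR power spectrum."""
    x = np.asarray(x, dtype=float)
    f = x.copy()                 # forward prediction errors
    b = x.copy()                 # backward prediction errors
    a = np.array([1.0])          # AR polynomial, a[0] = 1
    rho = np.dot(x, x) / len(x)  # driving-noise variance estimate
    for _ in range(order):
        fp, bp = f[1:], b[:-1]
        k = -2.0 * np.dot(fp, bp) / (np.dot(fp, fp) + np.dot(bp, bp))
        f, b = fp + k * bp, bp + k * fp                      # update lattice errors
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([a, [0.0]])[::-1]
        rho *= 1.0 - k * k                                   # shrink residual power
    freqs = np.fft.rfftfreq(nfft)                            # normalized frequencies
    psd = rho / np.abs(np.fft.rfft(a, nfft)) ** 2
    return freqs, psd
```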
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
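The simplest approach described above, an individually randomized sample size inflated by a design effect, can be sketched as follows. This is a generic illustration assuming a two-proportion outcome and a common cluster size, not a formula quoted verbatim from the paper.

```python
import math
from scipy.stats import norm

def crt_sample_size(p1, p2, m, icc, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-arm, parallel, completely randomized CRT:
    standard two-proportion formula for individual randomization, inflated by
    the design effect 1 + (m - 1) * ICC for a common cluster size m."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_individual = ((z_a + z_b) ** 2 *
                    (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)
    n_clustered = n_individual * (1 + (m - 1) * icc)   # design-effect inflation
    return math.ceil(n_clustered), math.ceil(n_clustered / m)  # individuals, clusters per arm

# e.g. detecting 20% vs 30% with clusters of 50 and ICC = 0.05
print(crt_sample_size(0.20, 0.30, m=50, icc=0.05))
```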
Rare event simulation in radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kollman, Craig
1993-10-01
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable that is, with overwhelming probability, equal to zero. These problems often have high-dimensional state spaces and irregular geometries, so analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well-known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs, and the results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero-variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
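As a toy illustration of the importance-sampling principle described above (a proposal shifted toward the rare region, with likelihood-ratio weighting to keep the estimator unbiased), and not of the dissertation's neutron-transport models, consider estimating a Gaussian tail probability:

```python
import numpy as np

def rare_tail_probability(threshold=5.0, n=100_000, seed=0):
    """Estimate P(Z > threshold) for Z ~ N(0, 1) by importance sampling:
    draw from a proposal centred at the threshold and multiply each sample
    by the likelihood ratio between target and proposal densities."""
    rng = np.random.default_rng(seed)
    y = rng.normal(loc=threshold, scale=1.0, size=n)       # proposal N(threshold, 1)
    # likelihood ratio phi(y) / phi(y - threshold) for the standard-normal target
    lr = np.exp(-0.5 * y**2 + 0.5 * (y - threshold)**2)
    return np.mean((y > threshold) * lr)

# naive Monte Carlo would need billions of samples to see even a handful of hits
print(rare_tail_probability())     # close to 2.9e-7 for threshold = 5
```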
Dietz, Pavel; Striegel, Heiko; Franke, Andreas G; Lieb, Klaus; Simon, Perikles; Ulrich, Rolf
2013-01-01
To estimate the 12-month prevalence of cognitive-enhancing drug use. Paper-and-pencil questionnaire that used the randomized response technique. University in Mainz, Germany. A total of 2569 university students who completed the questionnaire. An anonymous, specialized questionnaire that used the randomized response technique was distributed to students at the beginning of classes and was collected afterward. From the responses, we calculated the prevalence of students taking drugs only to improve their cognitive performance and not to treat underlying mental disorders such as attention-deficit-hyperactivity disorder, depression, and sleep disorders. The estimated 12-month prevalence of using cognitive-enhancing drugs was 20%. Prevalence varied by sex (male 23.7%, female 17.0%), field of study (highest in students studying sports-related fields, 25.4%), and semester (first semester 24.3%, beyond first semester 16.7%). To our knowledge, this is the first time that the randomized response technique has been used to survey students about cognitive-enhancing drug use. Using the randomized response technique, our questionnaire provided data that showed a high 12-month prevalence of cognitive-enhancing drug use in German university students. Our study suggests that other direct survey techniques have underestimated the use of these drugs. Drug prevention programs need to be established at universities to address this issue. © 2013 Pharmacotherapy Publications, Inc.
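The abstract does not state which randomized response design was used, so the sketch below illustrates the classic Warner design estimator instead; the randomization probability p and the example counts are hypothetical.

```python
import math

def warner_estimate(n_yes, n_total, p=0.7):
    """Warner's randomized response estimator: each respondent answers the
    sensitive statement with probability p and its negation with 1 - p
    (decided privately by a randomizing device), so individual answers are
    uninformative while the prevalence pi is still identified (p != 0.5)."""
    lam = n_yes / n_total                             # observed proportion of 'yes'
    pi_hat = (lam - (1 - p)) / (2 * p - 1)
    se = math.sqrt(lam * (1 - lam) / (n_total * (2 * p - 1) ** 2))
    return pi_hat, (pi_hat - 1.96 * se, pi_hat + 1.96 * se)

# e.g. 900 'yes' answers among 2569 respondents with p = 0.7 (illustrative only)
print(warner_estimate(900, 2569))
```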
Montserrat-Bosch, Marta; Figueiredo, Rui; Nogueira-Magalhães, Pedro; Arnabat-Dominguez, Josep; Valmaseda-Castellón, Eduard; Gay-Escoda, Cosme
2014-07-01
To compare the efficacy and complication rates of two different techniques for inferior alveolar nerve blocks (IANB). A randomized, triple-blind clinical trial comprising 109 patients who required lower third molar removal was performed. In the control group, all patients received an IANB using the conventional Halsted technique, whereas in the experimental group a modified technique using a more inferior injection point was performed. A total of 100 patients were randomized. The modified technique group showed a significantly longer onset time in the lower lip and chin area, and the technique was frequently associated with a lingual electric-discharge sensation. Three failures were recorded, 2 of them in the experimental group. No relevant local or systemic complications were registered. Both IANB techniques used in this trial are suitable for lower third molar removal. However, performing an inferior alveolar nerve block in a more inferior position (modified technique) extends the onset time, does not seem to reduce the risk of intravascular injections, and might increase the risk of lingual nerve injuries.
Ragunath, K; Krasner, N; Raman, V S; Haqqani, M T; Cheung, W Y
2003-12-01
The value of methylene blue-directed biopsies (MBDB) in detecting specialized intestinal metaplasia and dysplasia in Barrett's esophagus remains unclear. The aim of this study was to compare the accuracy of MBDB with random biopsy in detecting intestinal metaplasia and dysplasia in patients with Barrett's esophagus. A prospective, randomized, cross-over trial was undertaken to compare MBDB with random biopsy in patients with Barrett's esophagus segments 3 cm or more in length without macroscopic evidence of dysplasia or cancer. Dysplasia was graded as indefinite for dysplasia, low-grade dysplasia, high-grade dysplasia, or carcinoma, and was reported in a blinded fashion. Fifty-seven patients were recruited, 44 of whom were male. A total of 1,269 biopsies were taken (MBDB, 651; random biopsy, 618). Per-biopsy analysis showed that the MBDB technique diagnosed significantly more specialized intestinal metaplasia (75%) than the random biopsy technique (68%; P = 0.032). The sensitivity and specificity of MBDB for diagnosing specialized intestinal metaplasia were 91% (95% CI, 88-93%) and 43% (95% CI, 36-51%), respectively. The sensitivity and specificity of MBDB for diagnosing dysplasia or carcinoma were 49% (95% CI, 38-61%) and 85% (95% CI, 82-88%), respectively. There was no significant difference in the diagnosis of dysplasia and carcinoma (MBDB 12%, random biopsy 10%). The methylene blue staining pattern appeared to influence the detection of specialized intestinal metaplasia and dysplasia/carcinoma: dark blue staining was associated with increased detection of specialized intestinal metaplasia (P < 0.0001), while heterogeneous staining (P = 0.137) or no staining (P = 0.005) was associated with dysplasia and/or carcinoma detection. The MBDB technique prolonged the endoscopy examination by an average of 6 min. The diagnostic accuracy of the MBDB technique was superior to that of the random biopsy technique for identifying specialized intestinal metaplasia, but not dysplasia or carcinoma. The intensity of methylene blue staining has an influence on the detection of specialized intestinal metaplasia and dysplasia or carcinoma, which may help in targeting the biopsies. Although MBDB slightly prolongs the endoscopy procedure, it is a safe and well-tolerated procedure. Further clinical studies on the MBDB technique exclusively in endoscopically normal dysplastic Barrett's esophagus are needed.
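For readers reproducing per-biopsy accuracy figures like those above, the following generic sketch computes sensitivity, specificity, and positive predictive value with simple Wald intervals from a 2 x 2 table; the counts are hypothetical, and the trial's exact confidence-interval method may differ.

```python
import math

def diagnostic_accuracy(tp, fp, fn, tn, z=1.96):
    """Sensitivity, specificity, and PPV with Wald 95% confidence intervals
    from a 2 x 2 table of test results against the reference standard."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, (max(0.0, p - half), min(1.0, p + half))
    return {"sensitivity": prop_ci(tp, tp + fn),
            "specificity": prop_ci(tn, tn + fp),
            "ppv":         prop_ci(tp, tp + fp)}

# hypothetical counts, not the trial's data
print(diagnostic_accuracy(tp=300, fp=60, fn=30, tn=45))
```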
A Mixed Effects Randomized Item Response Model
ERIC Educational Resources Information Center
Fox, J.-P.; Wyrick, Cheryl
2008-01-01
The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…
Hamaker, E L; Asparouhov, T; Brose, A; Schmiedek, F; Muthén, B
2018-04-06
With the growing popularity of intensive longitudinal research, the modeling techniques and software options for such data are also expanding rapidly. Here we use dynamic multilevel modeling, as incorporated in the new dynamic structural equation modeling (DSEM) toolbox in Mplus, to analyze the affective data from the COGITO study. These data consist of two samples of over 100 individuals each who were measured for about 100 days. We use composite scores of positive and negative affect and apply a multilevel vector autoregressive model to allow for individual differences in means, autoregressions, and cross-lagged effects. We then extend the model to include random residual variances and covariance, and finally we investigate whether prior depression affects later depression scores through the random effects of the daily diary measures. We end by discussing several urgent, but mostly unresolved, issues in the area of dynamic multilevel modeling.
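A small simulation of the kind of process the dynamic multilevel model describes, person-specific means, autoregressions, and cross-lagged effects in a bivariate VAR(1), is sketched below. It is not the COGITO data or Mplus DSEM syntax, and all parameter values are arbitrary placeholders.

```python
import numpy as np

def simulate_multilevel_var1(n_people=100, n_days=100, seed=0):
    """Generate affect-like data from a bivariate VAR(1) in which the means,
    autoregressions, and cross-lagged effects vary randomly over persons."""
    rng = np.random.default_rng(seed)
    data = np.empty((n_people, n_days, 2))           # [:, :, 0] = PA, [:, :, 1] = NA
    for i in range(n_people):
        mu = rng.normal([3.0, 1.5], 0.3)                               # person-specific means
        Phi = np.array([[0.3, -0.1],
                        [-0.1, 0.3]]) + rng.normal(0, 0.05, (2, 2))    # person-specific dynamics
        y = mu.copy()
        for t in range(n_days):
            innovation = rng.normal(0, 0.4, 2)                         # daily residuals
            y = mu + Phi @ (y - mu) + innovation
            data[i, t] = y
    return data
```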