D. Erran Seaman
1997-01-01
We monitored the threatened Northern Spotted Owl (Strix occidentalis caurina) in Olympic National Park from 1992 through 1996. We used a stratified random sampling scheme to survey 35 plots totaling 236 km², approximately 10 percent of the forested area of the park.
On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods
NASA Technical Reports Server (NTRS)
Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.
2003-01-01
Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
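As a hedged illustration of the core idea (inexpensive derivative information used as a control variate to reduce Monte Carlo variance), the short Python sketch below is not the paper's code; the test function, its derivative, and the sample size are assumptions chosen only for the example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "expensive" analysis output and its cheap sensitivity derivative.
def f(x):              # analysis output (illustrative)
    return np.exp(0.3 * x) + 0.1 * x**2

def dfdx(x):           # sensitivity derivative, assumed cheap to obtain
    return 0.3 * np.exp(0.3 * x) + 0.2 * x

mu, sigma, n = 1.0, 0.5, 10_000
x = rng.normal(mu, sigma, n)

# Plain Monte Carlo estimate of E[f(X)].
plain = f(x).mean()

# Linearized surrogate built from the derivative at the mean;
# its exact expectation is f(mu) because E[X - mu] = 0.
g = f(mu) + dfdx(mu) * (x - mu)
beta = np.cov(f(x), g)[0, 1] / g.var()         # control-variate weight
corrected = plain - beta * (g.mean() - f(mu))  # variance-reduced estimator

print(f"plain MC: {plain:.5f}, derivative-corrected: {corrected:.5f}")

The same correction applies unchanged when the base estimator is stratified sampling rather than plain Monte Carlo.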
Xun-Ping, W; An, Z
2017-07-27
Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which took plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m×50 m plot in a marshland in the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; secondly, the required numbers of optimal sampling points for each layer were calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA were compared. Results The SOPA method proposed in this study had the minimal absolute error of 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study achieves higher estimation accuracy than the other four methods.
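The abstract's optimal-allocation step relies on the Hammond-McCullagh equation, which is not reproduced here. As a rough, hedged illustration of how sampling points can be allocated across plant-abundance strata, the Python sketch below uses standard Neyman allocation; the stratum sizes and standard deviations are made-up numbers, not values from the study.

import numpy as np

# Hypothetical strata defined by plant abundance (5 layers, as in the study design).
# N_h: candidate sampling cells per stratum; s_h: assumed within-stratum standard
# deviation of snail density (illustrative values only).
N_h = np.array([400, 350, 300, 250, 200])
s_h = np.array([0.8, 1.1, 1.5, 2.0, 2.6])
n_total = 225  # total sampling points to allocate

# Neyman allocation: n_h proportional to N_h * s_h.
weights = N_h * s_h
n_h = np.round(n_total * weights / weights.sum()).astype(int)

for layer, n in enumerate(n_h, start=1):
    print(f"stratum {layer}: {n} sampling points")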
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Johnson, J. K.
1979-01-01
An efficient procedure is developed that clusters data using a completely unsupervised clustering algorithm and then uses labeled pixels either to label the resulting clusters or to perform a stratified estimate using the clusters as strata. Three clustering algorithms, CLASSY, AMOEBA, and ISOCLS, are compared for efficiency. Three stratified estimation schemes and three labeling schemes are also considered and compared.
Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y
2015-06-01
A new stratified sampling procedure is proposed in order to establish an accurate estimation of Varroa destructor populations on the sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular grid stratification in the case of a spatially structured process. Because the distribution of varroa mites on sticky boards is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are presented on the basis of a large sample of simulated sticky boards (n=20,000), which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in varroa mite number estimation is then measured by the percentage of counts with an error greater than a given level. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
SNP selection and classification of genome-wide SNP data using stratified sampling random forests.
Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K
2012-09-01
For high-dimensional genome-wide association (GWA) case-control data of complex disease, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in random forests that uses the default mtry parameter to choose the feature subspace will select too many subspaces without informative SNPs. Exhaustively searching for an optimal mtry is often required in order to include useful and relevant SNPs and discard the vast number of non-informative SNPs; however, this is too time-consuming and not practical for high-dimensional GWA data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for GWA high-dimensional data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs while avoiding the very high computational cost of an exhaustive search for an optimal mtry, and it maintains the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408 803 SNPs and Alzheimer case-control data comprising 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and that it can generate better random forests with higher accuracy and lower error bound than those produced by Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders and warrant further biological investigation.
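A minimal Python sketch of the stratified feature-subspace idea described above: SNPs are binned by an informativeness score with an equal-width scheme, and the same number is drawn from each bin to form one tree's subspace. The scores, the group count, and the per-group quota below are synthetic assumptions, not the authors' settings.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for SNP informativeness scores (e.g., single-SNP test
# statistics); real GWA data would supply these.
n_snps = 10_000
scores = rng.gamma(shape=1.5, scale=1.0, size=n_snps)

def stratified_subspace(scores, n_groups=10, per_group=5, rng=rng):
    """Equal-width discretization of informativeness, then equal sampling per group."""
    edges = np.linspace(scores.min(), scores.max(), n_groups + 1)
    groups = np.digitize(scores, edges[1:-1])   # group index 0 .. n_groups-1
    subspace = []
    for g in range(n_groups):
        members = np.flatnonzero(groups == g)
        if members.size:
            take = min(per_group, members.size)
            subspace.extend(rng.choice(members, size=take, replace=False))
    return np.array(subspace)

# One feature subspace for one tree; a forest would repeat this per tree.
print(stratified_subspace(scores)[:20])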
NASA Astrophysics Data System (ADS)
Molinario, G.; Hansen, M.; Potapov, P.
2016-12-01
High resolution satellite imagery obtained from the National Geospatial Intelligence Agency through NASA was used to photo-interpret sample areas within the DRC. The area sampled is a stratification of forest cover loss from circa 2014 that occurred either completely within the previously mapped homogeneous area of the Rural Complex, at its interface with primary forest, or in isolated forest perforations. Previous research resulted in a map of these areas that contextualizes forest loss depending on where it occurs and with what spatial density, leading to a better understanding of the real impacts of livelihood shifting cultivation on forest degradation. The stratified random sampling approach of these areas allows the characterization of the constituent land cover types within these areas, and their variability throughout the DRC. Shifting cultivation has a variable forest degradation footprint in the DRC depending on the many factors that drive it, but its role in forest degradation and deforestation has been disputed, leading us to investigate and quantify the clearing and reuse rates within the strata throughout the country.
Quantum image pseudocolor coding based on the density-stratified method
NASA Astrophysics Data System (ADS)
Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na
2015-05-01
Pseudocolor processing is a branch of image enhancement. It maps grayscale images to color images to make them more visually appealing or to highlight certain parts of the images. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and changes density values from gray to color in parallel according to the colormap. First, two data structures, the quantum image GQIR and the quantum colormap QCR, are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on these, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two kinds of examples help to describe the scheme further. Finally, future work is discussed.
Recent progresses in outcome-dependent sampling with failure time data.
Ding, Jieli; Lu, Tsui-Shan; Cai, Jianwen; Zhou, Haibo
2017-01-01
An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest happens or not, and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of the study and improve its efficiency. We review recent progress and advances in research on ODS designs with failure time data. This includes research on ODS-related designs such as the case-cohort design, generalized case-cohort design, stratified case-cohort design, general failure-time ODS design, length-biased sampling design and interval sampling design.
The systematic component of phylogenetic error as a function of taxonomic sampling under parsimony.
Debry, Ronald W
2005-06-01
The effect of taxonomic sampling on phylogenetic accuracy under parsimony is examined by simulating nucleotide sequence evolution. Random error is minimized by using very large numbers of simulated characters. This allows estimation of the consistency behavior of parsimony, even for trees with up to 100 taxa. Data were simulated on 8 distinct 100-taxon model trees and analyzed as stratified subsets containing either 25 or 50 taxa, in addition to the full 100-taxon data set. Overall accuracy decreased in a majority of cases when taxa were added. However, the magnitude of change in the cases in which accuracy increased was larger than the magnitude of change in the cases in which accuracy decreased, so, on average, overall accuracy increased as more taxa were included. A stratified sampling scheme was used to assess accuracy for an initial subsample of 25 taxa. The 25-taxon analyses were compared to 50- and 100-taxon analyses that were pruned to include only the original 25 taxa. On average, accuracy for the 25 taxa was improved by taxon addition, but there was considerable variation in the degree of improvement among the model trees and across different rates of substitution.
A cost-effectiveness comparison of existing and Landsat-aided snow water content estimation systems
NASA Technical Reports Server (NTRS)
Sharp, J. M.; Thomas, R. W.
1975-01-01
This study describes how Landsat imagery can be cost-effectively employed to augment an operational hydrologic model. Attention is directed toward the estimation of snow water content, a major predictor variable in the volumetric runoff forecasting model presently used by the California Department of Water Resources. A stratified double sampling scheme is supplemented with qualitative and quantitative analyses of existing operations to develop a comparison between the existing and satellite-aided approaches to snow water content estimation. Results show a decided advantage for the Landsat-aided approach.
Stratified random selection of watersheds allowed us to compare geographically-independent classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme within the Northern Lakes a...
Effects of unstratified and centre-stratified randomization in multi-centre clinical trials.
Anisimov, Vladimir V
2011-01-01
This paper deals with the analysis of randomization effects in multi-centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre-stratified block-permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson-gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed-form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre-stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
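To make the recruitment and imbalance ideas concrete, here is a small, hedged simulation sketch in Python: centre recruitment rates are drawn from a gamma distribution, patient counts from Poisson processes, and arm imbalance accumulates only in incomplete permuted blocks under centre-stratified randomization. All parameter values are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters: 50 centres, gamma-distributed recruitment rates,
# a 12-month recruitment window, block size 4, two treatment arms.
n_centres, months, block_size = 50, 12, 4
rates = rng.gamma(shape=2.0, scale=1.5, size=n_centres)   # patients/month per centre
counts = rng.poisson(rates * months)                       # patients per centre

def centre_imbalance(n_patients, block_size, rng):
    """Arm imbalance (B minus A) under centre-stratified block-permuted randomization."""
    imbalance = 0
    remaining = n_patients
    while remaining > 0:
        block = min(block_size, remaining)
        arms = rng.permutation([0, 1] * (block_size // 2))[:block]  # permuted block
        imbalance += int(arms.sum()) - (block - int(arms.sum()))
        remaining -= block
    return imbalance

total_imbalance = sum(centre_imbalance(c, block_size, rng) for c in counts)
print("total imbalance between arms:", total_imbalance)

Repeating the simulation many times would give the imbalance distribution whose normal approximation the paper studies.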
Little, R; Wheeler, K; Edge, S
2017-02-11
This paper examines farmer attitudes towards the development of a voluntary risk-based trading scheme for cattle in England as a risk mitigation measure for bovine tuberculosis (bTB). The research reported here was commissioned to gather evidence on the type of scheme that would have a good chance of success in improving the information farmers receive about the bTB risk of cattle they buy. Telephone interviews were conducted with a stratified random sample of 203 cattle farmers in England, splitting the interviews equally between respondents in the high-risk area and low-risk area for bTB. Supplementary interviews and focus groups with farmers were also carried out across the risk areas. Results suggest a greater enthusiasm for a risk-based trading scheme in low-risk areas compared with high-risk areas and among members of breed societies and cattle health schemes. Third-party certification of herds by private vets or the Animal and Plant Health Agency was regarded as the most credible source, with farmer self-certification being favoured by sellers but regarded as least credible by buyers. Understanding farmers' attitudes towards voluntary risk-based trading is important to gauge likely uptake, understand preferences for information provision, and assist in monitoring, evaluating and refining the scheme once established. British Veterinary Association.
Well-balanced Schemes for Gravitationally Stratified Media
NASA Astrophysics Data System (ADS)
Käppeli, R.; Mishra, S.
2015-10-01
We present a well-balanced scheme for the Euler equations with gravitation. The scheme is capable of maintaining exactly (up to machine precision) a discrete hydrostatic equilibrium without any assumption on a thermodynamic variable such as specific entropy or temperature. The well-balanced scheme is based on a local hydrostatic pressure reconstruction. Moreover, it is computationally efficient and can be incorporated into any existing algorithm in a straightforward manner. The presented scheme improves over standard ones especially when flows close to a hydrostatic equilibrium have to be simulated. The performance of the well-balanced scheme is demonstrated on an astrophysically relevant application: a toy model for core-collapse supernovae.
An evaluation of flow-stratified sampling for estimating suspended sediment loads
Robert B. Thomas; Jack Lewis
1995-01-01
Flow-stratified sampling is a new method for sampling water quality constituents such as suspended sediment to estimate loads. As with selection-at-list-time (SALT) and time-stratified sampling, flow-stratified sampling is a statistical method requiring random sampling, and yielding unbiased estimates of load and variance. It can be used to estimate event...
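The load estimate behind any stratified scheme of this kind is the usual stratified expansion. The Python sketch below is a hedged, generic illustration (not the SALT or flow-stratified estimator in detail) with synthetic flow strata and sediment-flux samples.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical flow strata (e.g., low/medium/high discharge classes): N_h is the
# number of sampling time units per stratum over the event, and samples are
# measured sediment fluxes (mass per time unit) taken at random within each stratum.
N_h = np.array([600, 300, 100])
samples = [rng.lognormal(mean=m, sigma=0.4, size=n)
           for m, n in zip([1.0, 2.0, 3.0], [6, 8, 10])]

# Stratified expansion estimate of total load and its variance
# (with finite population correction within each stratum).
load = sum(N * s.mean() for N, s in zip(N_h, samples))
var = sum(N**2 * (1 - s.size / N) * s.var(ddof=1) / s.size
          for N, s in zip(N_h, samples))

print(f"estimated load: {load:.1f}, standard error: {np.sqrt(var):.1f}")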
Anetoh, Maureen Ugonwa; Jibuaku, Chiamaka Henrietta; Nduka, Sunday Odunke; Uzodinma, Samuel Uchenna
2017-01-01
Tertiary Institutions' Social Health Insurance Programme (TISHIP) is an arm of the National Health Insurance Scheme (NHIS), which provides quality healthcare to students in Nigerian higher institutions. The success of this scheme depends on the students' knowledge and awareness of its existence as well as the level of its implementation by healthcare providers. This study was therefore designed to assess students' knowledge of and attitude towards TISHIP and its level of implementation among health workers in the Nnamdi Azikiwe University Medical Centre. Using a stratified random sampling technique, 420 undergraduate students of Nnamdi Azikiwe University, Awka were assessed on their level of awareness and general assessment of TISHIP through an adapted and validated questionnaire instrument. The level of implementation of the scheme was then assessed among 50 randomly selected staff of the University Medical Centre. Data collected were analyzed using Statistical Package for Social Sciences (SPSS) version 20 software. Whereas the students, in general, showed a high level of TISHIP awareness, more than half of them (56.3%) had never benefited from the scheme, and 52.8% expressed dissatisfaction with the quality of care offered under the scheme. However, an overwhelming number of the students (87.9%) opined that the scheme should continue. On the other hand, the University Medical Centre staff responses indicated satisfactory scheme implementation. The study found satisfactory TISHIP awareness but poor attitudes towards the scheme among Nnamdi Azikiwe University students. Furthermore, the University Medical Centre health workers showed a strong commitment to the objectives of the scheme.
Sun, Mei; Shen, Jay J; Li, Chengyue; Cochran, Christopher; Wang, Ying; Chen, Fei; Li, Pingping; Lu, Jun; Chang, Fengshui; Li, Xiaohong; Hao, Mo
2016-08-22
This study aimed to measure the poverty head count ratio and poverty gap of rural Yanbian in order to examine whether China's New Rural Cooperative Medical Scheme has alleviated its medical impoverishment and to compare the results of this alternative approach with those of a World Bank approach. This cross-sectional study was based on a stratified random sample survey of 1,987 households and 6,135 individuals conducted in 2008 across eight counties in Yanbian Korean Autonomous Prefecture, Jilin province, China. A new approach was developed to define and identify medical impoverishment. The poverty head count ratio, relative poverty gap, and average poverty gap were used to measure medical impoverishment. Changes in medical impoverishment after the reimbursement under the New Rural Cooperative Medical Scheme were also examined. The government-run New Rural Cooperative Medical Scheme reduced the number of medically impoverished households by 24.6 %, as well as the relative and average gaps by 37.3 % and 38.9 %, respectively. China's New Rural Cooperative Medical Scheme has certain positive but limited effects on alleviating medical impoverishment in rural Yanbian regardless of how medical impoverishment is defined and measured. More governmental and private-sector efforts should therefore be encouraged to further improve the system in terms of financing, operation, and reimbursement policy.
Numerical simulation of stratified flows from laboratory experiments to coastal ocean
NASA Astrophysics Data System (ADS)
Fraunie, Philippe
2014-05-01
Numerical modeling of the flow past a vertical strip towed uniformly at constant velocity in the horizontal direction in a linearly stratified tank, based on a finite-difference solver adapted to the low-Reynolds-number Navier-Stokes equations with a transport equation for salinity (LES simulation [6]), has demonstrated reasonable agreement with data from schlieren visualization, density marker and probe measurements of internal wave fields. Another approach, based on two different numerical methods for one specific case of stably stratified incompressible flow, was developed using compact finite-difference discretizations. The numerical scheme itself follows the principle of semi-discretisation, with high-order compact discretisation in space, while the time integration is carried out by the Strong Stability Preserving Runge-Kutta scheme. Results were compared against the reference solution obtained by the AUSM finite volume method [7]. The test case demonstrated the ability of the selected numerical methods to represent stably stratified flows over a horizontal strip [4] and hill-type 2D obstacles [1, 3] with generation of internal waves. Building on previous realistic LES [4] and RANS [8] simulations, the ability of research codes to reproduce field observations is discussed. ACKNOWLEDGMENTS This research work was supported by Region Provence Alpes Côte d'Azur - Modtercom project, the Research Plan MSM 6840770010 of the Ministry of Education of the Czech Republic and the Russian Foundation for Basic Research (grant 12-01-00128). REFERENCES 1. Chashechkin Yu.D., Mitkin V.V. Experimental study of a fine structure of 2D wakes and mixing past an obstacle in a continuously stratified fluid // Dynamics of Atmosphere and Oceans. 2001. V. 34. P. 165-187. 2. Chashechkin Yu.D. Hydrodynamics of a sphere in a stratified fluid // Fluid Dyn. 1989. V. 24(1). P. 1-7. 3. Mitkin V.V., Chashechkin Yu.D. Transformation of hanging discontinuities into vortex systems in a stratified flow behind a cylinder // Fluid Dyn. 2007. V. 42(1). P. 12-23. 4. Bardakov R.N., Mitkin V.V., Chashechkin Yu.D. Fine structure of a stratified flow near a flat-plate surface // J. Appl. Mech. Tech. Phys. 2007. V. 48(6). P. 840-851. 5. Chashechkin Yu.D., Mitkin V.V. An effect of a lift force on the structure of attached internal waves in a continuously stratified fluid // Dokl. Phys. 2001. V. 46(6). P. 425-428. 6. Houcine H., Chashechkin Yu.D., Fraunié P., Fernando H.J.S., Gharbi A., Lili T. Numerical modeling of the generation of internal waves by uniform stratified flow over a thin vertical barrier // Int. J. Num. Methods in Fluids. 2012. V. 68(4). P. 451-466. DOI: 10.1002/fld.2513. 7. Bodnar T., Benes, Fraunié P., Kozel K. Application of Compact Finite-Difference Schemes to Simulations of Stably Stratified Fluid Flows. Applied Mathematics and Computation 219: 3336-3353, 2012. doi:10.1016/j.amc.2011.08.058. 8. Schaeffer A., Molcard A., Forget P., Fraunié P., Garreau P. Generation mechanisms for mesoscale eddies in the Gulf of Lions: radar observation and modelling. Ocean Dynamics vol. 61, 10, pp. 1587-1609, 2011. DOI: 10.1007/s10236-011-0482-8.
Travaglini, Davide; Fattorini, Lorenzo; Barbati, Anna; Bottalico, Francesca; Corona, Piermaria; Ferretti, Marco; Chirici, Gherardo
2013-04-01
A correct characterization of the status and trend of forest condition is essential to support reporting processes at national and international level. An international forest condition monitoring has been implemented in Europe since 1987 under the auspices of the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests). The monitoring is based on harmonized methodologies, with individual countries being responsible for its implementation. Due to inconsistencies and problems in sampling design, however, the ICP Forests network is not able to produce reliable quantitative estimates of forest condition at European and sometimes at country level. This paper proposes (1) a set of requirements for status and change assessment and (2) a harmonized sampling strategy able to provide unbiased and consistent estimators of forest condition parameters and of their changes at both country and European level. Under the assumption that a common definition of forest holds among European countries, monitoring objectives, parameters of concern and accuracy indexes are stated. On the basis of fixed-area plot sampling performed independently in each country, an unbiased and consistent estimator of forest defoliation indexes is obtained at both country and European level, together with conservative estimators of their sampling variance and power in the detection of changes. The strategy adopts a probabilistic sampling scheme based on fixed-area plots selected by means of systematic or stratified schemes. Operative guidelines for its application are provided.
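As a hedged illustration of how plot-based country estimates can be combined into a European figure (not the authors' estimator, and with invented defoliation values and forest areas used as weights), consider the following Python sketch.

import numpy as np

rng = np.random.default_rng(4)

# Hypothetical per-country plot samples of a defoliation index (percent);
# the forest areas are illustrative combination weights only.
countries = {
    "A": dict(area=10.0, plots=rng.normal(22, 8, 40)),
    "B": dict(area=25.0, plots=rng.normal(18, 6, 60)),
    "C": dict(area=15.0, plots=rng.normal(27, 9, 35)),
}

total_area = sum(c["area"] for c in countries.values())
eu_mean, eu_var = 0.0, 0.0
for name, c in countries.items():
    w = c["area"] / total_area
    m = c["plots"].mean()
    v = c["plots"].var(ddof=1) / c["plots"].size   # variance of the country mean
    eu_mean += w * m
    eu_var += w**2 * v                              # countries sampled independently
    print(f"country {name}: mean defoliation {m:.1f}%")

print(f"European estimate: {eu_mean:.1f}% (SE {np.sqrt(eu_var):.1f})")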
NASA Technical Reports Server (NTRS)
Pitts, D. E.; Badhwar, G.
1980-01-01
The development of agricultural remote sensing systems requires knowledge of agricultural field size distributions so that the sensors, sampling frames, image interpretation schemes, registration systems, and classification systems can be properly designed. Malila et al. (1976) studied the field size distribution for wheat and all other crops in two Kansas LACIE (Large Area Crop Inventory Experiment) intensive test sites using ground observations of the crops and measurements of their field areas based on current year rectified aerial photomaps. The field area and size distributions reported in the present investigation are derived from a representative subset of a stratified random sample of LACIE sample segments. In contrast to previous work, the obtained results indicate that most field-size distributions are not log-normally distributed. The most common field size observed in this study was 10 acres for most crops studied.
State-of-the-art practices in farmland biodiversity monitoring for North America and Europe.
Herzog, Felix; Franklin, Janet
2016-12-01
Policy makers and farmers need to know the status of farmland biodiversity in order to meet conservation goals and evaluate management options. Based on a review of 11 monitoring programs in Europe and North America and on related literature, we identify the design choices or attributes of a program that balance monitoring costs and usefulness for stakeholders. A useful program monitors habitats, vascular plants, and possibly faunal groups (ecosystem service providers, charismatic species) using a stratified random sample of the agricultural landscape, including marginal and intensive regions. The size of landscape samples varies with the grain of the agricultural landscape; for example, samples are smaller in Europe and larger in North America. Raw data are collected in a rolling survey, which distributes sampling over several years. Sufficient practical experience is now available to implement broad monitoring schemes on both continents. Technological developments in remote sensing, metagenomics, and social media may offer new opportunities for affordable farmland biodiversity monitoring and help to lower the overall costs of monitoring programs.
Efficient Radiative Transfer for Dynamically Evolving Stratified Atmospheres
NASA Astrophysics Data System (ADS)
Judge, Philip G.
2017-12-01
We present a fast multi-level and multi-atom non-local thermodynamic equilibrium radiative transfer method for dynamically evolving stratified atmospheres, such as the solar atmosphere. The preconditioning method of Rybicki & Hummer (RH92) is adopted. However, to meet the need for speed and stability, a “second-order escape probability” scheme is implemented within the framework of the RH92 method, in which frequency and angle integrals are carried out analytically. While this minimizes the computational work needed, it comes at the expense of numerical accuracy. The iteration scheme is local; the formal solutions for the intensities are the only non-local component. At present the methods have been coded for vertical transport, applicable to atmospheres that are highly stratified. The probabilistic method seems adequately fast, stable, and sufficiently accurate for exploring dynamical interactions between the evolving MHD atmosphere and radiation using current computer hardware. Current 2D and 3D dynamics codes do not include this interaction as consistently as the current method does. The solutions generated may ultimately serve as initial conditions for dynamical calculations including full 3D radiative transfer. The National Center for Atmospheric Research is sponsored by the National Science Foundation.
Jing, Limei; Chen, Ru; Jing, Lisa; Qiao, Yun; Lou, Jiquan; Xu, Jing; Wang, Junwei; Chen, Wen; Sun, Xiaoming
2017-07-01
Basic Medical Insurance (BMI) has changed remarkably over time in China because of health reforms that aim to achieve universal coverage and better health care through increased subsidies, reimbursement, and benefits. In this paper, we present the development of BMI, including financing and operation, with a systematic review. Pudong New Area in Shanghai was chosen as a typical BMI sample for its coverage and management; a stratified cluster sampling survey together with an ordinary logistic regression model was used for the analysis. Enrolee satisfaction and the factors associated with enrolee satisfaction with BMI were analysed. We found that the re-enrolment rate superficially improved BMI coverage and nearly achieved universal coverage. However, BMI funds still faced the dual problems of fund deficits and under-compensation of the insured, and a long-term strategy is needed to realize the integration of BMI schemes with more homogeneous coverage and benefits. Moreover, Urban Resident Basic Medical Insurance participants reported a higher rate of dissatisfaction than other participants. The key predictors of enrolee satisfaction were awareness of the premium and compensation, affordability of out-of-pocket costs, and the proportion of reimbursement. These results highlight the importance of the Chinese government taking measures, such as strengthening BMI fund management, exploring mixed payment methods, and regulating sequential medical orders, to develop an integrated medical insurance system with universal coverage and vertical equity while simultaneously improving enrolee satisfaction. Copyright © 2017 John Wiley & Sons, Ltd.
Unsteady Shear Disturbances Within a Two Dimensional Stratified Flow
NASA Technical Reports Server (NTRS)
Yokota, Jeffrey W.
1992-01-01
The origin and evolution of shear disturbances within a stratified, inviscid, incompressible flow are investigated numerically by a Clebsch/Weber decomposition based scheme. In contrast to homogeneous flows, within which vorticity can be redistributed but not generated, the presence of a density stratification can render an otherwise irrotational flow vortical. In this work, a kinematic decomposition of the unsteady Euler equations separates the unsteady velocity field into rotational and irrotational components. The subsequent evolution of these components is used to study the influence various velocity disturbances have on both stratified and homogeneous flows. In particular, the flow within a two-dimensional channel is used to investigate the evolution of rotational disturbances, generated or convected, downstream from an unsteady inflow condition. Contrasting simulations of both stratified and homogeneous flows are used to distinguish between redistributed inflow vorticity and that which is generated by a density stratification.
Assessing technical performance at diverse ambulatory care sites.
Osterweis, M; Bryant, E
1978-01-01
The purpose of the large study reported here was to develop and test methods for assessing the quality of health care that would be broadly applicable to diverse ambulatory care organizations for periodic comparative review. Methodological features included the use of an age-sex stratified random sampling scheme, dependence on medical records as the source of data, a fixed study period year, use of Kessner's tracer methodology (including not only acute and chronic diseases but also screening and immunization rates as indicators), and a fixed tracer matrix at all test sites. This combination of methods proved more efficacious in estimating certain parameters for the total patient populations at each site (including utilization patterns, screening, and immunization rates) and the process of care for acute conditions than it did in examining the process of care for the selected chronic condition. It was found that the actual process of care at all three sites for the three acute conditions (streptococcal pharyngitis, urinary tract infection, and iron deficiency anemia) often differed from the expected process in terms of both diagnostic procedures and treatment. For hypertension, the chronic disease tracer, medical records were frequently a deficient data source from which to draw conclusions about the adequacy of treatment. Several aspects of the study methodology were found to be detrimental to between-site comparisons of the process of care for chronic disease management. The use of an age-sex stratified random sampling scheme resulted in the identification of too few cases of hypertension at some sites for analytic purposes, thereby necessitating supplementary sampling by diagnosis. The use of a fixed study period year resulted in an arbitrary starting point in the course of the disease. Furthermore, in light of the diverse sociodemographic characteristics of the patient populations, the use of a fixed matrix of tracer conditions for all test sites is questionable. The discussion centers on these and other problems encountered in attempting to compare technical performance within diverse ambulatory care organizations and provides some guidelines as to the utility of alternative methods for assessing the quality of health care.
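A minimal, hypothetical Python sketch of the age-sex stratified random sampling step described above, drawing equal-sized samples from each age-sex stratum of a synthetic patient roster; the stratum boundaries, quotas, and roster are assumptions, not the study's design values.

import numpy as np

rng = np.random.default_rng(5)

# Synthetic patient roster: (patient_id, sex, age) tuples standing in for a site's
# medical-record index.
roster = [(i, rng.choice(["F", "M"]), int(rng.integers(0, 90))) for i in range(5000)]
age_bins = [(0, 17), (18, 44), (45, 64), (65, 120)]
per_stratum = 25

sample = []
for sex in ("F", "M"):
    for lo, hi in age_bins:
        members = [pid for pid, s, age in roster if s == sex and lo <= age <= hi]
        take = min(per_stratum, len(members))
        sample.extend(rng.choice(members, size=take, replace=False))

print(f"drew {len(sample)} patients across {2 * len(age_bins)} age-sex strata")

As the abstract notes, such a scheme can yield too few cases of a specific diagnosis, which is why supplementary sampling by diagnosis was needed.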
Agago, Tesfamichael Alaro; Woldie, Mirkuzie; Ololo, Shimeles
2014-07-01
Cost-sharing between beneficiaries and governments is critical to achieve universal health care coverage. To address this, Ethiopia is currently introducing Social Health Insurance. However, there has been limited evidence on willingness to join the newly proposed insurance scheme in the country. The purpose of this study is to assess willingness to join and pay for the scheme among teachers in Wolaita Sodo Town government educational institutions, South Ethiopia. A cross-sectional study was conducted from February 5 to March 10, 2012 on 335 teachers. Stratified simple random sampling technique was used and data were collected using structured interviewer administered questionnaire. Binary and multiple logistic regressions were used to estimate the crude and adjusted odds ratios for willingness to pay. Three hundred twenty-eight teachers participated in the study with response rate of 98%. About 55% of the teachers had never heard of any type of health insurance scheme. However, 74.4% of them were willing to pay for the suggested insurance scheme. About 47% of those who were willing to pay agreed to contribute greater than or equal to 4% of their monthly salaries. Willingness to pay was more likely among those who had heard about health insurance, had previous history of inability to pay for medical bills and achieved higher educational status. The majority of the teachers were willing to join social health insurance; however, adequate awareness creation and discussion should be made with all employees at various levels for the successful implementation of the scheme.
Number of pins in two-stage stratified sampling for estimating herbage yield
William G. O' Regan; C. Eugene Conrad
1975-01-01
In a two-stage stratified procedure for sampling herbage yield, plots are stratified by a pin frame in stage one, and clipped. In stage two, clippings from selected plots are sorted, dried, and weighed. Sample size and distribution of plots between the two stages are determined by equations. A way to compute the effect of number of pins on the variance of estimated...
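The allocation equations themselves are not reproduced in the abstract. The following hedged Python sketch applies the standard double-sampling allocation that minimizes variance for a fixed budget; the cost and variance figures are invented for illustration and are not the authors' values.

import numpy as np

def two_stage_allocation(total_cost, c1, c2, between_var, within_var):
    """Allocate first-stage (cheap, pin-frame) and second-stage (costly, clipped)
    plot numbers to minimize  V = between_var/n1 + within_var/n2
    subject to  total_cost = c1*n1 + c2*n2  (standard double-sampling result)."""
    ratio = np.sqrt((within_var * c1) / (between_var * c2))   # n2 / n1
    n1 = total_cost / (c1 + c2 * ratio)
    n2 = ratio * n1
    return int(round(n1)), int(round(n2))

# Illustrative numbers: pin-frame reading costs 1 unit, clipping costs 20 units,
# and the variance components are guesses for a herbage-yield survey.
n1, n2 = two_stage_allocation(total_cost=500, c1=1.0, c2=20.0,
                              between_var=0.6, within_var=0.3)
print(f"first-stage plots: {n1}, clipped plots: {n2}")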
NASA Astrophysics Data System (ADS)
Ginzburg, Irina
2016-02-01
In this Comment on the recent work (Zhu and Ma, 2013) [11] by Zhu and Ma (ZM) we first show that all three local gray Lattice Boltzmann (GLB) schemes in the form (Zhu and Ma, 2013) [11]: GS (Chen and Zhu, 2008; Gao and Sharma, 1994) [1,4], WBS (Walsh et al., 2009) [12] and ZM, fail to get constant Darcy's velocity in a series of porous blocks. This inconsistency is because of their incorrect definition of the macroscopic velocity in the presence of the heterogeneous momentum exchange, while the original WBS model (Walsh et al., 2009) [12] does this properly. We improve the GS and ZM schemes for this and other related deficiencies. Second, we show that the "discontinuous velocity" they recover on the stratified interfaces with their WBS scheme is inherent, in different degrees, to all LBE Brinkman schemes, including the ZM scheme. None of them guarantees the stress and the velocity continuity by their implicit interface conditions, even in the frame of the two-relaxation-times (TRT) collision operator where these two properties are assured in stratified Stokes flow, Ginzburg (2007) [5]. Third, the GLB schemes are presented in work (Zhu and Ma, 2013) [11] as the alternative ones to direct, Brinkman-force based (BF) schemes (Freed, 1998; Nie and Martys, 2007) [3,8]. Yet, we show that the BF-TRT scheme (Ginzburg, 2008) [6] gets the solutions of any of the improved GLB schemes for a specific, viscosity-dependent choice of its one or two local relaxation rates. This provides the principal difference between the GLB and BF: while the BF may respect the linearity of the Stokes-Brinkman equation rigorously, the GLB-TRT cannot, unless it reduces to the BF via the inverse transform of the relaxation rates. Furthermore, we show that, in limited parameter space, "gray" schemes may run one another. From the practical point of view, permeability values obtained with the GLB are viscosity-dependent, unlike with the BF. Finally, the GLB shares with the BF a so-called anisotropy (Ginzburg, 2008; Nie and Martys, 2007) [6,8], that is, flow-direction-dependency in their effective viscosity corrections, related to the discretized spatial variation of the resistance forcing.
A nutrient injection scheme for in situ bio-remediation.
Lin, C H; Kuo, M C Tom; Su, C Y; Liang, K F; Han, Y L
2012-01-01
Geological layers often have different hydraulic conductivities. This paper presents an innovative design for delivering aqueous substrates and nutrients to various stratified layers at desired rates during in-situ bio-stimulation. The new delivery system consists of intermittent porous tubes connected in series with impermeable polyethylene tubes that run horizontally in each stratified layer of a contaminated aquifer. Results of the tracer test indicated that the distribution of tritium through each porous tube was fairly uniform. A mathematical model was also developed to calculate the distribution of water flow through each porous tube. By controlling the permeability and the length of porous tubes placed in stratified layers, the new design provides a means to selectively deliver nutrients to various layers at desired rates according to aquifer heterogeneity.
Guo, Chao; Wang, Zhenjie; Li, Ning; Chen, Gong; Zheng, Xiaoying
2017-12-01
To estimate the prevalence of, and association between, co-morbid visual and psychiatric disabilities among elderly (>65 years-of-age) persons in China. Random representative samples were obtained using multistage, stratified, cluster sampling, with probabilities proportional to size. Standard weighting procedures were used to construct sample weights that reflected this multistage, stratified cluster sampling survey scheme. Logistic regression models were used to elucidate associations between visual and psychiatric disabilities. Among the Chinese elderly, >160,000 persons have co-morbid visual and psychiatric disabilities. The weighted prevalence among this cohort is 123.7 per 100,000 persons. A higher prevalence of co-morbid visual and psychiatric disabilities was found in the oldest-old (p<0.001); women (65-79 years-of-age, p=0.001; ≥80 years-of-age, p=0.004); illiterate (65-79 years-of-age, p<0.001; ≥80 years-of-age, p=0.02); and single elders (65-79 years-of-age, p=0.01; ≥80 years-of-age, p=0.001). Presence of a visual disability was significantly associated with a higher risk of having a psychiatric disability among persons aged ≥80 years-of-age [adjusted odds ratio, 1.24; 95% confidence interval (CI), 1.03-1.54]. A significant number of Chinese elderly persons were living with co-morbid visual and psychiatric disabilities. To address the challenge of these co-morbid disorders among Chinese elders, it is incumbent upon the government to implement additional and more comprehensive prevention and rehabilitation strategies for health-care systems, reinforce health promotion among the elderly, and improve accessibility to health-care services.
Stratified charge rotary engine critical technology enablement. Volume 2: Appendixes
NASA Technical Reports Server (NTRS)
Irion, C. E.; Mount, R. E.
1992-01-01
This second volume of appendixes is a companion to Volume 1 of this report which summarizes results of a critical technology enablement effort with the stratified charge rotary engine (SCRE) focusing on a power section of 0.67 liters (40 cu. in.) per rotor in single and two rotor versions. The work is a continuation of prior NASA Contracts NAS3-23056 and NAS3-24628. Technical objectives are multi-fuel capability, including civil and military jet fuel and DF-2, fuel efficiency of 0.355 Lbs/BHP-Hr. at best cruise condition above 50 percent power, altitude capability of up to 10Km (33,000 ft.) cruise, 2000 hour TBO and reduced coolant heat rejection. Critical technologies for SCRE's that have the potential for competitive performance and cost in a representative light-aircraft environment were examined. Objectives were: the development and utilization of advanced analytical tools, i.e. higher speed and enhanced three dimensional combustion modeling; identification of critical technologies; development of improved instrumentation; and to isolate and quantitatively identify the contribution to performance and efficiency of critical components or subsystems. A family of four-stage third-order explicit Runge-Kutta schemes is derived that required only two locations and has desirable stability characteristics. Error control is achieved by embedding a second-order scheme within the four-stage procedure. Certain schemes are identified that are as efficient and accurate as conventional embedded schemes of comparable order and require fewer storage locations.
Validation of an In-Water, Tower-Shading Correction Scheme
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Doyle, John P.; Zibordi, Giuseppe; vanderLinde, Dirk
2003-01-01
Large offshore structures used for the deployment of optical instruments can significantly perturb the intensity of the light field surrounding the optical measurement point, where different portions of the visible spectrum are subject to different shadowing effects. These effects degrade the quality of the acquired optical data and can reduce the accuracy of several derived quantities, such as those obtained by applying bio-optical algorithms directly to the shadow-perturbed data. As a result, optical remote sensing calibration and validation studies can be impaired if shadowing artifacts are not fully accounted for. In this work, the general in-water shadowing problem is examined for a particular case study. Backward Monte Carlo (MC) radiative transfer computations, performed in a vertically stratified, horizontally inhomogeneous, and realistic ocean-atmosphere system, are shown to accurately simulate the shadow-induced relative percent errors affecting the radiance and irradiance data profiles acquired close to an oceanographic tower. Multiparameter optical data processing has provided adequate representation of experimental uncertainties, allowing consistent comparison with simulations. The more detailed simulations at the subsurface depth appear to be essentially equivalent to those obtained assuming a simplified ocean-atmosphere system, except in highly stratified waters. MC computations performed in the simplified system can be assumed, therefore, to accurately simulate the optical measurements conducted under more complex sampling conditions (i.e., within waters presenting moderate stratification at most). A previously reported correction scheme, based on the simplified MC simulations, and developed for subsurface shadow-removal processing of in-water optical data taken close to the investigated oceanographic tower, is then validated adequately under most experimental conditions. It appears feasible to generalize the present tower-specific approach to solve other optical sensor shadowing problems pertaining to differently shaped deployment platforms, and also including surrounding structures and instrument casings.
Stratified flows with variable density: mathematical modelling and numerical challenges.
NASA Astrophysics Data System (ADS)
Murillo, Javier; Navas-Montilla, Adrian
2017-04-01
Stratified flows appear in a wide variety of fundamental problems in hydrological and geophysical sciences. They may range from hyperconcentrated floods carrying sediment and causing collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. In stratified flows, variable horizontal density is also present. Depending on the case, density varies according to the volumetric concentration of different components or species that can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proven quality are required. Under these complex scenarios it is necessary to verify that the numerical solution provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work focuses on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro. A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux-ADER schemes with application to hyperbolic conservation laws with geometric source terms, J. Comput. Phys. 317 (2016) 108-147. J. Murillo and A. Navas-Montilla, A comprehensive explanation and exercise of the source terms in hyperbolic systems using Roe type solutions. Application to the 1D-2D shallow water equations, Advances in Water Resources 98 (2016) 70-96.
A number of articles have investigated the impact of sampling design on remotely sensed landcover accuracy estimates. Gong and Howarth (1990) found significant differences for Kappa accuracy values when comparing purepixel sampling, stratified random sampling, and stratified sys...
Vikram, K; Sharma, A K; Kannan, A T
2013-09-01
Janani Suraksha Yojana (JSY), a conditional cash transfer scheme introduced to improve institutional delivery rates and thereby reduce maternal and infant mortality, was implemented in all States and Union Territories of India from 2007. The present study was carried out to identify the beneficiary-level factors in the utilization of the JSY scheme in urban slums and resettlement colonies of the trans-Yamuna area of Delhi. A cross-sectional community-based survey of mothers of infants in the selected areas of the two districts was carried out by stratified random sampling on a population-proportionate basis. Socio-demographic factors, antenatal services availed and distance to the nearest health facility were studied. The outcome variable, being a beneficiary, identified a woman who had ever interacted with the ASHA of her area during the antenatal period of her previous pregnancy and had given birth in an institution. Descriptive tables were drawn; univariate analysis followed by multiple logistic regression was applied to identify the predictors of availing the benefits. Of the 469 mothers interviewed, 333 (71%) had an institutional delivery, 128 (27.3%) had benefited from the JSY scheme and 68 (14.5%) had received cash benefits of JSY. Belonging to the Hindu religion and having had more than 6 antenatal check-ups were the significant predictors of availing the benefits of JSY. There is a need to improve awareness among the urban slum population about the utilization of the JSY scheme. Targeting difficult-to-access areas with special measures and encouraging more antenatal visits are essential prerequisites to improving the impact of JSY.
Spatial and Temporal Dynamics of Pacific Oyster Hemolymph Microbiota across Multiple Scales
Lokmer, Ana; Goedknegt, M. Anouk; Thieltges, David W.; Fiorentino, Dario; Kuenzel, Sven; Baines, John F.; Wegner, K. Mathias
2016-01-01
Unveiling the factors and processes that shape the dynamics of host associated microbial communities (microbiota) under natural conditions is an important part of understanding and predicting an organism's response to a changing environment. The microbiota is shaped by host (i.e., genetic) factors as well as by the biotic and abiotic environment. Studying natural variation of microbial community composition in multiple host genetic backgrounds across spatial as well as temporal scales represents a means to untangle this complex interplay. Here, we combined a spatially-stratified with a longitudinal sampling scheme within differentiated host genetic backgrounds by reciprocally transplanting Pacific oysters between two sites in the Wadden Sea (Sylt and Texel). To further differentiate contingent site from host genetic effects, we repeatedly sampled the same individuals over a summer season to examine structure, diversity and dynamics of individual hemolymph microbiota following experimental removal of resident microbiota by antibiotic treatment. While a large proportion of microbiome variation could be attributed to immediate environmental conditions, we observed persistent effects of antibiotic treatment and translocation suggesting that hemolymph microbial community dynamics is subject to within-microbiome interactions and host population specific factors. In addition, the analysis of spatial variation revealed that the within-site microenvironmental heterogeneity resulted in high small-scale variability, as opposed to large-scale (between-site) stability. Similarly, considerable within-individual temporal variability was in contrast with the overall temporal stability at the site level. Overall, our longitudinal, spatially-stratified sampling design revealed that variation in hemolymph microbiota is strongly influenced by site and immediate environmental conditions, whereas internal microbiome dynamics and oyster-related factors add to their long-term stability. The combination of small and large scale resolution of spatial and temporal observations therefore represents a crucial but underused tool to study host-associated microbiome dynamics. PMID:27630625
Sampling estimators of total mill receipts for use in timber product output studies
John P. Brown; Richard G. Oderwald
2012-01-01
Data from the 2001 timber product output study for Georgia was explored to determine new methods for stratifying mills and finding suitable sampling estimators. Estimators for roundwood receipts totals comprised several types: simple random sample, ratio, stratified sample, and combined ratio. Two stratification methods were examined: the Dalenius-Hodges (DH) square...
Effectiveness of aeration and mixing in the remediation of a saline stratified river.
Lamping, Jens; Worrall, Fred; Morgan, Huw; Taylor, Sam
2005-09-15
This study examines the use of an aeration scheme to remediate low oxygen conditions in a saline stratified system. The Tawe estuary was impounded in 1992 and quickly developed saline stratification during the summer months, which led to an anoxic hypolimnion. In 1998 trials began in which a suite of aerators was applied to remediate the water quality; the trial was later extended to a full aeration scheme. This study examines pre-aeration conditions in order to delineate the conditions under which poor water quality would develop, and which would therefore be the conditions under which aeration would be necessary. Furthermore, the study compared identical periods within the impoundment during which the following conditions existed: no aeration; and aeration with first 44, then 88, aerators. The study shows that (i) destratification occurred naturally under flows of >10 m3/s, and no low dissolved oxygen conditions were observed at higher flows; (ii) the presence of all levels of aeration had a statistically significant effect upon dissolved oxygen (DO) levels; the effect of increasing the number of aerators was approximately linear; (iii) the average effect of aeration was an increase of up to 3 mg/L DO in the deepest water; (iv) the frequency of low DO conditions decreased from 19% to 3% with the operation of aerators; and (v) aeration is most effective during periods of no tidal incursion and further from the saline water source. This study is the first to demonstrate the effectiveness of aeration in a saline stratified system.
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
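A hedged Python sketch of the 'basic' two-stage idea described above: score half the herd-size-based sample first, stop early only when the estimate is clearly on one side of a pass/fail threshold, and otherwise score the second half. The threshold, margin, herd size, and prevalence below are illustrative assumptions, not the paper's decision rules.

import numpy as np

rng = np.random.default_rng(6)

def classify_farm(cow_is_lame, full_n, threshold=0.15, margin=0.05, rng=rng):
    """Two-stage sequential scheme: early stop only if the first-stage estimate
    is clearly below or above the pass/fail threshold."""
    half = full_n // 2
    order = rng.permutation(len(cow_is_lame))      # random scoring order of cows
    first = cow_is_lame[order[:half]]
    p1 = first.mean()
    if p1 <= threshold - margin:
        return "pass", half
    if p1 >= threshold + margin:
        return "fail", half
    both = cow_is_lame[order[:2 * half]]           # score the second half as well
    return ("pass" if both.mean() <= threshold else "fail"), 2 * half

# Hypothetical farm with 200 cows and 12% lameness prevalence.
herd = rng.random(200) < 0.12
print(classify_farm(herd, full_n=60))

Running such a simulation over many farms gives the accuracy and average sample size comparisons reported in the study.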
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m experimental plot was selected in the Chayegang marshland near Henghu Farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. The spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for the snail survey.
Interactive boundary delineation of agricultural lands using graphics workstations
NASA Technical Reports Server (NTRS)
Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt
1992-01-01
A review is presented of the computer-assisted stratification and sampling (CASS) system developed to delineate the boundaries of sample units for survey procedures. CASS stratifies the sampling units by land-cover and land-use type, employing image-processing software and hardware. This procedure generates coverage areas and the boundaries of stratified sampling units that are utilized for subsequent sampling procedures from which agricultural statistics are developed.
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multi-variate samples. The LHS samples can be generated either as a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability, and a sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, then the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
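As a concrete illustration of the scheme described above, the following sketch implements plain Latin Hypercube Sampling with random pairing across variables (the library's correlation-control step is omitted); the distributions, sample size and function name are arbitrary examples, not part of the LHS UNIX Library itself.

```python
import numpy as np
from scipy import stats

def latin_hypercube(n_samples, dists, seed=None):
    """Minimal Latin Hypercube sampler (no correlation control).

    dists : list of scipy.stats frozen distributions, one per input variable.
    The [0, 1] probability range of each variable is split into n_samples
    equal-probability intervals; one draw is taken inside each interval, the
    draws are shuffled so pairing across variables is random, and each draw is
    mapped through the inverse CDF of its distribution.
    """
    rng = np.random.default_rng(seed)
    sample = np.empty((n_samples, len(dists)))
    for j, dist in enumerate(dists):
        u = (rng.random(n_samples) + np.arange(n_samples)) / n_samples
        rng.shuffle(u)
        sample[:, j] = dist.ppf(u)
    return sample

# Example: 10 LHS draws of a standard normal input and a uniform input on [2, 5].
print(latin_hypercube(10, [stats.norm(0, 1), stats.uniform(loc=2, scale=3)], seed=42))
```

Each column contains exactly one draw from each of the n equal-probability intervals of its distribution, which is what distinguishes LHS from simple Monte Carlo sampling.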
Firefighter Hand Anthropometry and Structural Glove Sizing: A New Perspective.
Hsiao, Hongwei; Whitestone, Jennifer; Kau, Tsui-Ying; Hildreth, Brooke
2015-12-01
We evaluated the current use and fit of structural firefighting gloves and developed an improved sizing scheme that better accommodates the U.S. firefighter population. Among surveys, 24% to 30% of men and 31% to 62% of women reported experiencing problems with the fit or bulkiness of their structural firefighting gloves. An age-, race/ethnicity-, and gender-stratified sample of 863 male and 88 female firefighters across the United States participated in the study. Fourteen hand dimensions relevant to glove design were measured. A cluster analysis of the hand dimensions was performed to explore options for an improved sizing scheme. The current national standard structural firefighting glove-sizing scheme underrepresents firefighter hand size range and shape variation. In addition, mismatch between existing sizing specifications and hand characteristics, such as hand dimensions, user selection of glove size, and the existing glove sizing specifications, is significant. An improved glove-sizing plan based on clusters of overall hand size and hand/finger breadth-to-length contrast has been developed. This study presents the most up-to-date firefighter hand anthropometry and a new perspective on glove accommodation. The new seven-size system contains narrower variations (standard deviations) for almost all dimensions for each glove size than the current sizing practices. The proposed science-based sizing plan for structural firefighting gloves provides a step-forward perspective (i.e., including two women hand model-based sizes and two wide-palm sizes for men) for glove manufacturers to advance firefighter hand protection. © 2015, Human Factors and Ergonomics Society.
Siriwardena, Aloysius Niroshan; Middlemass, Jo B; Ward, Kate; Wilkinson, Carol
2008-01-19
A number of protected learning time schemes have been set up in primary care across the United Kingdom but there has been little published evidence of their impact on processes of care. We undertook a qualitative study to investigate the perceptions of practitioners involved in a specific educational intervention in diabetes as part of a protected learning time scheme for primary health care teams, relating to changing processes of diabetes care in general practice. We undertook semistructured interviews of key informants from a sample of practices stratified according to the extent they had changed behaviour in prescribing of ramipril and diabetes care more generally, following a specific educational intervention in Lincolnshire, United Kingdom. Interviews sought information on facilitators and barriers to change in organisational behaviour for the care of diabetes. An interprofessional protected learning time scheme event was perceived by some but not all participants as bringing about changes in processes for diabetes care. Participants cited examples of change introduced partly as a result of the educational session. This included using ACE inhibitors as first line for patients with diabetes who developed hypertension, increased use of aspirin, switching patients to glitazones, and conversion to insulin either directly or by referral to secondary care. Other reported factors for change, unrelated to the educational intervention, included financially driven performance targets, research evidence and national guidance. Facilitators for change linked to the educational session were peer support and teamworking supported by audit and comparative feedback. This study has shown how a protected learning time scheme, using interprofessional learning, local opinion leaders and early implementers as change agents may have influenced changes in systems of diabetes care in selected practices but also how other confounding factors played an important part in changes that occurred in practice.
Design of dry sand soil stratified sampler
NASA Astrophysics Data System (ADS)
Li, Erkang; Chen, Wei; Feng, Xiao; Liao, Hongbo; Liang, Xiaodong
2018-04-01
This paper presents the design of a stratified sampler for dry sandy soil, which can be used for stratified sampling of loose sand under certain conditions. Our group designed the mechanical structure of a portable, single-person, dry sandy soil stratified sampler and set up a mathematical model for it, laying the foundation for further design research and development.
Kamaruzaman Jusoff
2000-01-01
The objective of this paper is to assess the current timber volume by stratified sampling on a proposed plantation area. The study area is located in Gunung Rara Forest Reserve in the district of Tawau, Sabah, Malaysia.
ERIC Educational Resources Information Center
Ng'eno, J. K.; Chesimet, M. C.
2016-01-01
A sample of 300 mathematics teachers drawn from a population of 1500 participated in this study. The participants were selected using systematic random sampling and stratified random sampling (stratified by qualification and gender). The data was collected using self-report questionnaires for mathematics teachers. One tool was used to collect…
A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin
NASA Astrophysics Data System (ADS)
Blaschek, Michael; Duttmann, Rainer
2015-04-01
The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters - topographic wetness index and potential incoming solar radiation - derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that either had not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations was done using ESRI software (ArcGIS) extended by Hawth's Tools and later by its replacement, the Geospatial Modelling Environment (GME). 88% of all desired points could actually be reached in the field and were successfully sampled. Our results indicate that the sampled calibration and validation sets are representative of each other and could be successfully used as interpolation data for spatial prediction purposes. With respect to soil textural fractions, for instance, equal multivariate means and variance homogeneity were found for the two datasets, as evidenced by non-significant (P > 0.05) Hotelling T²-test (2.3 with df1 = 3, df2 = 193) and Bartlett's test statistics (6.4 with df = 6). The multivariate prediction of clay, silt and sand content using a neural network residual cokriging approach reached explained variance levels of 56%, 47% and 63%, respectively. Thus, the presented case study is a successful example of considering readily available continuous information on soil forming factors such as geology and relief as stratifying variables for designing sampling schemes in digital soil mapping projects.
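The first-stage stratification described above (quantiles of two terrain covariates crossed with geological units, followed by random selection within each stratum) can be sketched as follows; the cell table, the use of terciles and the per-stratum sample size of six are illustrative assumptions standing in for the DEM-derived layers and polygon geometry of the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical cell table standing in for the DEM-derived covariates:
# topographic wetness index (twi), potential solar radiation (rad), geology.
cells = pd.DataFrame({
    "twi": rng.gamma(2.0, 2.0, 5000),
    "rad": rng.normal(1200, 150, 5000),
    "geology": rng.choice(["alluvium", "marl", "granite", "limestone"], 5000),
})

# Quantile-based classes for the two continuous land-surface parameters
# (terciles here, for simplicity), crossed with the geological units.
cells["twi_q"] = pd.qcut(cells["twi"], 3, labels=False)
cells["rad_q"] = pd.qcut(cells["rad"], 3, labels=False)
cells["stratum"] = (cells["geology"] + "_" +
                    cells["twi_q"].astype(str) + cells["rad_q"].astype(str))

# First-stage selection: up to six cells (standing in for polygons) drawn at
# random from every stratum; exact point locations would be drawn inside these next.
first_stage = (cells.groupby("stratum", group_keys=False)
                    .apply(lambda g: g.sample(min(len(g), 6), random_state=1)))
print(first_stage["stratum"].value_counts().head())
```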
Robert B. Thomas; Jack Lewis
1993-01-01
Time-stratified sampling of sediment for estimating suspended load is introduced and compared to selection at list time (SALT) sampling. Both methods provide unbiased estimates of load and variance. The magnitude of the variance of the two methods is compared using five storm populations of suspended sediment flux derived from turbidity data. Under like conditions,...
Stemflow estimation in a redwood forest using model-based stratified random sampling
Jack Lewis
2003-01-01
Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan S; Bugbee, Bruce; Gotseff, Peter
Capturing technical and economic impacts of solar photovoltaics (PV) and other distributed energy resources (DERs) on electric distribution systems can require high time-resolution (e.g. 1 minute), long-duration (e.g. 1 year) simulations. However, such simulations can be computationally prohibitive, particularly when including complex control schemes in quasi-steady-state time series (QSTS) simulation. Various approaches have been used in the literature to down-select representative time segments (e.g. days), but typically these are best suited for lower time resolutions or consider only a single data stream (e.g. PV production) for selection. We present a statistical approach that combines stratified sampling and bootstrapping to select representative days while also providing a simple method to reassemble annual results. We describe the approach in the context of a recent study with a utility partner. This approach enables much faster QSTS analysis by simulating only a subset of days, while maintaining accurate annual estimates.
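A minimal rendering of the idea, assuming hypothetical daily PV and load summaries: stratify the days, simulate only a few per stratum, and reassemble the annual total with expansion weights; repeating the draw gives a simple bootstrap-style uncertainty band. The strata, weights and data below are illustrative, not those of the cited study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical daily summary for one year: PV energy and peak load per day.
days = pd.DataFrame({
    "doy": np.arange(365),
    "pv_energy": rng.gamma(4, 5, 365),
    "peak_load": rng.normal(100, 15, 365),
})
days["season"] = pd.cut(days["doy"], [0, 90, 181, 273, 365],
                        labels=["win", "spr", "sum", "aut"], include_lowest=True)
days["pv_bin"] = pd.qcut(days["pv_energy"], 3, labels=False)
days["stratum"] = days["season"].astype(str) + "_" + days["pv_bin"].astype(str)

def stratified_day_sample(days, per_stratum=2, rng=rng):
    """Pick a few days per stratum and attach expansion weights."""
    parts = []
    for _, g in days.groupby("stratum"):
        k = min(per_stratum, len(g))
        pick = g.sample(k, random_state=int(rng.integers(1_000_000)))
        parts.append(pick.assign(weight=len(g) / k))  # each sampled day stands for len(g)/k days
    return pd.concat(parts)

# Reassemble an annual total from the sampled days only, and repeat the draw
# to get a simple bootstrap-style uncertainty band around that estimate.
estimates = []
for _ in range(200):
    s = stratified_day_sample(days)
    estimates.append(np.sum(s["peak_load"] * s["weight"]))

print("full-year total:", round(days["peak_load"].sum(), 1))
print("stratified estimate:", round(np.mean(estimates), 1),
      "95% band:", np.round(np.percentile(estimates, [2.5, 97.5]), 1))
```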
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
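The ratio-reweighting idea can be shown in a few lines: the sample mean of the transgene presence rate is rescaled by how much the sampled locations over- or under-represent the model-predicted (auxiliary) rate, whose field-wide mean is assumed known. The simulated field below is a made-up stand-in for the gene-flow model output; ratio estimation helps precisely because y is roughly proportional to x.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical field of N grain-sampling locations: x is the auxiliary variable
# (a gene-flow model's predicted cross-pollination rate, known everywhere),
# y is the true adventitious GM presence rate, observed only where we sample.
N = 2000
x = rng.gamma(1.5, 0.3, N) / 100
y = np.clip(x * rng.normal(1.0, 0.3, N), 0, None)

def ratio_estimate(y_sample, x_sample, x_pop_mean):
    """Ratio reweighting: rescale the sample mean of y by how much the sample
    over- or under-represents the auxiliary variable."""
    return y_sample.mean() * x_pop_mean / x_sample.mean()

n = 60
idx = rng.choice(N, n, replace=False)
plain = y[idx].mean()
adjusted = ratio_estimate(y[idx], x[idx], x.mean())
print(f"true mean {y.mean():.5f}  plain {plain:.5f}  ratio-adjusted {adjusted:.5f}")
```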
Catholic High Schools and Their Finances. 1986.
ERIC Educational Resources Information Center
Augenstein, John J.
This report is based on a randomly selected and stratified sample of 208 United States Catholic high schools. The sample was stratified by governance (diocesan, parochial/interparochial, and private); five categories of enrollment; and six regions. Data are compared with an earlier study, "The Catholic High School: A National Portrait" and show…
Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe
2003-11-06
We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
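The two-strata evaluation idea — ranking genes strictly inside each training partition so the reported error is not contaminated by the selection — can be sketched with scikit-learn. Standard RFE with chunk-wise elimination stands in here for the entropy-based E-RFE, and the simulated data are purely illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import LinearSVC

# Toy "array" data: 60 samples, 500 genes, only a handful informative.
X, y = make_classification(n_samples=60, n_features=500, n_informative=10,
                           random_state=0)

outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train, test in outer.split(X, y):
    # Gene ranking happens inside the training fold only, so the reported error
    # is not biased by the selection (the "selection bias" discussed above).
    selector = RFE(LinearSVC(dual=False, C=1.0),
                   n_features_to_select=20, step=0.2).fit(X[train], y[train])
    clf = LinearSVC(dual=False, C=1.0).fit(X[train][:, selector.support_], y[train])
    scores.append(clf.score(X[test][:, selector.support_], y[test]))

print("stratified-CV accuracy with in-fold RFE:", round(float(np.mean(scores)), 3))
```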
Adjusting for multiple prognostic factors in the analysis of randomised trials
2013-01-01
Background When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata, formed by all combinations of the prognostic factors (stratified analysis), when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) which method of adjustment is best in terms of type I error rate and power, irrespective of the randomisation method. Methods We used simulation to (1) determine if a stratified analysis is necessary after stratified randomisation, and (2) compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on outcome. Results Stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods lead to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes; however, treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size. PMID:23898993
Toma, Luiza; Stott, Alistair W; Heffernan, Claire; Ringrose, Siân; Gunn, George J
2013-03-01
The paper analyses the impact of a priori determinants of biosecurity behaviour of farmers in Great Britain. We use a dataset collected through a stratified telephone survey of 900 cattle and sheep farmers in Great Britain (400 in England and a further 250 each in Wales and Scotland), which took place between 25 March 2010 and 18 June 2010. The survey was stratified by farm type, farm size and region. To test the influence of a priori determinants on biosecurity behaviour we used a behavioural economics method, structural equation modelling (SEM) with observed and latent variables. SEM is a statistical technique for testing and estimating causal relationships amongst variables, some of which may be latent, using a combination of statistical data and qualitative causal assumptions. Thirteen latent variables were identified and extracted, expressing the behaviour and the underlying determining factors. The variables were: experience, economic factors, organic certification of farm, membership in a cattle/sheep health scheme, perceived usefulness of biosecurity information sources, knowledge about biosecurity measures, perceived importance of specific biosecurity strategies, perceived effect (on farm business in the past five years) of welfare/health regulation, perceived effect of severe outbreaks of animal diseases, attitudes towards livestock biosecurity, attitudes towards animal welfare, influence on decision to apply biosecurity measures and biosecurity behaviour. The SEM model applied to the Great Britain sample has an adequate fit according to the measures of absolute, incremental and parsimonious fit. The results suggest that farmers' perceived importance of specific biosecurity strategies, organic certification of farm, knowledge about biosecurity measures, attitudes towards animal welfare, perceived usefulness of biosecurity information sources, perceived effect on business during the past five years of severe outbreaks of animal diseases, membership in a cattle/sheep health scheme, attitudes towards livestock biosecurity, influence on decision to apply biosecurity measures, experience and economic factors significantly influence behaviour (overall explaining 64% of the variance in behaviour). Three other models were run for the individual regions (England, Scotland and Wales). A smaller number of variables were included in each model to account for the smaller sample sizes. Results show lower but still high levels of variance explained for the individual models (about 40% for each country). The individual models' results are consistent with those of the total sample model. The results might suggest that ways to achieve behavioural change could include ensuring increased access of farmers to biosecurity information and advice sources. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained with simple random sampling procedures.
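For the stratified option based on optimal allocation mentioned in conclusion (3), a small helper makes the rule explicit: samples are allocated to each stratum in proportion to the product of stratum size and stratum standard deviation (Neyman allocation). The stratum sizes and standard deviations below are invented for illustration, not taken from the study.

```python
import numpy as np

def optimal_allocation(n_total, stratum_sizes, stratum_sds):
    """Neyman (optimal) allocation: n_h proportional to N_h * S_h.

    stratum_sizes : number of potential sample locations in each stratum
    stratum_sds   : observed standard deviation of soil moisture in each stratum
    """
    weights = np.asarray(stratum_sizes, float) * np.asarray(stratum_sds, float)
    n_h = np.round(n_total * weights / weights.sum()).astype(int)
    return np.maximum(n_h, 1)          # keep at least one sample per stratum

# Illustrative numbers: three depth strata, moisture variability decreasing with depth.
sizes = [400, 400, 400]
sds = [6.0, 3.5, 1.8]
print(optimal_allocation(30, sizes, sds))   # -> [16  9  5]
```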
Regional management of farmland feeding geese using an ecological prioritization tool.
Madsen, Jesper; Bjerrum, Morten; Tombre, Ingunn M
2014-10-01
Wild geese foraging on farmland cause increasing conflicts with agricultural interests, calling for a strategic approach to mitigation. In central Norway, conflicts between farmers and spring-staging pink-footed geese feeding on pastures have escalated. To alleviate the conflict, a scheme by which farmers are subsidized to allow geese to forage undisturbed was introduced. To guide the allocation of subsidies, an ecologically based ranking of fields at a regional level was recommended and applied. Here we evaluate the scheme. On average, 40% of subsidized fields were in the top 5% of the ranking, and 80% were within the top 20%. Goose grazing pressure on subsidized pastures was 13 times higher than on a stratified random selection of non-subsidized pastures, capturing 67% of the pasture-feeding geese even though subsidized fields comprised only 13% of the grassland area. Close dialogue between scientists and managers is regarded as a key to the success of the scheme.
NASA Astrophysics Data System (ADS)
Ahmed, Zia U.; Woodbury, Peter B.; Sanderman, Jonathan; Hawke, Bruce; Jauss, Verena; Solomon, Dawit; Lehmann, Johannes
2017-02-01
To predict how land management practices and climate change will affect soil carbon cycling, improved understanding of factors controlling soil organic carbon fractions at large spatial scales is needed. We analyzed total soil organic carbon (SOC) as well as pyrogenic (PyC), particulate (POC), and other soil organic carbon (OOC) fractions in surface layers from 650 stratified-sampling locations throughout Colorado, Kansas, New Mexico, and Wyoming. PyC varied from 0.29 to 18.0 mg C g⁻¹ soil with a mean of 4.05 mg C g⁻¹ soil. The mean PyC was 34.6% of the SOC and ranged from 11.8 to 96.6%. Both POC and PyC were highest in forests and canyon bottoms. In the best random forest regression model, normalized difference vegetation index (NDVI), mean annual precipitation (MAP), mean annual temperature (MAT), and elevation were ranked as the top four important variables determining PyC and POC variability. Random forest regression kriging (RFK) with environmental covariables improved predictions over ordinary kriging by 20 and 7% for PyC and POC, respectively. Based on RFK, 8% of the study area was dominated (≥50% of SOC) by PyC and less than 1% was dominated by POC. Furthermore, based on spatial analysis of the ratio of POC to PyC, we estimated that about 16% of the study area is medium to highly vulnerable to SOC mineralization in surface soil. These are the first results to characterize PyC and POC stocks geospatially using a stratified sampling scheme at the scale of 1,000,000 km², and the methods are scalable to other regions.
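Regression kriging of the kind referred to above combines a covariate-driven trend model with spatial interpolation of its residuals. The sketch below uses a random forest for the trend and a Gaussian process on the coordinates as a simple stand-in for the ordinary-kriging step; the synthetic covariates and PyC values only mimic the structure of the data described in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)

# Synthetic stand-in for the 650 stratified sampling locations: coordinates,
# four environmental covariates (NDVI, MAP, MAT, elevation) and a PyC response.
n = 650
coords = rng.uniform(0, 1000, (n, 2))          # km
X = rng.normal(size=(n, 4))                    # scaled covariates
pyc = 4.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 0.8, n)

# Step 1: random forest regression of PyC on the environmental covariates.
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, pyc)
resid = pyc - rf.predict(X)

# Step 2: spatial interpolation of the residuals; a Gaussian process on the
# coordinates is used here as a simple stand-in for the ordinary-kriging step.
gp = GaussianProcessRegressor(kernel=RBF(200.0) + WhiteKernel(0.5),
                              normalize_y=True).fit(coords, resid)

def predict_rfk(X_new, coords_new):
    """Regression-kriging style prediction: forest trend plus interpolated residual."""
    return rf.predict(X_new) + gp.predict(coords_new)

print(predict_rfk(X[:3], coords[:3]))
```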
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faccini, J.L.H.; Sampaio, P.A.B. de; Su, J.
This paper reports a numerical and experimental investigation of stratified gas-liquid two-phase flow in horizontal circular pipes. The Reynolds-averaged Navier-Stokes equations (RANS) with the k-ω model for a fully developed stratified gas-liquid two-phase flow are solved by using the finite element method. A smooth and horizontal interface surface is assumed, without considering the interfacial waves. The continuity of the shear stress across the interface is enforced, with the continuity of the velocity being automatically satisfied by the variational formulation. For each given interface position and longitudinal pressure gradient, an inner iteration loop runs to solve the nonlinear equations. The Newton-Raphson scheme is used to solve the transcendental equations by an outer iteration to determine the interface position and pressure gradient for a given pair of volumetric flow rates. The interface position in a 51.2 mm ID circular pipe was measured experimentally by the ultrasonic pulse-echo technique. The numerical results were also compared with experimental results in a 21 mm ID circular pipe reported by Masala [1]. The good agreement between the numerical and experimental results indicates that the k-ω model can be applied for the numerical simulation of stratified gas-liquid two-phase flow. (authors)
Report for Colorado: Background & Visuals, Math 2005. The Nation's Report Card
ERIC Educational Resources Information Center
Sandoval, Pam A.
2005-01-01
The National Assessment of Educational Progress (NAEP) 2005 assessment was administered to a stratified random sample of fourth-, eighth-, and twelfth-graders at the national level and to a stratified random sample of fourth- and eighth-graders at the state level. The Mathematics Framework for NAEP was revised in 1996 and again in 2005. The new…
Shabbir, Javid
2018-01-01
In the present paper we propose an improved class of estimators in the presence of measurement error and non-response under stratified random sampling for estimating the finite population mean. The theoretical and numerical studies reveal that the proposed class of estimators performs better than other existing estimators. PMID:29401519
Composition, biomass and structure of mangroves within the Zambezi River Delta
Carl C. Trettin; Christina E. Stringer; Stan Zarnoch
2015-01-01
We used a stratified random sampling design to inventory the mangrove vegetation within the Zambezi River Delta, Mozambique, to provide a basis for estimating biomass pools. We used canopy height, derived from remote sensing data, to stratify the inventory area, and then applied a spatial decision support system to objectively allocate sample plots among five...
Soil nutrient-landscape relationships in a lowland tropical rainforest in Panama
Barthold, F.K.; Stallard, R.F.; Elsenbeer, H.
2008-01-01
Soils play a crucial role in biogeochemical cycles as spatially distributed sources and sinks of nutrients. Any spatial patterns depend on soil-forming processes, our understanding of which is still limited, especially with regard to tropical rainforests. The objective of our study was to investigate the effects of landscape properties, with an emphasis on the geometry of the land surface, on the spatial heterogeneity of soil chemical properties, and to test the suitability of soil-landscape modeling as an appropriate technique to predict the spatial variability of exchangeable K and Mg in a humid tropical forest in Panama. We used a design-based, stratified sampling scheme to collect soil samples at 108 sites on Barro Colorado Island, Panama. Stratifying variables were lithology, vegetation and topography. Topographic variables were generated from high-resolution digital elevation models with a grid size of 5 m. We took samples from five depths down to 1 m, and analyzed them for total and exchangeable K and Mg. We used simple explorative data analysis techniques to elucidate the importance of lithology for soil total and exchangeable K and Mg. Classification and Regression Trees (CART) were adopted to investigate the importance of topography, lithology and vegetation for the spatial distribution of exchangeable K and Mg, with the intention of developing models that regionalize the point observations using digital terrain data as explanatory variables. Our results suggest that topography and vegetation do not control the spatial distribution of the selected soil chemical properties at a landscape scale, and lithology is important only to some degree. Exchangeable K is distributed equally across the study area, indicating that processes other than landscape processes, e.g. biogeochemical processes, are responsible for its spatial distribution. Lithology contributes to the spatial variation of exchangeable Mg, but controlling variables could not be detected. The spatial variation of soil total K and Mg is mainly influenced by lithology. © 2007 Elsevier B.V. All rights reserved.
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
A study of medical expenditure and its influencing factors among students enrolled in the Urban Resident Basic Medical Insurance (URBMI) scheme in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing-data mechanism, this study proposes a two-stage method that deals with the two mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. Among the returned questionnaires, 2.52% of the dependent-variable values were not missing at random (NMAR) and 7.14% were missing at random (MAR). First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct for NMAR within the multiple imputation, and a multi-factor analysis model was established. Based on 1 000 resamplings, the best scheme for filling in the randomly missing values at the observed missing proportion was the predictive mean matching (PMM) method. With this optimal scheme, the two-stage analysis was then carried out. It was found that the influencing factors of annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for certain reasons, self-medication and acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively deal with non-response bias and selection bias in the dependent variable of survey data.
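A stripped-down version of the predictive mean matching step (the method found best for the MAR values) is sketched below; real multiple imputation would also redraw the regression coefficients from their posterior in each round, which is omitted here, and the data and function name are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(11)

def pmm_impute(X, y, n_imputations=5, k_donors=5, rng=rng):
    """Simplified predictive mean matching for one incomplete variable.

    Each missing y receives the observed value of a donor whose regression-predicted
    mean is among the k_donors closest to the predicted mean of the missing case.
    """
    obs = ~np.isnan(y)
    model = LinearRegression().fit(X[obs], y[obs])
    pred_obs = model.predict(X[obs])
    pred_mis = model.predict(X[~obs])
    y_obs = y[obs]

    imputed_sets = []
    for _ in range(n_imputations):
        filled = y.copy()
        draws = np.empty(pred_mis.size)
        for i, p in enumerate(pred_mis):
            nearest = np.argsort(np.abs(pred_obs - p))[:k_donors]
            draws[i] = y_obs[rng.choice(nearest)]
        filled[~obs] = draws
        imputed_sets.append(filled)
    return imputed_sets

# Toy data: expenditure driven by two covariates, with ~7% of values set missing.
X = rng.normal(size=(300, 2))
y = 50 + 10 * X[:, 0] + rng.normal(0, 5, 300)
y[rng.choice(300, 21, replace=False)] = np.nan

imps = pmm_impute(X, y)
print("observed mean:", round(np.nanmean(y), 2),
      "mean over imputed datasets:", round(np.mean([m.mean() for m in imps]), 2))
```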
Predicting streamflow regime metrics for ungauged streamsin Colorado, Washington, and Oregon
NASA Astrophysics Data System (ADS)
Sanborn, Stephen C.; Bledsoe, Brian P.
2006-06-01
Streamflow prediction in ungauged basins provides essential information for water resources planning and management and for ecohydrological studies, yet it remains a fundamental challenge to the hydrological sciences. A methodology is presented for stratifying streamflow regimes of gauged locations, classifying the regimes of ungauged streams, and developing models for predicting a suite of ecologically pertinent streamflow metrics for these streams. Eighty-four streamflow metrics characterizing various flow regime attributes were computed along with physical and climatic drainage basin characteristics for 150 streams with little or no streamflow modification in Colorado, Washington, and Oregon. The diverse hydroclimatology of the study area necessitates flow regime stratification, and geographically independent clusters were identified and used to develop separate predictive models for each flow regime type. Multiple regression models for flow magnitude, timing, and rate-of-change metrics were quite accurate, with many adjusted R² values exceeding 0.80, while models describing streamflow variability did not perform as well. Separate stratification schemes for high, low, and average flows did not considerably improve models for metrics describing those particular aspects of the regime over a scheme based on the entire flow regime. Models for streams identified as 'snowmelt' type were improved if sites in Colorado and the Pacific Northwest were separated to better stratify the processes driving streamflow in these regions, thus revealing limitations of geographically independent streamflow clusters. This study demonstrates that a broad suite of ecologically relevant streamflow characteristics can be accurately modeled across large heterogeneous regions using this framework. Applications of the resulting models include stratifying biomonitoring sites and quantifying linkages between specific aspects of flow regimes and aquatic community structure. In particular, the results bode well for modeling ecological processes related to high-flow magnitude, timing, and rate of change, such as the recruitment of fish and riparian vegetation across large regions.
Applications of cluster analysis to satellite soundings
NASA Technical Reports Server (NTRS)
Munteanu, M. J.; Jakubowicz, O.; Kalnay, E.; Piraino, P.
1984-01-01
The advantages of using cluster analysis to improve satellite temperature retrievals were evaluated, since the use of natural clusters, which are associated with atmospheric temperature soundings characteristic of different types of air masses, has the potential to improve stratified regression schemes in comparison with currently used methods that stratify soundings based on latitude, season, and land/ocean. The method of discriminant analysis was used. The correct cluster of temperature profiles from satellite measurements was located in 85% of the cases. Considerable improvement was observed at all mandatory levels using regression retrievals derived in the clusters of temperature (weighted and nonweighted) in comparison with the control experiment and with the regression retrievals derived in the clusters of brightness temperatures of 3 MSU and 5 IR channels.
De Boni, Raquel; do Nascimento Silva, Pedro Luis; Bastos, Francisco Inácio; Pechansky, Flavio; de Vasconcellos, Mauricio Teixeira Leite
2012-01-01
Drinking alcoholic beverages in places such as bars and clubs may be associated with harmful consequences such as violence and impaired driving. However, methods for obtaining probabilistic samples of drivers who drink at these places remain a challenge – since there is no a priori information on this mobile population – and must be continually improved. This paper describes the procedures adopted in the selection of a population-based sample of drivers who drank at alcohol-selling outlets in Porto Alegre, Brazil, which we used to estimate the prevalence of intention to drive under the influence of alcohol. The sampling strategy comprises a stratified three-stage cluster sampling: 1) census enumeration areas (CEA) were stratified by alcohol outlet (AO) density and sampled with probability proportional to the number of AOs in each CEA; 2) combinations of outlets and shifts (COS) were stratified by prevalence of alcohol-related traffic crashes and sampled with probability proportional to their squared duration in hours; and 3) drivers who drank at the selected COS were stratified by their intention to drive and sampled using inverse sampling. Sample weights were calibrated using a post-stratification estimator. 3,118 individuals were approached and 683 drivers interviewed, leading to an estimate that 56.3% (SE = 3.5%) of the drivers intended to drive within one hour of the interview after drinking. Prevalence was also estimated by sex and broad age groups. The combined use of stratification and inverse sampling enabled a good trade-off between resource and time allocation, while preserving the ability to generalize the findings. The current strategy can be viewed as a step forward in the efforts to improve surveys and estimation for hard-to-reach, mobile populations. PMID:22514620
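The first sampling stage (CEAs drawn with probability proportional to the number of alcohol outlets) can be illustrated with systematic PPS selection; the frame of enumeration areas and outlet counts below is simulated, and the original study's exact selection routine may differ.

```python
import numpy as np

rng = np.random.default_rng(9)

def systematic_pps(sizes, n_select, rng=rng):
    """Systematic probability-proportional-to-size (PPS) selection.

    sizes : measure of size for each unit (here, alcohol outlets per CEA).
    Returns the indices of the selected units.
    """
    sizes = np.asarray(sizes, dtype=float)
    cum = np.cumsum(sizes)
    step = cum[-1] / n_select
    start = rng.uniform(0, step)
    points = start + step * np.arange(n_select)
    return np.searchsorted(cum, points)   # unit whose cumulative size range contains each point

# Hypothetical frame of 400 census enumeration areas and their outlet counts.
outlets = rng.poisson(4, 400) + 1
selected = systematic_pps(outlets, n_select=30)
print(selected[:10], outlets[selected][:10])
```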
The China Mental Health Survey: II. Design and field procedures.
Liu, Zhaorui; Huang, Yueqin; Lv, Ping; Zhang, Tingting; Wang, Hong; Li, Qiang; Yan, Jie; Yu, Yaqin; Kou, Changgui; Xu, Xiufeng; Lu, Jin; Wang, Zhizhong; Qiu, Hongyan; Xu, Yifeng; He, Yanling; Li, Tao; Guo, Wanjun; Tian, Hongjun; Xu, Guangming; Xu, Xiangdong; Ma, Yanjuan; Wang, Linhong; Wang, Limin; Yan, Yongping; Wang, Bo; Xiao, Shuiyuan; Zhou, Liang; Li, Lingjiang; Tan, Liwen; Chen, Hongguang; Ma, Chao
2016-11-01
The China Mental Health Survey (CMHS), carried out from July 2013 to March 2015, was the first nationally representative community survey of mental disorders and mental health services in China using computer-assisted personal interview (CAPI). Face-to-face interviews were completed in the homes of respondents, who were selected through a nationally representative multi-stage disproportionate stratified sampling procedure. Sample selection was integrated with the National Chronic Disease and Risk Factor Surveillance Survey administered by the National Centre for Chronic and Non-communicable Disease Control and Prevention in 2013, which made it possible to obtain both physical and mental health information on the Chinese community population. A one-stage design of data collection was used in the CMHS to obtain information on mental disorders, including mood disorders, anxiety disorders, and substance use disorders, while a two-stage design was applied for schizophrenia and other psychotic disorders, and for dementia. A total of 28,140 respondents finished the survey, for an overall response rate of 72.9%. This paper describes the survey mode, fieldwork organization, procedures, and the sample design and weighting of the CMHS. Detailed information is presented on the establishment of a new payment scheme for interviewers, results of the quality control in both stages, and evaluation of the weighting.
Spatial Sampling of Weather Data for Regional Crop Yield Simulations
NASA Technical Reports Server (NTRS)
Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian;
2016-01-01
Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions as needed by crop models are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data has been evaluated for simulating yields of winter wheat in a region in Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models but all models reproduced well the pattern of the stratification. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when a stratified sampling is applied as compared to a random sampling. However, differences between crop models were observed including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations. But, differences between crop models must be considered as the choice for a specific model can have larger effects on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management data for regional simulations of crop yields is still needed.
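The core comparison — a small random sample of weather points versus an equally small stratified sample — can be mimicked on a toy grid as follows; the grid, the temperature field standing in for simulated yields, and the quintile strata are all illustrative assumptions rather than the stratification used in the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Toy weather grid standing in for the 34,078-point full coverage: a smooth
# north-south gradient plus noise acts as the quantity of interest.
n_points = 5000
grid = pd.DataFrame({
    "lon": rng.uniform(6, 12, n_points),
    "lat": rng.uniform(47, 55, n_points),
})
grid["temp"] = 18.0 - 0.8 * (grid["lat"] - 47.0) + rng.normal(0, 0.6, n_points)

# Random sample of 10 points versus a stratified sample of 10 points
# (strata = quintiles of the target field, 2 points per stratum).
random_est = grid.sample(10, random_state=1)["temp"].mean()

grid["stratum"] = pd.qcut(grid["temp"], 5, labels=False)
strat_sample = (grid.groupby("stratum", group_keys=False)
                    .apply(lambda g: g.sample(2, random_state=1)))
strat_est = strat_sample["temp"].mean()   # equal-size strata, so the plain mean is the estimate

print(f"full grid {grid['temp'].mean():.3f}  "
      f"random-10 {random_est:.3f}  stratified-10 {strat_est:.3f}")
```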
Doubly stratified MHD tangent hyperbolic nanofluid flow due to permeable stretched cylinder
NASA Astrophysics Data System (ADS)
Nagendramma, V.; Leelarathnam, A.; Raju, C. S. K.; Shehzad, S. A.; Hussain, T.
2018-06-01
An investigation is presented to analyze the effects of a heat source and sink in a doubly stratified MHD incompressible tangent hyperbolic nanofluid flow due to the stretching of a cylinder embedded in a porous medium. To develop the mathematical model of the tangent hyperbolic nanofluid, Brownian motion and thermophoresis are accounted for. The governing equations of continuity, momentum, and the thermal and solutal boundary layers are reduced to sets of non-linear expressions. These expressions are solved with the help of a Runge-Kutta scheme in MATLAB. The impacts of sundry parameters are illustrated graphically, and physical quantities of engineering interest such as the skin friction, Nusselt and Sherwood numbers are examined by computing numerical values. It is clear that the power-law index parameter and curvature parameter show a favorable effect on the momentum boundary layer thickness, whereas the Weissenberg number reveals an inimical influence.
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and first-order effects dominate compared to the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
Qarri, Flora; Lazo, Pranvera; Bekteshi, Lirim; Stafilov, Trajce; Frontasyeva, Marina; Harmens, Harry
2015-02-01
The atmospheric deposition of heavy metals in Albania was investigated by using a carpet-forming moss species (Hypnum cupressiforme) as a bioindicator. Sampling was done in the dry seasons of autumn 2010 and summer 2011. Two different sampling schemes are discussed in this paper: a random sampling scheme with 62 sampling sites distributed over the whole territory of Albania, and a systematic sampling scheme with 44 sampling sites distributed over the same territory. Unwashed, dried samples were totally digested by using microwave digestion, and the concentrations of metal elements were determined by inductively coupled plasma atomic emission spectroscopy (ICP-AES) and AAS (Cd and As). Twelve elements, comprising conservative and trace elements (Al, Fe, As, Cd, Cr, Cu, Ni, Mn, Pb, V, Zn, and Li), were measured in the moss samples; Li, a typical lithogenic element, is also included. The results reflect local emission points. The median concentrations and statistical parameters of the elements were discussed by comparing the two sampling schemes, and the results of both sampling schemes are compared with the results of other European countries. Different levels of contamination, evaluated by the respective contamination factor (CF) of each element, are obtained for the two sampling schemes, while the local emitters identified, such as iron-chromium metallurgy, the cement industry, oil refining, mining and transport, are the same for both sampling schemes. In addition, natural sources, i.e. the accumulation of these metals in mosses caused by metal-enriched soil associated with wind-blown soil, were pointed out as another possible local emitting factor.
Why sampling scheme matters: the effect of sampling scheme on landscape genetic results
Michael K. Schwartz; Kevin S. McKelvey
2008-01-01
There has been a recent trend in genetic studies of wild populations where researchers have changed their sampling schemes from sampling pre-defined populations to sampling individuals uniformly across landscapes. This reflects the fact that many species under study are continuously distributed rather than clumped into obvious "populations". Once individual...
Abellán Alemán, José; Zafrilla Rentero, María Pilar; Montoro-García, Silvia; Mulero, Juana; Pérez Garrido, Alfonso; Leal, Mariano; Guerrero, Lucía; Ramos, Elena; Ruilope, Luis Miguel
2016-10-28
Nutritional studies focus on traditional cultural models and lifestyles in different countries. The aim of this study was to examine the adherence to the Mediterranean diet, life habits, and risk factors associated with cardiovascular diseases among people living in different geographical regions in Spain. A descriptive cross-sectional study was conducted in each region. The sampling scheme consisted of a random three-stage stratified sampling program according to geographic region, age, and gender. A total of 1732 subjects were asked to complete a questionnaire designed to assess their nutrient intake, dietary habits, and exercise. A diet score that assesses the adherence of participants to the Mediterranean diet (range 0-10) was also applied. Southeastern Spain had the lowest score for adherence to the Mediterranean diet because of the low consumption of fish and plant products. A lower adherence score to the Mediterranean diet was strongly associated with the prevalence of hypertension (p = 0.018). A low level of adherence to the Mediterranean diet is accompanied by a high prevalence of hypertension and, therefore, a raised cardiovascular risk in the country. The adherence score could help identify individuals at greater cardiovascular risk.
Training set optimization under population structure in genomic selection.
Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E
2015-01-01
Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, the most phenotypic variation captured by a sampling method in the TRS is desirable. The wheat dataset showed mild population structure, and CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Wu, Di; Lang, Stephen; Chern, Jiundar; Peters-Lidard, Christa; Fridlind, Ann; Matsui, Toshihisa
2015-01-01
The Goddard microphysics scheme was recently improved by adding a 4th ice class (frozen drops/hail). This new 4ICE scheme was implemented and tested in the Goddard Cumulus Ensemble model (GCE) for an intense continental squall line and a moderate, less-organized continental case. Simulated peak radar reflectivity profiles were improved both in intensity and shape for both cases, as were the overall reflectivity probability distributions versus observations. In this study, the new Goddard 4ICE scheme is implemented into the regional-scale NASA Unified Weather Research and Forecasting model (NU-WRF) and tested on an intense mesoscale convective system that occurred during the Midlatitude Continental Convective Clouds Experiment (MC3E). The NU-WRF simulated radar reflectivities, rainfall intensities, and vertical and horizontal structure using the new 4ICE scheme agree as well as or significantly better with observations than when using previous versions of the Goddard 3ICE (graupel or hail) schemes. In the 4ICE scheme, the bin microphysics-based rain evaporation correction produces more erect convective cores, while modification of the unrealistic collection of ice by dry hail produces narrow and intense cores, allowing more slow-falling snow to be transported rearward. Together with a revised snow size mapping, the 4ICE scheme produces a more horizontally stratified trailing stratiform region with a broad, more coherent light rain area. In addition, the NU-WRF 4ICE simulated radar reflectivity distributions are consistent with, and generally superior to, those using the GCE due to the less restrictive open lateral boundaries.
Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino
2012-01-01
Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...
Yao, Qiang; Liu, Chaojie; Ferrier, J Adamm; Liu, Zhiyong; Sun, Ju
2015-07-30
To assess the impact of the National Essential Medicines Scheme (NEMS) with respect to urban-rural inequalities regarding drug prescriptions in primary care facilities. A stratified two-stage random sampling strategy was used to sample 23,040 prescriptions from 192 primary care facilities from 2009 to 2010. Difference-in-Difference (DID) analyses were performed to test the association between NEMS and urban-rural gaps in prescription patterns. Between-Group Variance and Theil Index were calculated to measure urban-rural absolute and relative disparities in drug prescriptions. The use of the Essential Medicines List (EML) achieved a compliance rate of up to 90% in both urban and rural facilities. An overall reduction of average prescription cost improved economic access to drugs for patients in both areas. However, we observed an increased urban-rural disparity in average expenditure per prescription. The rate of antibiotic and glucocorticoid prescription remained high, despite a reduced disparity between urban and rural facilities. The average incidence of antibiotic prescription increased slightly in urban facilities (62% to 63%) and declined slightly in rural facilities (67% to 66%). The urban-rural disparity in the use of parenteral administration (injections and infusions) increased, albeit at a high level in both areas (44%-52%). NEMS interventions are effective in reducing the overall average prescription costs. Despite the increased use of the EML, indicator performances with respect to rational drug prescribing and use remain poor and exceed the WHO/INRUD recommended cutoff values and worldwide benchmarks. There is an increased gap between urban and rural areas in the use of parenteral administration and expenditure per prescription.
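The difference-in-difference estimate described above is, in its simplest form, the coefficient on a group-by-period interaction. A minimal sketch with simulated data follows; the variable names (urban, post, cost), effect sizes, and use of statsmodels are illustrative assumptions, not the study's actual model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated prescription costs: rural vs urban facilities, before/after NEMS
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "urban": rng.integers(0, 2, n),          # 1 = urban facility
    "post":  rng.integers(0, 2, n),          # 1 = after NEMS rollout
})
# assumed effects: NEMS lowers cost overall, but less so in urban facilities
df["cost"] = (50 + 10 * df["urban"] - 8 * df["post"]
              + 4 * df["urban"] * df["post"] + rng.normal(0, 5, n))

# The DID estimate is the coefficient on the interaction term
model = smf.ols("cost ~ urban * post", data=df).fit(cov_type="HC1")
print(model.params["urban:post"], model.bse["urban:post"])
```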
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level). Whether the same observations apply on a lower spatial scale should be further investigated.
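A minimal sketch of one common spatial interpolator (inverse distance weighting) that could be used to fill non-sampled areas before aggregation follows; the abstract does not specify this particular interpolator, and the parish coordinates and cattle counts are invented.

```python
import numpy as np

def idw_interpolate(known_xy, known_values, query_xy, power=2.0):
    """Inverse-distance-weighted estimate of cattle numbers at
    non-sampled locations from sampled parish centroids."""
    known_xy = np.asarray(known_xy, float)
    known_values = np.asarray(known_values, float)
    query_xy = np.asarray(query_xy, float)
    estimates = np.empty(len(query_xy))
    for i, q in enumerate(query_xy):
        d = np.linalg.norm(known_xy - q, axis=1)
        if np.any(d == 0):                      # query coincides with a sample
            estimates[i] = known_values[d == 0][0]
            continue
        w = 1.0 / d ** power
        estimates[i] = np.sum(w * known_values) / np.sum(w)
    return estimates

# Illustrative: 5 sampled parishes, 2 non-sampled locations
samples = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 2)]
counts = [120, 200, 150, 90, 300]
print(idw_interpolate(samples, counts, [(0.5, 0.5), (0.2, 1.5)]))
```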
Simulation of the West African Monsoon using the MIT Regional Climate Model
NASA Astrophysics Data System (ADS)
Im, Eun-Soon; Gianotti, Rebecca L.; Eltahir, Elfatih A. B.
2013-04-01
We test the performance of the MIT Regional Climate Model (MRCM) in simulating the West African Monsoon. MRCM introduces several improvements over Regional Climate Model version 3 (RegCM3), including coupling of the Integrated Biosphere Simulator (IBIS) land surface scheme, a new albedo assignment method, a new convective cloud and rainfall auto-conversion scheme, and a modified boundary layer height and cloud scheme. Using MRCM, we carried out a series of experiments implementing two different land surface schemes (IBIS and BATS) and three convection schemes (Grell with the Fritsch-Chappell closure, standard Emanuel, and modified Emanuel that includes the new convective cloud scheme). Our analysis primarily focused on comparing the precipitation characteristics, surface energy balance and large scale circulations against various observations. We document a significant sensitivity of the West African monsoon simulation to the choices of the land surface and convection schemes. In spite of several deficiencies, the simulation with the combination of IBIS and modified Emanuel schemes shows the best performance, reflected in a marked improvement of precipitation in terms of spatial distribution and monsoon features. In particular, the coupling of IBIS leads to representations of the surface energy balance and partitioning that are consistent with observations. Therefore, the major components of the surface energy budget (including radiation fluxes) in the IBIS simulations are in better agreement with observations than those from our BATS simulation, or from previous similar studies (e.g. Steiner et al., 2009), both qualitatively and quantitatively. The IBIS simulations also reasonably reproduce the dynamical structure of the vertically stratified atmospheric circulation with three major components: westerly monsoon flow, African Easterly Jet (AEJ), and Tropical Easterly Jet (TEJ). In addition, since the modified Emanuel scheme tends to reduce the precipitation amount, it improves the precipitation over regions suffering from systematic wet bias.
Calibrating SALT: a sampling scheme to improve estimates of suspended sediment yield
Robert B. Thomas
1986-01-01
SALT (Selection At List Time) is a variable probability sampling scheme that provides unbiased estimates of suspended sediment yield and its variance. SALT performs better than standard schemes at estimating variance. Sampling probabilities are based on a sediment rating function which promotes greater sampling intensity during periods of high...
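SALT is a variable-probability (PPS-type) scheme. As a generic illustration of why such schemes yield unbiased totals, the sketch below implements the standard Hansen-Hurwitz estimator under with-replacement sampling proportional to an auxiliary rating variable; it is not the SALT algorithm itself, and the discharge-load relationship and numbers are invented.

```python
import numpy as np

def hansen_hurwitz_total(y_sample, p_sample):
    """Unbiased estimate of a population total and its variance under
    with-replacement sampling with selection probabilities p_i."""
    y = np.asarray(y_sample, float)
    p = np.asarray(p_sample, float)
    ratios = y / p
    n = len(y)
    total_hat = ratios.mean()
    var_hat = np.sum((ratios - total_hat) ** 2) / (n * (n - 1))
    return total_hat, var_hat

# Illustrative: per-period sediment loads sampled with probabilities from a rating curve
rng = np.random.default_rng(2)
flows = rng.lognormal(2.0, 1.0, 1000)          # auxiliary variable (discharge)
loads = 0.5 * flows ** 1.3                      # assumed "true" sediment load per period
p = flows / flows.sum()                         # selection prob. proportional to the rating
idx = rng.choice(len(flows), size=50, replace=True, p=p)
print(hansen_hurwitz_total(loads[idx], p[idx]), loads.sum())
```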
A voting-based star identification algorithm utilizing local and global distribution
NASA Astrophysics Data System (ADS)
Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua
2018-03-01
A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global distribution and local distribution of sensor stars are fully utilized, and the stratified voting scheme is adopted to obtain the candidates for sensor stars. The database optimization is employed to reduce its memory requirement and improve the robustness of the proposed algorithm. The simulation shows that the proposed algorithm exhibits a 99.81% identification rate with 2-pixel standard deviations of positional noise and 0.322-Mv magnitude noise. Compared with two similar algorithms, the proposed algorithm is more robust towards noise, and the average identification time and required memory are lower. Furthermore, the real sky test shows that the proposed algorithm performs well on the real star images.
2011-06-03
Permutational multivariate analysis of variance (PerMANOVA; McArdle and Anderson, 2001) was used to test hypotheses regarding regions and invasion level... for the differences due to invasion level after removing any differences due to regions, soil texture, and habitat. The null distribution for PerMANOVA... soil neighborhoods, PerMANOVA tests were carried out separately for each site. We did not use a stratified randomization scheme for these tests, under...
Acoustic sounding of wind velocity profiles in a stratified moving atmosphere.
Ostashev, V E; Georges, T M; Clifford, S F; Goedecke, G H
2001-06-01
The paper deals with analytical and numerical studies of the effects of atmospheric stratification on acoustic remote sensing of wind velocity profiles by sodars. Both bistatic and monostatic schemes are considered. Formulas for the Doppler shift of an acoustic echo signal scattered by atmospheric turbulence advected with the mean wind in a stratified moving atmosphere are derived. Numerical studies of these formulas show that errors in retrieving wind velocity can be of the order of 1 m/s if atmospheric stratification is ignored. Formulas for the height at which wind velocity is retrieved are also derived. Approaches are proposed which allow one to take into account the effects of atmospheric stratification when restoring the wind velocity profile from measured values of the Doppler shift and the time interval of acoustic impulse propagation from a sodar to the scattering volume and back to the ground.
Clarke, Diana E; Narrow, William E; Regier, Darrel A; Kuramoto, S Janet; Kupfer, David J; Kuhl, Emily A; Greiner, Lisa; Kraemer, Helena C
2013-01-01
This article discusses the design, sampling strategy, implementation, and data analytic processes of the DSM-5 Field Trials. The DSM-5 Field Trials were conducted by using a test-retest reliability design with a stratified sampling approach across six adult and four pediatric sites in the United States and one adult site in Canada. A stratified random sampling approach was used to enhance precision in the estimation of the reliability coefficients. A web-based research electronic data capture system was used for simultaneous data collection from patients and clinicians across sites and for centralized data management. Weighted descriptive analyses, intraclass kappa and intraclass correlation coefficients for stratified samples, and receiver operating curves were computed. The DSM-5 Field Trials capitalized on advances since DSM-III and DSM-IV in statistical measures of reliability (i.e., intraclass kappa for stratified samples) and other recently developed measures to determine confidence intervals around kappa estimates. Diagnostic interviews using DSM-5 criteria were conducted by 279 clinicians of varied disciplines who received training comparable to what would be available to any clinician after publication of DSM-5. Overall, 2,246 patients with various diagnoses and levels of comorbidity were enrolled, of which over 86% were seen for two diagnostic interviews. A range of reliability coefficients were observed for the categorical diagnoses and dimensional measures. Multisite field trials and training comparable to what would be available to any clinician after publication of DSM-5 provided “real-world” testing of DSM-5 proposed diagnoses.
Using known map category marginal frequencies to improve estimates of thematic map accuracy
NASA Technical Reports Server (NTRS)
Card, D. H.
1982-01-01
By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.
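The idea of weighting a within-map-category sample by known category proportions can be sketched as follows; the confusion matrix and area shares are invented, and this generic estimator is offered only as an illustration of the approach described, not as the paper's exact derivation.

```python
import numpy as np

def weighted_accuracy(confusion, map_proportions):
    """Estimate overall and per-category accuracy from a confusion matrix
    whose rows are map categories (sampled within category) and whose
    columns are reference categories, weighting rows by known
    map-category area proportions."""
    n = np.asarray(confusion, float)
    w = np.asarray(map_proportions, float)
    row_tot = n.sum(axis=1)
    p = w[:, None] * n / row_tot[:, None]   # estimated cell probabilities
    overall = np.trace(p)
    users = np.diag(p) / p.sum(axis=1)      # accuracy from the map perspective
    producers = np.diag(p) / p.sum(axis=0)  # accuracy from the reference perspective
    return overall, users, producers

# Illustrative 3-class example: 50 samples per map class, known area shares
conf = np.array([[45, 3, 2],
                 [5, 40, 5],
                 [2, 8, 40]])
print(weighted_accuracy(conf, [0.6, 0.3, 0.1]))
```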
Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction
2016-01-01
William F. Moulder, James D. Krieger, Denise T. Maurais-Galejs, Huy... described and validated experimentally with the formation of high quality microwave images. It is further shown that the scheme is more than two orders of... scheme (wherein transmitters and receivers are co-located) which require NTNR transmit-receive elements to achieve the same sampling. The second...
Effect of different sampling schemes on the spatial placement of conservation reserves in Utah, USA
Bassett, S.D.; Edwards, T.C.
2003-01-01
We evaluated the effect that three different sampling schemes used to organize spatially explicit biological information had on the spatial placement of conservation reserves in Utah, USA. The three sampling schemes consisted of a hexagon representation developed by the EPA/EMAP program (statistical basis), watershed boundaries (ecological), and the current county boundaries of Utah (socio-political). Four decision criteria were used to estimate effects, including amount of area, length of edge, lowest number of contiguous reserves, and greatest number of terrestrial vertebrate species covered. A fifth evaluation criterion was the effect each sampling scheme had on the ability of the modeled conservation reserves to cover the six major ecoregions found in Utah. Of the three sampling schemes, county boundaries covered the greatest number of species, but also created the longest length of edge and greatest number of reserves. Watersheds maximized species coverage using the least amount of area. Hexagons and watersheds provided the least amount of edge and fewest number of reserves. Although there were differences in area, edge and number of reserves among the sampling schemes, all three schemes covered all the major ecoregions in Utah and their inclusive biodiversity. © 2003 Elsevier Science Ltd. All rights reserved.
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2013 CFR
2013-07-01
(a) For an individual sample taken from an approximately 1 meter square portion of the entire...
Städler, Thomas; Haubold, Bernhard; Merino, Carlos; Stephan, Wolfgang; Pfaffelhuber, Peter
2009-01-01
Using coalescent simulations, we study the impact of three different sampling schemes on patterns of neutral diversity in structured populations. Specifically, we are interested in two summary statistics based on the site frequency spectrum as a function of migration rate, demographic history of the entire substructured population (including timing and magnitude of specieswide expansions), and the sampling scheme. Using simulations implementing both finite-island and two-dimensional stepping-stone spatial structure, we demonstrate strong effects of the sampling scheme on Tajima's D (DT) and Fu and Li's D (DFL) statistics, particularly under specieswide (range) expansions. Pooled samples yield average DT and DFL values that are generally intermediate between those of local and scattered samples. Local samples (and to a lesser extent, pooled samples) are influenced by local, rapid coalescence events in the underlying coalescent process. These processes result in lower proportions of external branch lengths and hence lower proportions of singletons, explaining our finding that the sampling scheme affects DFL more than it does DT. Under specieswide expansion scenarios, these effects of spatial sampling may persist up to very high levels of gene flow (Nm > 25), implying that local samples cannot be regarded as being drawn from a panmictic population. Importantly, many data sets on humans, Drosophila, and plants contain signatures of specieswide expansions and effects of sampling scheme that are predicted by our simulation results. This suggests that validating the assumption of panmixia is crucial if robust demographic inferences are to be made from local or pooled samples. However, future studies should consider adopting a framework that explicitly accounts for the genealogical effects of population subdivision and empirical sampling schemes. PMID:19237689
Deterministic multidimensional nonuniform gap sampling.
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
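A simplified one-dimensional sketch in the spirit of Poisson-gap sampling is given below: gaps between sampled grid points are Poisson deviates whose mean grows sinusoidally across the grid, with a crude rescaling loop to hit a target point count. This is an illustration only, not the published Poisson-gap algorithm or its deterministic counterpart.

```python
import numpy as np

def poisson_gap_1d(grid_size, n_samples, seed=0):
    """Draw a nonuniform sampling schedule on a Nyquist grid by inserting
    Poisson-distributed gaps whose mean grows sinusoidally toward the end
    of the grid (larger gaps where the signal has decayed)."""
    rng = np.random.default_rng(seed)
    scale = float(grid_size) / n_samples      # initial guess for the mean gap
    for _ in range(1000):                     # crude tuning loop (heuristic, may
        points, i = [], 0                     # settle near rather than at the target)
        while i < grid_size:
            points.append(i)
            mean_gap = scale * np.sin(0.5 * np.pi * i / grid_size)
            i += 1 + rng.poisson(mean_gap)
        if len(points) == n_samples:
            return np.array(points)
        scale *= len(points) / n_samples      # too many points -> larger gaps
    return np.array(points)

print(poisson_gap_1d(256, 64))
```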
Association between poverty and psychiatric disability among Chinese population aged 15-64 years.
Li, Ning; Pang, Lihua; Du, Wei; Chen, Gong; Zheng, Xiaoying
2012-12-30
Psychiatric disability is an important public health problem in China, and poverty may be positively correlated with disability. Few studies in the existing literature have explored the contribution of poverty to psychiatric disability among the Chinese population. Using nationally representative data, this paper aims to investigate the association between poverty and psychiatric disability in the Chinese population aged 15-64 years. We used the second China National Sample Survey on Disability, comprising 1.8 million people aged 15-64 years. Identification and classification for psychiatric disability was based on consensus manuals. We used standard weighting procedures to construct sample weights considering the multistage stratified cluster sampling survey scheme. Population weighted numbers, weighted proportions, and the adjusted odds ratios (OR) were calculated. For people with psychiatric disability aged 15-64 years, more than 4 million were below the poverty level in China. After controlling for other demographic variables, poverty was found to be significantly associated with psychiatric disability (OR=2.25, 95% Confidence Interval (CI) 2.15-2.35). Given that China is undergoing a rapid socioeconomic transition and psychiatric diseases have become a leading burden to individuals, the community, and health care systems, poverty reduction programs are warranted to prevent psychiatric disability and/or improve the lives of persons with psychiatric disability. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul
2018-07-01
Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office interpreted remotely sensed data was the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles in using per-polygon approaches compared to per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches respectively on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
Rapid evaluation of high-performance systems
NASA Astrophysics Data System (ADS)
Forbes, G. W.; Ruoff, J.
2017-11-01
System assessment for design often involves averages, such as rms wavefront error, that are estimated by ray tracing through a sample of points within the pupil. Novel general-purpose sampling and weighting schemes are presented and it is also shown that optical design can benefit from tailored versions of these schemes. It turns out that the type of Gaussian quadrature that has long been recognized for efficiency in this domain requires about 40-50% more ray tracing to attain comparable accuracy to generic versions of the new schemes. Even greater efficiency gains can be won, however, by tailoring such sampling schemes to the optical context where azimuthal variation in the wavefront is generally weaker than the radial variation. These new schemes are special cases of what is known in the mathematical world as cubature. Our initial results also led to the consideration of simpler sampling configurations that approximate the newfound cubature schemes. We report on the practical application of a selection of such schemes and make observations that aid in the discovery of novel cubature schemes relevant to optical design of systems with circular pupils.
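As a point of reference for the kind of pupil-average quadrature being discussed, the sketch below evaluates an RMS wavefront with a Gauss-Legendre rule in the squared radial coordinate and equally spaced azimuths; this is a generic textbook rule, not one of the tailored cubature schemes developed in the paper.

```python
import numpy as np

def pupil_rms(wavefront, n_radial=4, n_azimuthal=8):
    """Approximate the RMS (about zero) of a wavefront W(r, theta) over a
    unit circular pupil with a Gauss-Legendre rule in r**2 and equally
    spaced azimuths; subtract the pupil mean first for RMS wavefront error."""
    x, w = np.polynomial.legendre.leggauss(n_radial)
    t = 0.5 * (x + 1.0)              # Gauss nodes mapped to t = r^2 in [0, 1]
    wt = 0.5 * w                     # corresponding weights
    radii = np.sqrt(t)
    thetas = 2.0 * np.pi * (np.arange(n_azimuthal) + 0.5) / n_azimuthal
    mean_sq = 0.0
    for r, wr in zip(radii, wt):
        for th in thetas:
            mean_sq += wr * wavefront(r, th) ** 2 / n_azimuthal
    return np.sqrt(mean_sq)

# Illustrative: defocus-like wavefront W = r^2 - 0.5 (exact RMS = 1/sqrt(12) ~ 0.2887)
print(pupil_rms(lambda r, th: r ** 2 - 0.5))
```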
A Novel, Simplified Scheme for Plastics Identification: "JCE" Classroom Activity 104
ERIC Educational Resources Information Center
Harris, Mary E.; Walker, Barbara
2010-01-01
In this Activity, students identify samples of seven types of recyclable plastic by using a flowchart scheme. The flowchart procedure includes making density comparisons of the plastic samples in water and alcohol and observing physical changes of plastic samples subjected to boiling water temperatures and exposure to acetone. This scheme is…
Forest inventory and stratified estimation: a cautionary note
John Coulston
2008-01-01
The Forest Inventory and Analysis (FIA) Program uses stratified estimation techniques to produce estimates of forest attributes. Stratification must be unbiased and stratification procedures should be examined to identify any potential bias. This note explains simple techniques for identifying potential bias, discriminating between sample bias and stratification bias,...
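For readers unfamiliar with the estimators involved, a minimal sketch of stratified estimation of a mean and its variance with known stratum area weights follows; the plot values and weights are invented and this is not the FIA estimation code.

```python
import numpy as np

def stratified_mean(values_by_stratum, stratum_weights):
    """Stratified (or post-stratified) estimate of a population mean and its
    variance, with stratum weights taken as known area proportions."""
    means = np.array([np.mean(v) for v in values_by_stratum])
    variances = np.array([np.var(v, ddof=1) for v in values_by_stratum])
    sizes = np.array([len(v) for v in values_by_stratum], float)
    w = np.asarray(stratum_weights, float)
    mean_hat = np.sum(w * means)
    var_hat = np.sum(w ** 2 * variances / sizes)
    return mean_hat, var_hat

# Illustrative: plot-level volume (m^3/ha) in two strata covering 70% / 30% of the area
forest = [120.0, 135.0, 110.0, 128.0, 140.0]
nonforest = [15.0, 22.0, 18.0]
print(stratified_mean([forest, nonforest], [0.7, 0.3]))
```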
NASA Technical Reports Server (NTRS)
Card, Don H.; Strong, Laurence L.
1989-01-01
An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.
Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging
Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin
2018-01-01
Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling schemes may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated including the sampling starting point, sampling sparsity, and sampling uniformity. In the investigation of the influence of the sampling starting point, we further summarize two cases by considering the missing timing sequence between the probe injection and sampling starting time. Results show that the mean value of BP exhibits an obvious growth trend with an increase in the delay of the sampling starting point, and has a strong correlation with the sampling sparsity. The growth trend is much more obvious if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity, and independent of the sampling uniformity and the delay of sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results and simpler operations. PMID:29675325
Data splitting for artificial neural networks using SOM-based stratified sampling.
May, R J; Maier, H R; Dandy, G C
2010-03-01
Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
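A rough sketch of the overall idea, clustering the input space and then applying Neyman allocation (n_h proportional to N_h times sigma_h), is given below. KMeans is used here purely as a stand-in for the self-organizing map, so this is not the authors' method; all sizes and names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def neyman_split(X, y, n_train, n_clusters=8, seed=0):
    """Cluster the input space (KMeans here as a simple stand-in for a SOM),
    then allocate training samples to clusters by Neyman allocation
    n_h ~ N_h * sigma_h, sampling within each cluster at random."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    sizes = np.array([np.sum(labels == h) for h in range(n_clusters)])
    sigmas = np.array([np.std(y[labels == h]) if np.sum(labels == h) > 1 else 0.0
                       for h in range(n_clusters)])
    weights = sizes * sigmas
    weights = weights / weights.sum() if weights.sum() > 0 else sizes / sizes.sum()
    alloc = np.maximum(1, np.round(n_train * weights)).astype(int)  # approx. n_train total
    train_idx = []
    for h in range(n_clusters):
        members = np.flatnonzero(labels == h)
        take = min(alloc[h], members.size)
        train_idx.extend(rng.choice(members, size=take, replace=False))
    train_idx = np.array(train_idx)
    test_idx = np.setdiff1d(np.arange(len(y)), train_idx)
    return train_idx, test_idx

# Illustrative: noisy 1-D function approximation data
X = np.random.default_rng(1).uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + np.random.default_rng(2).normal(0, 0.1, 500)
tr, te = neyman_split(X, y, n_train=100)
print(len(tr), len(te))
```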
Törnros, Tobias; Dorn, Helen; Reichert, Markus; Ebner-Priemer, Ulrich; Salize, Hans-Joachim; Tost, Heike; Meyer-Lindenberg, Andreas; Zipf, Alexander
2016-11-21
Self-reporting is a well-established approach within the medical and psychological sciences. In order to avoid recall bias, i.e. past events being remembered inaccurately, the reports can be filled out on a smartphone in real-time and in the natural environment. This is often referred to as ambulatory assessment and the reports are usually triggered at regular time intervals. With this sampling scheme, however, rare events (e.g. a visit to a park or recreation area) are likely to be missed. When addressing the correlation between mood and the environment, it may therefore be beneficial to include participant locations within the ambulatory assessment sampling scheme. Based on the geographical coordinates, the database query system then decides if a self-report should be triggered or not. We simulated four different ambulatory assessment sampling schemes based on movement data (coordinates by minute) from 143 voluntary participants tracked for seven consecutive days. Two location-based sampling schemes incorporating the environmental characteristics (land use and population density) at each participant's location were introduced and compared to a time-based sampling scheme triggering a report on the hour as well as to a sampling scheme incorporating physical activity. We show that location-based sampling schemes trigger a report less often, but we obtain more unique trigger positions and a greater spatial spread in comparison to sampling strategies based on time and distance. Additionally, the location-based methods trigger significantly more often at rarely visited types of land use and less often outside the study region where no underlying environmental data are available.
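An illustrative (and much simplified) version of such a location-based trigger rule is sketched below; the land-use categories, time gap, and function names are assumptions, not the study's actual database query logic.

```python
from dataclasses import dataclass

@dataclass
class Fix:
    minute: int          # minutes since the previous trigger
    land_use: str        # e.g. "park", "residential", "commercial"

def should_trigger(fix, min_gap=60, rare_land_uses=("park", "forest", "water")):
    """Location-based trigger rule (illustrative): always trigger at rarely
    visited land uses, otherwise fall back to a fixed time interval."""
    if fix.land_use in rare_land_uses:
        return True
    return fix.minute >= min_gap

print(should_trigger(Fix(minute=12, land_use="park")))         # True (rare land use)
print(should_trigger(Fix(minute=12, land_use="residential")))  # False (wait for the hour)
```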
Language Learning Motivation in China: Results of a Large-Scale Stratified Survey
ERIC Educational Resources Information Center
You, Chenjing; Dörnyei, Zoltán
2016-01-01
This article reports on the findings of a large-scale cross-sectional survey of the motivational disposition of English language learners in secondary schools and universities in China. The total sample involved over 10,000 students and was stratified according to geographical region and teaching contexts, selecting participants both from urban…
2014-01-01
Introduction Health system reforms are undertaken with the aim of improving equity of access to health care. Their impact is generally analyzed based on health care utilization, without distinguishing between levels of care. This study aims to analyze inequities in access to the continuum of care in municipalities of Brazil and Colombia. Methods A cross-sectional study was conducted based on a survey of a multistage probability sample of people who had had at least one health problem in the prior three months (2,163 in Colombia and 2,167 in Brazil). The outcome variables were dichotomous variables on the utilization of curative and preventive services. The main independent variables were income, being the holder of a private health plan and, in Colombia, type of insurance scheme of the General System of Social Security in Health (SGSSS). For each country, the prevalence of the outcome variables was calculated overall and stratified by levels of per capita income, SGSSS insurance schemes and private health plan. Prevalence ratios were computed by means of Poisson regression models with robust variance, controlling for health care need. Results There are inequities in favor of individuals of a higher socioeconomic status: in Colombia, in the three different care levels (primary, outpatient secondary and emergency care) and preventive activities; and in Brazil, in the use of outpatient secondary care services and preventive activities, whilst lower-income individuals make greater use of the primary care services. In both countries, inequity in the use of outpatient secondary care is more pronounced than in the other care levels. Income in both countries, insurance scheme enrollment in Colombia and holding a private health plan in Brazil all contribute to the presence of inequities in utilization. Conclusions Twenty years after the introduction of reforms implemented to improve equity in access to health care, inequities, defined in terms of unequal use for equal need, are still present in both countries. The design of the health systems appears to determine access to the health services: two insurance schemes in Colombia with different benefits packages and a segmented system in Brazil, with a significant private component. PMID:24479581
Radley, Andrew; Ballard, Paul; Eadie, Douglas; MacAskill, Susan; Donnelly, Louise; Tappin, David
2013-04-15
The use of incentives to promote smoking cessation is a promising technique for increasing the effectiveness of interventions. This study evaluated the smoking cessation outcomes and factors associated with success for pregnant smokers who registered with a pilot incentivised smoking cessation scheme in a Scottish health board area (NHS Tayside). All pregnant smokers who engaged with the scheme between March 2007 and December 2009 were included in the outcome evaluation which used routinely collected data. Data utilised included: the Scottish National Smoking Cessation Dataset; weekly and periodic carbon monoxide (CO) breath tests; status of smoking cessation quit attempts; and amount of incentive paid. Process evaluation incorporated in-depth interviews with a cross-sectional sample of service users, stratified according to level of engagement. Quit rates for those registering with Give It Up For Baby were 54% at 4 weeks, 32% at 12 weeks and 17% at 3 months post partum (all data validated by CO breath test). Among the population of women identified as smoking at first booking over a one year period, 20.1% engaged with Give It Up For Baby, with 7.8% of pregnant smokers quit at 4 weeks. Pregnant smokers from more affluent areas were more successful with their quit attempt. The process evaluation indicates financial incentives can encourage attendance at routine advisory sessions where they are seen to form part of a wider reward structure, but work less well with those on lowest incomes who demonstrate high reliance on the financial reward. Uptake of Give It Up For Baby by the target population was higher than for all other health board areas offering specialist or equivalent cessation services in Scotland. Quit successes also compared favorably with other specialist interventions, adding to evidence of the benefits of incentives in this setting. The process evaluation helped to explain variations in retention and quit rates achieved by the scheme. This study describes a series of positive outcomes achieved through the use of incentives to promote smoking cessation amongst pregnant smokers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, R.O.; Eberhardt, L.L.; Fowler, E.B.
This paper is centered around the use of stratified random sampling for estimating the total amount (inventory) of 239-240Pu and uranium in surface soil at ten "safety-shot" sites on the Nevada Test Site (NTS) and Tonopah Test Range (TTR) that are currently being studied by the Nevada Applied Ecology Group (NAEG). The use of stratified random sampling has resulted in estimates of inventory at these desert study sites that have smaller standard errors than would have been the case had simple random sampling (no stratification) been used. Estimates of inventory are given for 235U, 238U, and 239-240Pu in soil at A Site of Area 11 on the NTS. Other results presented include average concentrations of one or more of these isotopes in soil and vegetation and in soil profile samples at depths to 25 cm. The regression relationship between soil and vegetation concentrations of 235U and 238U at adjacent sampling locations is also examined using three different models. The applicability of stratified random sampling to the estimation of concentration contours of 239-240Pu in surface soil using computer algorithms is also investigated. Estimates of such contours are obtained using several different methods. The planning of field sampling plans for estimating inventory and distribution is discussed. (auth)
Numerical analysis of internal waves in stratified wake flows
NASA Astrophysics Data System (ADS)
Fraunie, Philppe
2014-05-01
In laboratory investigations, increased attention has been given to internal waves generated by stationarily placed oscillating sources and moving bodies in stratified fluids [1]. The main attention was paid to studying flows past bodies of perfect shapes like the sphere [2], cylinder [3] or thin strip [3], for which the best theoretical (analytical or numerical) studies exist. Due to simplicity of geometry, flow around a strip has a potential to investigate separately the effects of drag and lift forces on the body by changing the slope of the horizontally moving strip, which can be placed vertically [1], horizontally [2], or be tilted under some angle to the direction of towing velocity [5]. Numerical modeling of a flow past a vertical strip uniformly towed at constant velocity in the horizontal direction in a linearly stratified tank, which was based on a finite differences solver adapted to the low Reynolds Navier-Stokes equation with transport equation for salinity (LES simulation [6] and RANS [7]), has demonstrated reasonable agreement with data of Schlieren visualization, density marker and probe measurements of internal wave fields. The chosen test cases allowed demonstrating the ability of selected numerical methods to represent stably stratified flows over horizontal strip [4] and hill type 2D obstacles [1, 3] with generation of internal waves. ACKNOWLEDGMENTS This research work was supported by the Region Provence Alpes Côte d'Azur - Modtercom project. The work was also supported by the Russian Foundation for Basic Research (grant 12-01-00128). REFERENCES [1] Chashechkin Yu.D., Mitkin V.V. Experimental study of a fine structure of 2D wakes and mixing past an obstacle in a continuously stratified fluid // Dynamics of Atmosphere and Oceans. 2001. V. 34. P. 165-187. [2] Chashechkin, Yu. D. Hydrodynamics of a sphere in a stratified fluid // Fluid Dyn. 1989. V. 24(1). P. 1-7. [3] Mitkin V. V., Chashechkin Yu. D. Transformation of hanging discontinuities into vortex systems in a stratified flow behind a cylinder // Fluid Dyn. 2007. V. 42(1). P. 12-23. [4] Bardakov R. N., Mitkin V. V., Chashechkin Yu. D. Fine structure of a stratified flow near a flat-plate surface // J. Appl. Mech. Tech. Phys. 2007. V. 48(6). P. 840-851. [5] Chashechkin Yu. D., Mitkin V. V. An effect of a lift force on the structure of attached internal waves in a continuously stratified fluid // Dokl. Phys. 2001. V. 46(6). P. 425-428. [6] Houcine H., Chashechkin Yu.D., Fraunié P., Fernando H.J.S., Gharbi A., Lili T. Numerical modeling of the generation of internal waves by uniform stratified flow over a thin vertical barrier // Int. J. Num. Methods in Fluids. 2012. V. 68(4). P. 451-466. DOI: 10.1002/fld.2513 [7] Bodnar T., Benes, Fraunié P., Kozel K. Application of Compact Finite-Difference Schemes to Simulations of Stably Stratified Fluid Flows // Applied Mathematics and Computation. 2012. V. 219. P. 3336-3353. doi:10.1016/j.amc.2011.08.058
Pivette, M; Auvigne, V; Guérin, P; Mueller, J E
2017-04-01
The aim of this study was to describe a tool based on vaccine sales to estimate vaccination coverage against seasonal influenza in near real-time in the French population aged 65 and over. Vaccine sales data available on sale-day +1 came from a stratified sample of 3004 pharmacies in metropolitan France. Vaccination coverage rates were estimated between 2009 and 2014 and compared with those obtained based on vaccination refund data from the general health insurance scheme. The seasonal vaccination coverage estimates were highly correlated with those obtained from refund data. They were also slightly higher, which can be explained by the inclusion of non-reimbursed vaccines and the consideration of all individuals aged 65 and over. We have developed an online tool that provides estimates of daily vaccination coverage during each vaccination campaign. The developed tool provides a reliable and near real-time estimation of vaccination coverage among people aged 65 and over. It can be used to evaluate and adjust public health messages. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Lin, Shuai-Chun; Pasquale, Louis R; Singh, Kuldev; Lin, Shan C
2018-03-01
The purpose of this article is to investigate the association between body mass index (BMI) and open-angle glaucoma (OAG) in a sample of the South Korean population. The sample consisted of a cross-sectional, population-based sample of 10,978 participants, 40 years of age and older, enrolled in the 2008 to 2011 Korean National Health and Nutrition Examination Survey. All participants had measured intraocular pressure <22 mm Hg and open anterior chamber angles. OAG was defined using disc and visual field criteria established by the International Society for Geographical and Epidemiological Ophthalmology. Multivariable analyses were performed to determine the association between BMI and OAG. These analyses were also performed in a sex-stratified and age-stratified manner. After adjusting for potential confounding variables, lower BMI (<19 kg/m²) was associated with greater risk of OAG compared with normal BMI (19 to 24.9 kg/m²) [odds ratio (OR), 2.28; 95% confidence interval (CI), 1.22-4.26]. In sex-stratified analyses, low BMI remained adversely related to glaucoma in women (OR, 3.45; 95% CI, 1.42-8.38) but not in men (OR, 1.72; 95% CI, 0.71-4.20). In age-stratified analyses, lower BMI was adversely related to glaucoma among subjects 40 to 49 years old (OR, 5.16; 95% CI, 1.86-14.36) but differences in glaucoma prevalence were not statistically significant between those with low versus normal BMI in other age strata. Lower BMI was associated with increased odds of OAG in a sample of the South Korean population. Multivariate analysis revealed the association to be statistically significant in women and those in the youngest age stratum.
SWIFT: SPH With Inter-dependent Fine-grained Tasking
NASA Astrophysics Data System (ADS)
Schaller, Matthieu; Gonnet, Pedro; Chalk, Aidan B. G.; Draper, Peter W.
2018-05-01
SWIFT runs cosmological simulations on peta-scale machines for solving gravity and SPH. It uses the Fast Multipole Method (FMM) to calculate gravitational forces between nearby particles, combining these with long-range forces provided by a mesh that captures both the periodic nature of the calculation and the expansion of the simulated universe. SWIFT currently uses a single fixed but time-variable softening length for all the particles. Many useful external potentials are also available, such as galaxy haloes or stratified boxes that are used in idealised problems. SWIFT implements a standard LCDM cosmology background expansion and solves the equations in a comoving frame; the equations of state of dark energy evolve with the scale factor. The structure of the code allows modified-gravity solvers or self-interacting dark matter schemes to be implemented. Many hydrodynamics schemes are implemented in SWIFT and the software allows users to add their own.
Individualized treatment in stage IVC nasopharyngeal carcinoma.
Chan, Oscar S H; Ngan, Roger K C
2014-09-01
The stage IVC nasopharyngeal carcinoma is a catch-all entity covering minute solitary metastasis to bulky disseminated disease. Prognosis varies greatly within this stage group. A subset of patients with oligometastases may benefit from aggressive local ablative therapy. Meanwhile, in multiple metastatic diseases, customizing conventional cytotoxics basing on individual tumor characteristics and previous chemotherapy responses can be a new direction to improve therapeutic results. Prognostic models built on clinical features and genomic profiles can be utilized to stratify different risk groups and tailor therapy schemes. Copyright © 2014 Elsevier Ltd. All rights reserved.
Modeling the tides of Massachusetts and Cape Cod Bays
Jenter, H.L.; Signell, R.P.; Blumberg, A.F.; ,
1993-01-01
A time-dependent, three-dimensional numerical modeling study of the tides of Massachusetts and Cape Cod Bays, motivated by construction of a new sewage treatment plant and ocean outfall for the city of Boston, has been undertaken by the authors. The numerical model being used is a hybrid version of the Blumberg and Mellor ECOM3D model, modified to include a semi-implicit time-stepping scheme and transport of a non-reactive dissolved constituent. Tides in the bays are dominated by the semi-diurnal frequencies, in particular by the M2 tide, due to the resonance of these frequencies in the Gulf of Maine. The numerical model reproduces measured tidal ellipses well in unstratified wintertime conditions. Stratified conditions present more of a problem because tidal-frequency internal wave generation and propagation significantly complicates the structure of the resulting tidal field. Nonetheless, the numerical model reproduces qualitative aspects of the stratified tidal flow that are consistent with observations in the bays.
Aronoff, Justin M; Yoon, Yang-soo; Soli, Sigfrid D
2010-06-01
Stratified sampling plans can increase the accuracy and facilitate the interpretation of a dataset characterizing a large population. However, such sampling plans have found minimal use in hearing aid (HA) research, in part because of a paucity of quantitative data on the characteristics of HA users. The goal of this study was to devise a quantitatively derived stratified sampling plan for HA research, so that such studies will be more representative and generalizable, and the results obtained using this method are more easily reinterpreted as the population changes. Pure-tone average (PTA) and age information were collected for 84,200 HAs acquired in 2006 and 2007. The distribution of PTA and age was quantified for each HA type and for a composite of all HA users. Based on their respective distributions, PTA and age were each divided into three groups, the combination of which defined the stratification plan. The most populous PTA and age group was also subdivided, allowing greater homogeneity within strata. Finally, the percentage of users in each stratum was calculated. This article provides a stratified sampling plan for HA research, based on a quantitative analysis of the distribution of PTA and age for HA users. Adopting such a sampling plan will make HA research results more representative and generalizable. In addition, data acquired using such plans can be reinterpreted as the HA population changes.
Wang, Jing; Chen, Lina; Ye, Ting; Zhang, Zhiguo; Ma, Jingdong
2014-07-15
Several years have passed since the rural New Cooperative Medical Scheme (NCMS) in China was established, and its policies have been continuously improved. Its policies on chronic diseases vary by county but have certain shared characteristics. Following this modification of medical insurance policy, this study reassesses the provision of insurance against expenditure on chronic diseases in rural areas, and analyzes its effect on impoverishment. We conducted an empirical study using multi-stage stratified random sampling. We surveyed 1,661 rural households in three provinces and analyzed the responses from 1,525 households that participated in NCMS, using descriptive and logistic regression analysis. The NCMS has reduced the prevalence of poverty and catastrophic health expenditure (CHE), as measured by out-of-pocket (OOP) payments exceeding 40% of total household expenditure, by decreasing medical expenditure. It provides obvious protection to households which include someone with chronic diseases. However, these households continue to face a higher financial risk than those without anyone suffering from chronic diseases. Variables about health service utilization and OOP payment differed significantly between households with or without people suffering from chronic disease. CHE risk is commonly associated with household income, the number of family members with chronic diseases, and OOP payments for outpatient and inpatient services in all three provinces. To reduce CHE risk for these households, it is critical to decrease OOP payments for health services by enhancing the effective reimbursement level of NCMS and strictly regulating the providers' behaviors. We recommend that a combination of changes be made to the rural health insurance scheme in China to improve its effect. These include improving the NCMS benefit package by broadening the catalogue of drugs and treatments covered, decreasing or abolishing deductibles and increasing the reimbursement ratio of outpatient services for people with chronic diseases, together with expansion of the insurance fund, and modifying health providers' behaviors by payment reform.
Zhu, Wensheng; Yuan, Ying; Zhang, Jingwen; Zhou, Fan; Knickmeyer, Rebecca C; Zhu, Hongtu
2017-02-01
The aim of this paper is to systematically evaluate a biased sampling issue associated with genome-wide association analysis (GWAS) of imaging phenotypes for most imaging genetic studies, including the Alzheimer's Disease Neuroimaging Initiative (ADNI). Specifically, the original sampling scheme of these imaging genetic studies is primarily the retrospective case-control design, whereas most existing statistical analyses of these studies ignore such sampling scheme by directly correlating imaging phenotypes (called the secondary traits) with genotype. Although it has been well documented in genetic epidemiology that ignoring the case-control sampling scheme can produce highly biased estimates, and subsequently lead to misleading results and suspicious associations, such findings are not well documented in imaging genetics. We use extensive simulations and a large-scale imaging genetic data analysis of the Alzheimer's Disease Neuroimaging Initiative (ADNI) data to evaluate the effects of the case-control sampling scheme on GWAS results based on some standard statistical methods, such as linear regression methods, while comparing it with several advanced statistical methods that appropriately adjust for the case-control sampling scheme. Copyright © 2016 Elsevier Inc. All rights reserved.
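The kind of bias at issue can be illustrated with a small simulation (not ADNI data): a secondary imaging phenotype that is truly unrelated to a SNP appears associated with it once subjects are ascertained as cases and controls, and inverse-probability weighting by the known sampling fractions, one standard adjustment, largely removes the distortion. All effect sizes and the weighting approach shown are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
N = 200_000
g = rng.binomial(2, 0.3, N)                    # SNP genotype (0/1/2)
pheno = 0.0 * g + rng.normal(0, 1, N)          # imaging phenotype: truly unrelated to g
# disease risk depends on BOTH the SNP and the phenotype (assumed effect sizes)
logit = -4 + 0.8 * g + 1.0 * pheno
disease = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# case-control ascertainment: keep all cases, and an equal number of controls
cases = np.flatnonzero(disease == 1)
controls = rng.choice(np.flatnonzero(disease == 0), size=cases.size, replace=False)
idx = np.concatenate([cases, controls])

# naive secondary-trait analysis ignores the sampling scheme
naive = sm.OLS(pheno[idx], sm.add_constant(g[idx])).fit()

# inverse-probability weighting by the known sampling fractions
f_case = 1.0                                   # all cases sampled
f_ctrl = cases.size / (N - cases.size)         # fraction of controls sampled
weights = np.where(disease[idx] == 1, 1 / f_case, 1 / f_ctrl)
weighted = sm.WLS(pheno[idx], sm.add_constant(g[idx]), weights=weights).fit()

print("naive slope:", naive.params[1])          # spuriously nonzero
print("weighted slope:", weighted.params[1])    # close to the true value of 0
```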
NASA Astrophysics Data System (ADS)
Clunie, David A.
2000-05-01
Proprietary compression schemes have a cost and risk associated with their support, end of life and interoperability. Standards reduce this cost and risk. The new JPEG-LS process (ISO/IEC 14495-1), and the lossless mode of the proposed JPEG 2000 scheme (ISO/IEC CD15444-1), new standard schemes that may be incorporated into DICOM, are evaluated here. Three thousand, six hundred and seventy-nine (3,679) single frame grayscale images from multiple anatomical regions, modalities and vendors, were tested. For all images combined JPEG-LS and JPEG 2000 performed equally well (3.81), almost as well as CALIC (3.91), a complex predictive scheme used only as a benchmark. Both out-performed existing JPEG (3.04 with optimum predictor choice per image, 2.79 for previous pixel prediction as most commonly used in DICOM). Text dictionary schemes performed poorly (gzip 2.38), as did image dictionary schemes without statistical modeling (PNG 2.76). Proprietary transform based schemes did not perform as well as JPEG-LS or JPEG 2000 (S+P Arithmetic 3.4, CREW 3.56). Stratified by modality, JPEG-LS compressed CT images (4.00), MR (3.59), NM (5.98), US (3.4), IO (2.66), CR (3.64), DX (2.43), and MG (2.62). CALIC always achieved the highest compression except for one modality for which JPEG-LS did better (MG digital vendor A JPEG-LS 4.02, CALIC 4.01). JPEG-LS outperformed existing JPEG for all modalities. The use of standard schemes can achieve state of the art performance, regardless of modality, JPEG-LS is simple, easy to implement, consumes less memory, and is faster than JPEG 2000, though JPEG 2000 will offer lossy and progressive transmission. It is recommended that DICOM add transfer syntaxes for both JPEG-LS and JPEG 2000.
Dennis, Jessica; Medina-Rivera, Alejandra; Truong, Vinh; Antounians, Lina; Zwingerman, Nora; Carrasco, Giovana; Strug, Lisa; Wells, Phil; Trégouët, David-Alexandre; Morange, Pierre-Emmanuel; Wilson, Michael D; Gagnon, France
2017-07-01
Tissue factor pathway inhibitor (TFPI) regulates the formation of intravascular blood clots, which manifest clinically as ischemic heart disease, ischemic stroke, and venous thromboembolism (VTE). TFPI plasma levels are heritable, but the genetics underlying TFPI plasma level variability are poorly understood. Herein we report the first genome-wide association scan (GWAS) of TFPI plasma levels, conducted in 251 individuals from five extended French-Canadian families ascertained on VTE. To improve discovery, we also applied a hypothesis-driven (HD) GWAS approach that prioritized single nucleotide polymorphisms (SNPs) in (1) hemostasis pathway genes, and (2) vascular endothelial cell (EC) regulatory regions, which are among the highest expressers of TFPI. Our GWAS identified 131 SNPs with suggestive evidence of association (P-value < 5 × 10⁻⁸), but no SNPs reached the genome-wide threshold for statistical significance. Hemostasis pathway genes were not enriched for TFPI plasma level associated SNPs (global hypothesis test P-value = 0.147), but EC regulatory regions contained more TFPI plasma level associated SNPs than expected by chance (global hypothesis test P-value = 0.046). We therefore stratified our genome-wide SNPs, prioritizing those in EC regulatory regions via stratified false discovery rate (sFDR) control, and reranked the SNPs by q-value. The minimum q-value was 0.27, and the top-ranked SNPs did not show association evidence in the MARTHA replication sample of 1,033 unrelated VTE cases. Although this study did not result in new loci for TFPI, our work lays out a strategy to utilize epigenomic data in prioritization schemes for future GWAS studies. © 2017 WILEY PERIODICALS, INC.
Ayotte, Joseph D.; Toppin, Kenneth W.
1995-01-01
The U.S. Geological Survey, in cooperation with the State of New Hampshire, Department of Environmental Services, Water Resources Division, has assessed the geohydrology and water quality of stratified-drift aquifers in the middle Merrimack River basin in south-central New Hampshire. The middle Merrimack River basin drains 469 square miles; 98 square miles is underlain by stratified-drift aquifers. Saturated thickness of stratified drift within the study area is generally less than 40 feet but locally greater than 100 feet. Transmissivity of stratified-drift aquifers is generally less than 2,000 feet squared per day but locally exceeds 6,000 feet squared per day. At present (1990), ground-water withdrawals from stratified drift for public supply are about 0.4 million gallons per day within the basin. Many of the stratified-drift aquifers within the study area are not developed to their fullest potential. The geohydrology of stratified-drift aquifers was investigated by focusing on basic aquifer properties, including aquifer boundaries; recharge, discharge, and direction of ground-water flow; saturated thickness and storage; and transmissivity. Surficial geologic mapping assisted in the determination of aquifer boundaries. Data from 757 wells and test borings were used to produce maps of water-table altitude, saturated thickness, and transmissivity of stratified drift. More than 10 miles of seismic-refraction profiling and 14 miles of seismic-reflection profiling were also used to construct the water-table and saturated-thickness maps. Stratified-drift aquifers in the southern, western, and central parts of the study area are typically small and discontinuous, whereas aquifers in the eastern part along the Merrimack River valley are continuous. The Merrimack River valley aquifers formed in glacial Lakes Merrimack and Hooksett. Many other smaller discontinuous aquifers formed in small temporary ponds during deglaciation. A stratified-drift aquifer in Goffstown was analyzed for aquifer yield by use of a two-dimensional, finite-difference ground-water-flow model. Yield of the Goffstown aquifer was estimated to be 2.5 million gallons per day. Sensitivity analysis showed that the estimate of aquifer yield was most sensitive to changes in hydraulic conductivity. The amount of water induced into the aquifer from the Piscataquog River was most affected by changes in estimates of streambed conductance. Results of analysis of water samples from 10 test wells indicate that, with some exceptions, water in the stratified-drift aquifers generally meets U.S. Environmental Protection Agency primary and secondary drinking-water regulations. Water from two wells had elevated sodium concentrations, water from two wells had elevated concentrations of dissolved iron, and water from seven wells had elevated concentrations of manganese. Known areas of contamination were avoided during water-quality sampling.
Development of portable defocusing micro-scale spatially offset Raman spectroscopy.
Realini, Marco; Botteon, Alessandra; Conti, Claudia; Colombo, Chiara; Matousek, Pavel
2016-05-10
We present, for the first time, portable defocusing micro-Spatially Offset Raman Spectroscopy (micro-SORS). Micro-SORS is a concept permitting the analysis of thin, highly turbid stratified layers beyond the reach of conventional Raman microscopy. The technique is applicable to the analysis of painted layers in cultural heritage (panels, canvases and mural paintings, painted statues and decorated objects in general) as well as in many other areas including polymer, biological and biomedical applications, catalytic and forensic sciences where highly turbid stratified layers are present and where invasive analysis is undesirable or impossible. So far the technique has been demonstrated only on benchtop Raman microscopes, precluding the non-invasive analysis of larger samples and samples in situ. The new set-up is characterised conceptually on a range of artificially assembled two-layer systems demonstrating its benefits and performance across several application areas. These included a stratified polymer sample, a pharmaceutical tablet and layered paint samples. The same samples were also analysed by a high performance (non-portable) benchtop Raman microscope to provide benchmarking against our earlier research. The realisation of the vision of delivering portability to micro-SORS has a transformative potential spanning across multiple disciplines as it fully unlocks, for the first time, the non-invasive and non-destructive aspects of micro-SORS enabling it to be applied also to large and non-portable samples in situ without recourse to removing samples, or their fragments, for laboratory analysis on benchtop Raman microscopes.
Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.
Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian
2014-01-01
In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
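The fast greedy incremental solution mentioned above can be illustrated with a short farthest-point-style sketch: starting from a candidate pool on the unit sphere, repeatedly add the direction whose smallest (antipodally folded) angle to the already-chosen set is largest. This is a hedged illustration of the idea only; the candidate pool, seed choice, and function name are assumptions, and the authors' actual DSC/CSC solvers use MILP and gradient descent.

```python
import numpy as np

def greedy_spherical_code(candidates, k, seed_index=0):
    """Greedily pick k directions that maximize the minimal angular separation.

    Angles are antipodally folded (u and -u are equivalent), as is customary
    for diffusion gradient directions."""
    cand = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    chosen = [seed_index]
    min_angle = np.arccos(np.clip(np.abs(cand @ cand[seed_index]), 0, 1))
    for _ in range(k - 1):
        min_angle[chosen] = -np.inf               # never re-pick a chosen direction
        nxt = int(np.argmax(min_angle))           # farthest-from-set candidate
        chosen.append(nxt)
        ang_to_new = np.arccos(np.clip(np.abs(cand @ cand[nxt]), 0, 1))
        min_angle = np.minimum(min_angle, ang_to_new)
    return cand[chosen]

# toy usage: pick 30 single-shell directions from 1,000 random candidates
rng = np.random.default_rng(1)
pool = rng.normal(size=(1000, 3))
directions = greedy_spherical_code(pool, 30)
dots = np.abs(directions @ directions.T)
np.fill_diagonal(dots, 0.0)
print("minimal separation: %.1f degrees" % np.degrees(np.arccos(dots.max())))
```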
Lommen, Jonathan M; Flassbeck, Sebastian; Behl, Nicolas G R; Niesporek, Sebastian; Bachert, Peter; Ladd, Mark E; Nagel, Armin M
2018-08-01
To investigate and to reduce influences on the determination of the short and long apparent transverse relaxation times (T2,s*, T2,l*) of 23Na in vivo with respect to signal sampling. The accuracy of T2* determination was analyzed in simulations for five different sampling schemes. The influence of noise in the parameter fit was investigated for three different models. A dedicated sampling scheme was developed for brain parenchyma by numerically optimizing the parameter estimation. This scheme was compared in vivo to linear sampling at 7T. For the considered sampling schemes, T2,s* / T2,l* exhibit an average bias of 3% / 4% with a variation of 25% / 15% based on simulations with previously published T2* values. The accuracy could be improved with the optimized sampling scheme by strongly averaging the earliest sample. A fitting model with a constant noise floor can increase accuracy, while additionally fitting a noise term is only beneficial in the case of sampling until late echo times > 80 ms. T2* values in white matter were determined to be T2,s* = 5.1 ± 0.8 / 4.2 ± 0.4 ms and T2,l* = 35.7 ± 2.4 / 34.4 ± 1.5 ms using linear/optimized sampling. Voxel-wise T2* determination of 23Na is feasible in vivo. However, sampling and fitting methods have to be chosen carefully to retrieve accurate results. Magn Reson Med 80:571-584, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
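The fitting model with a constant noise floor can be written as a biexponential decay plus an offset, S(TE) = A_s exp(-TE/T2,s*) + A_l exp(-TE/T2,l*) + c. Below is a minimal curve-fitting sketch on synthetic data; the echo times, amplitudes, bounds, and noise level are invented to roughly mimic the published T2* ranges and are not the authors' protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp_with_floor(te, a_s, t2s, a_l, t2l, c):
    """Two-component transverse decay plus a constant noise floor."""
    return a_s * np.exp(-te / t2s) + a_l * np.exp(-te / t2l) + c

# synthetic decay curve; amplitudes, times and noise are invented for illustration
te = np.linspace(0.3, 80.0, 32)                      # echo times in ms
truth = (0.6, 4.5, 0.4, 35.0, 0.02)                  # A_s, T2,s*, A_l, T2,l*, floor
rng = np.random.default_rng(2)
signal = biexp_with_floor(te, *truth) + rng.normal(0.0, 0.01, te.size)

p0 = (0.5, 5.0, 0.5, 30.0, 0.0)                      # starting values
bounds = ([0, 0.1, 0, 5, 0], [2, 20, 2, 100, 0.5])
popt, _ = curve_fit(biexp_with_floor, te, signal, p0=p0, bounds=bounds)
print("fitted T2,s* = %.2f ms, T2,l* = %.2f ms" % (popt[1], popt[3]))
```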
NASA Astrophysics Data System (ADS)
Qiang, Wei
2011-12-01
We describe a sampling scheme for two-dimensional (2D) solid state NMR experiments, which can be readily applied to sensitivity-limited samples. The sampling scheme utilizes a continuous, non-uniform sampling profile for the indirect dimension, i.e. the acquisition number decreases as a function of the evolution time (t1) in the indirect dimension. For a beta amyloid (Aβ) fibril sample, we observed overall 40-50% signal enhancement by measuring the cross peak volume, while the cross peak linewidths remained comparable to the linewidths obtained by regular sampling and processing strategies. Both linear and Gaussian decay functions for the acquisition numbers result in a similar percentage of signal increase. In addition, we demonstrated that this sampling approach can be applied with different dipolar recoupling approaches such as radiofrequency assisted diffusion (RAD) and finite-pulse radio-frequency-driven recoupling (fpRFDR). This sampling scheme is especially suitable for sensitivity-limited samples which require long signal averaging for each t1 point, for instance biological membrane proteins, where only a small fraction of the sample is isotopically labeled.
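A decaying-acquisition schedule of the kind described above can be sketched by letting the number of scans averaged at each t1 increment fall off from a maximum at t1 = 0 following a Gaussian or linear profile. All numbers below (increment count, scan counts, decay width) are illustrative assumptions rather than the values used in the study.

```python
import numpy as np

def nus_schedule(n_t1, n_max, n_min=4, shape="gaussian", sigma_frac=0.4):
    """Acquisition counts that decay with the indirect evolution time t1.

    n_max scans are averaged at t1 = 0 and the count falls toward n_min at the
    last increment, following a Gaussian or linear decay profile."""
    t = np.arange(n_t1)
    if shape == "gaussian":
        profile = np.exp(-0.5 * (t / (sigma_frac * n_t1)) ** 2)
    elif shape == "linear":
        profile = 1.0 - t / (n_t1 - 1)
    else:
        raise ValueError("shape must be 'gaussian' or 'linear'")
    return np.round(n_min + (n_max - n_min) * profile).astype(int)

# example: 64 t1 increments, 128 scans at t1 = 0 decaying toward 4 scans
counts = nus_schedule(64, 128)
print(counts[:6], "...", counts[-4:], "total scans:", counts.sum())
```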
Coalescent: an open-science framework for importance sampling in coalescent theory.
Tewari, Susanta; Spouge, John L
2015-01-01
Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2⁸ programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
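The effective sample size referred to in the conclusions is conventionally computed from the importance weights (the Kish estimate, ESS = (sum of weights)^2 / sum of squared weights). A small, framework-independent sketch of that calculation, with invented log-weights standing in for a real proposal:

```python
import numpy as np

def effective_sample_size(log_weights):
    """Kish effective sample size from unnormalized log importance weights."""
    lw = np.asarray(log_weights, dtype=float)
    w = np.exp(lw - lw.max())              # stabilize before exponentiating
    return w.sum() ** 2 / np.sum(w ** 2)

# toy usage: log-weights from some hypothetical importance sampling proposal
rng = np.random.default_rng(3)
log_w = rng.normal(0.0, 1.5, size=10_000)
print("ESS = %.1f out of %d samples" % (effective_sample_size(log_w), log_w.size))
```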
A new sampling scheme for tropical forest monitoring using satellite imagery
Frederic Achard; Tim Richards; Javier Gallego
2000-01-01
At the global level, a sampling scheme for tropical forest change assessment, using high resolution satellite images, has been defined using sampling units independent of any particular satellite sensor. For this purpose, a hexagonal tessellation of 3,600 km² has been chosen as the sampling frame.
Wang, Sheng; Qian, Xin; Han, Bo-Ping; Luo, Lian-Cong; Hamilton, David P
2012-05-15
Thermal regime is strongly associated with hydrodynamics in water, and it plays an important role in the dynamics of water quality and ecosystem succession of stratified reservoirs. Changes in both climate and hydrological conditions can modify thermal regimes. Liuxihe Reservoir (23°45'50″N; 113°46'52″E) is a large, stratified and deep reservoir in Guangdong Province, located at the Tropic of Cancer of southern China. The reservoir is a warm monomictic water body with a long period of summer stratification and a short period of mixing in winter. The vertical distribution of suspended particulate material and nutrients are influenced strongly by the thermal structure and the associated flow fields. The hypolimnion becomes anoxic in the stratified period, increasing the release of nutrients from the bottom sediments. Fifty-one years of climate and reservoir operational observations are used here to show the marked changes in local climate and reservoir operational schemes. The data show increasing air temperature and more violent oscillations in inflow volumes in the last decade, while the inter-annual water level fluctuations tend to be more moderate. To quantify the effects of changes in climate and hydrological conditions on thermal structure, we used a numerical simulation model to create scenarios incorporating different air temperatures, inflow volumes, and water levels. The simulations indicate that water column stability, the duration of the mixing period, and surface and outflow temperatures are influenced by both natural factors and by anthropogenic factors such as climate change and reservoir operation schemes. Under continuous warming and more stable storage in recent years, the simulations indicate greater water column stability and increased duration of stratification, while irregular large discharge events may reduce stability and lead to early mixing in autumn. Our results strongly suggest that more attention should be focused on water quality in years of extreme climate variation and hydrological conditions, and selective withdrawal of deep water may provide an efficient means to reduce internal loading in warm years. Copyright © 2012 Elsevier Ltd. All rights reserved.
Nayak, Deepak Ranjan; Dash, Ratnakar; Majhi, Banshidhar
2017-01-01
This paper presents an automatic classification system for segregating pathological brains from normal brains in magnetic resonance imaging scans. The proposed system employs a contrast limited adaptive histogram equalization scheme to enhance the diseased region in brain MR images. Two-dimensional stationary wavelet transform (SWT) is harnessed to extract features from the preprocessed images. The feature vector is constructed using the energy and entropy values computed from the level-2 SWT coefficients. Then, the relevant and uncorrelated features are selected using a symmetric uncertainty ranking filter. Subsequently, the selected features are given as input to the proposed AdaBoost with support vector machine (SVM) classifier, where SVM is used as the base classifier of the AdaBoost algorithm. To validate the proposed system, three standard MR image datasets, Dataset-66, Dataset-160, and Dataset-255, have been utilized. Results from 5 runs of stratified k-fold cross-validation indicate that the suggested scheme offers better performance than other existing schemes in terms of accuracy and number of features. The proposed system achieves ideal classification over Dataset-66 and Dataset-160, whereas for Dataset-255, an accuracy of 99.45% is achieved. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
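A minimal sketch of the classification stage (AdaBoost with an SVM base learner, evaluated with stratified k-fold cross-validation) is given below. The SWT energy/entropy feature extraction is not reproduced; synthetic features stand in for it, and the scikit-learn parameter name estimator assumes version 1.2 or later (older releases call it base_estimator). This illustrates the general scheme, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# stand-in for the SWT energy/entropy feature vectors of brain MR slices
X, y = make_classification(n_samples=255, n_features=14, n_informative=8,
                           weights=[0.2], random_state=0)

# AdaBoost with an SVM base learner; SAMME reweights samples, which SVC supports.
# 'estimator' assumes scikit-learn >= 1.2 (older releases use 'base_estimator').
clf = make_pipeline(
    StandardScaler(),
    AdaBoostClassifier(estimator=SVC(kernel="rbf", C=1.0),
                       n_estimators=25, algorithm="SAMME", random_state=0),
)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print("stratified 5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```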
Sampling error in timber surveys
Austin Hasel
1938-01-01
Various sampling strategies are evaluated for efficiency in an interior ponderosa pine forest. In a 5760 acre tract, efficiency was gained by stratifying into quarter acre blocks and sampling randomly from within. A systematic cruise was found to be superior for volume estimation.
Sediment toxicity test results for the Urban Waters Study 2010, Bellingham Bay, Washington
Biedenbach, James M.
2011-01-01
The Washington Department of Ecology annually determines the quality of recently deposited sediments in Puget Sound as a part of Ecology's Urban Waters Initiative. The annual sediment quality studies use the Sediment Quality Triad (SQT) approach, thus relying on measures of chemical contamination, toxicity, and benthic in-faunal effects (Chapman, 1990). Since 2002, the studies followed a rotating sampling scheme, each year sampling a different region of the greater Puget Sound Basin. During the annual studies, samples are collected in locations selected with a stratified-random design, patterned after the designs previously used in baseline surveys completed during 1997-1999 (Long and others, 2003; Wilson and Partridge, 2007). Sediment samples were collected by personnel from the Washington Department of Ecology in June of 2010 and shipped to the U.S. Geological Survey (USGS) laboratory in Corpus Christi, Texas, where the tests were performed. Sediment pore water was extracted with a pneumatic apparatus and was stored frozen. Just before testing, water-quality measurements were made and salinity adjusted, if necessary. Tests were performed on a dilution series of each sample consisting of 100-, 50-, and 25-percent pore-water concentrations. The specific objectives of this study were to:
* Extract sediment pore water from a total of 30 sediment samples from the Bellingham Bay, Washington area within a day of receipt of the samples.
* Measure water-quality parameters (salinity, dissolved oxygen, pH, sulfide, and ammonia) of thawed pore-water samples before testing and adjust salinity, temperature and dissolved oxygen, if necessary, to obtain optimal ranges for the test species.
* Conduct the fertilization toxicity test with pore water using sea urchin (Strongylocentrotus purpuratus) gametes.
* Perform quality control assays with reference pore water, dilution blanks and a positive control dilution series with sodium dodecyl sulfate (SDS) in conjunction with each test.
* Determine which samples caused a significant decrease in percent fertilization success relative to the negative control.
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
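The simulation logic described above (repeatedly drawing a stratified random sample of 4,000 women allocated across three strata and checking how tightly the estimated proportion clusters) can be sketched as follows. The stratum sizes and density proportions are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical strata: (population size, true proportion with dense breasts)
strata = {"metropolitan": (650_000, 0.55),
          "urban":        (450_000, 0.50),
          "rural":        (240_000, 0.42)}
total = sum(n for n, _ in strata.values())
sample_size = 4000

estimates = []
for _ in range(1000):                                  # repeat the simulated survey
    est = 0.0
    for n_h, p_h in strata.values():
        n_sample = round(sample_size * n_h / total)    # proportional allocation
        draws = rng.random(n_sample) < p_h             # sampled women, dense or not
        est += (n_h / total) * draws.mean()            # stratum-weighted estimate
    estimates.append(est)

estimates = np.array(estimates)
print("mean estimate %.4f, simulation SD %.4f" % (estimates.mean(), estimates.std()))
```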
NASA Astrophysics Data System (ADS)
Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas
2016-09-01
Vegetation monitoring is becoming a major issue in the urban environment due to the services vegetation provides, and it necessitates accurate and up-to-date mapping. Very High Resolution satellite images enable a detailed mapping of urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation but necessitate a large amount of training data. In this context, this study proposes to investigate the performance of different sampling strategies in order to reduce the number of examples needed. Two window-based active learning algorithms from the state of the art are compared to a classical stratified random sampling, and a third strategy combining active learning and stratified sampling is proposed. The efficiency of these strategies is evaluated on two medium-size French cities, Strasbourg and Rennes, associated with different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods and that it should be used more frequently to evaluate new active learning methods. Moreover, the active learning strategies proposed in this work make it possible to reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
Matching soil salinization and cropping systems in communally managed irrigation schemes
NASA Astrophysics Data System (ADS)
Malota, Mphatso; Mchenga, Joshua
2018-03-01
Occurrence of soil salinization in irrigation schemes can be a good indicator of the need to introduce highly salt-tolerant crops in irrigation schemes. This study assessed the level of soil salinization in a communally managed 233 ha Nkhate irrigation scheme in the Lower Shire Valley region of Malawi. Soil samples were collected within the 0-0.4 m soil depth from eight randomly selected irrigation blocks. Irrigation water samples were also collected from five randomly selected locations along the Nkhate River, which supplies irrigation water to the scheme. Salinity of both the soil and the irrigation water samples was determined using an electrical conductivity (EC) meter. Analysis of the results indicated that even for crops with very low salinity tolerance (ECi < 2 dS/m), the irrigation water was suitable for irrigation purposes. However, root-zone soil salinity profiles showed that leaching of salts was not adequate and that the leaching requirement for the scheme needs to be re-evaluated and consistently adhered to during irrigation operation. The study concluded that the cropping system at the scheme needs to be adjusted to match the prevailing soil and irrigation water salinity levels.
Sticky trap and stem-tap sampling protocols for the Asian citrus psyllid (Hemiptera: Psyllidae)
USDA-ARS?s Scientific Manuscript database
Sampling statistics were obtained to develop a sampling protocol for estimating numbers of adult Diaphorina citri in citrus using two different sampling methods: yellow sticky traps and stem–tap samples. A 4.0 ha block of mature orange trees was stratified into ten 0.4 ha strata and sampled using...
Increasing cancer detection yield of breast MRI using a new CAD scheme of mammograms
NASA Astrophysics Data System (ADS)
Tan, Maxine; Aghaei, Faranak; Hollingsworth, Alan B.; Stough, Rebecca G.; Liu, Hong; Zheng, Bin
2016-03-01
Although breast MRI is the most sensitive imaging modality to detect early breast cancer, its cancer detection yield in breast cancer screening has to date been quite low (< 3 to 4%, even for the small group of high-risk women). The purpose of this preliminary study is to test the potential of developing and applying a new computer-aided detection (CAD) scheme of digital mammograms to identify women at high risk of harboring mammography-occult breast cancers, which can be detected by breast MRI. For this purpose, we retrospectively assembled a dataset involving 30 women who had both mammography and breast MRI screening examinations. All mammograms were interpreted as negative, while 5 cancers were detected using breast MRI. We developed a CAD scheme of mammograms, which includes a new risk model based on quantitative mammographic image feature analysis, to stratify women into two groups with high and low risk of harboring mammography-occult cancer. Among the 30 women, 9 were classified into the high-risk group by the CAD scheme, which included all 5 women who had cancer detected by breast MRI. All 21 low-risk women remained negative on the breast MRI examinations. The cancer detection yield of breast MRI applied to this dataset substantially increased from 16.7% (5/30) to 55.6% (5/9), while eliminating 84% (21/25) of unnecessary breast MRI screenings. The study demonstrated the potential of applying a new CAD scheme to significantly increase the cancer detection yield of breast MRI, while simultaneously reducing the number of negative MRIs in breast cancer screening.
Ratnarajan, Gokulan; Kean, Jane; French, Karen; Parker, Mike; Bourne, Rupert
2015-09-01
To establish the safety of the CHANGES glaucoma referral refinement scheme (GRRS). The CHANGES scheme risk-stratifies glaucoma referrals, with low-risk referrals seen by a community-based specialist optometrist (OSI) while high-risk referrals are referred directly to the hospital. In this study, those patients discharged by the OSI were reviewed by the consultant ophthalmologist to establish a 'false negative' rate (Study 1). Virtual review of optic disc photographs was carried out both by a hospital-based specialist optometrist and by the consultant ophthalmologist (Study 2). None of the 34 discharged patients seen by the consultant were found to have glaucoma or started on treatment to lower the intra-ocular pressure. Five of the 34 (15%) were classified as 'glaucoma suspect' based on the appearance of the optic disc and offered a follow-up appointment. Virtual review by both the consultant and the optometrist had a sensitivity of 80%, whilst the false positive rate was 3.4% for the optometrist and 32% for the consultant (p < 0.05). The false negative rate of the OSIs in the CHANGES scheme was 15%; however, there were no patients in whom glaucoma was missed. Virtual review in experienced hands can be as effective as clinical review by a consultant, and is a valid method to ensure glaucoma is not missed in GRRS. The CHANGES scheme, which includes virtual review, is effective at reducing referrals to the hospital whilst not compromising patient safety. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.
Cross sectional study of young people's awareness of and involvement with tobacco marketing.
MacFadyen, L; Hastings, G; MacKintosh, A M
2001-03-03
To examine young people's awareness of and involvement with tobacco marketing and to determine the association, if any, between this and their smoking behaviour. Cross sectional, quantitative survey, part interview and part self completion, administered in respondents' homes. North east England. Stratified random sample of 629 young people aged 15 and 16 years who had "opted in" to research through a postal consent procedure. There was a high level of awareness of and involvement in tobacco marketing among the 15-16 year olds sampled in the study: around 95% were aware of advertising and all were aware of some method of point of sale marketing. Awareness of and involvement with tobacco marketing were both significantly associated with being a smoker: for example, 30% (55/185) of smokers had received free gifts through coupons in cigarette packs, compared with 11% (21/199) of non-smokers (P<0.001). When other factors known to be linked with teenage smoking were held constant, awareness of coupon schemes, brand stretching, and tobacco marketing in general were all independently associated with current smoking status. Teenagers are aware of, and are participating in, many forms of tobacco marketing, and both awareness and participation are associated with current smoking status. This suggests that the current voluntary regulations designed to protect young people from smoking are not working, and that statutory regulations are required.
Chang, Ye; Guo, Xiaofan; Guo, Liang; Li, Zhao; Yang, Hongmei; Yu, Shasha; Sun, Guozhe; Sun, Yingxian
2016-01-01
This study aimed to comprehensively compare the general characteristics, lifestyles, serum parameters, ultrasonic cardiogram (UCG) parameters, depression, quality of life, and various comorbidities between empty nest and non-empty nest elderly among rural populations in northeast China. This analysis was based on our previous study which was conducted from January 2012 to August 2013, using a multistage, stratified, random cluster sampling scheme. The final analyzed sample consisted of 3208 participants aged no less than 60 years, which was further classified into three groups: non-empty nest group, empty nest group (living as a couple), and empty nest group (living alone). More than half of the participants were empty nest elderly (60.5%). There were no significant statistical differences for serum parameters, UCG parameters, lifestyles, dietary pattern, and scores of Patient Health Questionnaire-9 (PHQ-9) and World Health Organization Quality of Life questionnaire, abbreviated version (WHOQOL-BREF) among the three groups. Empty nest elderly showed no more risk for comorbidities such as general obesity, abdominal obesity, hyperuricemia, hyperhomocysteinemia, diabetes, dyslipidemia, left atrial enlargement (LAE), and stroke. Our study indicated that empty nest elderly showed no more risk for depression, low quality of life and comorbidities such as general obesity, abdominal obesity, hyperuricemia, hyperhomocysteinemia, diabetes, dyslipidemia, LAE, and stroke among rural populations in northeast China. PMID:27618905
Grey W. Pendleton
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation...
Residents Living in Residential Care Facilities: United States, 2010
... NSRCF used a stratified two-stage probability sample design. The first stage was the selection of RCFs ... was 99%. A detailed description of NSRCF sampling design, data collection, and procedures is provided both in ...
Analyzing Hydraulic Conductivity Sampling Schemes in an Idealized Meandering Stream Model
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.
2017-12-01
Hydraulic conductivity (K) is an important parameter affecting the flow of water through sediments under streams, which can vary by orders of magnitude within a stream reach. Measuring heterogeneous K distributions in the field is limited by time and resources. This study investigates hypothetical sampling practices within a modeling framework on a highly idealized meandering stream. We generated three sets of 100 hydraulic conductivity grids containing two sands with connectivity values of 0.02, 0.08, and 0.32. We investigated systems with twice as much fast (K=0.1 cm/s) sand as slow sand (K=0.01 cm/s) and the reverse ratio on the same grids. The K values did not vary with depth. For these 600 cases, we calculated the homogeneous K value, Keq, that would yield the same flux into the sediments as the corresponding heterogeneous grid. We then investigated sampling schemes with six weighted probability distributions derived from the homogeneous case: uniform, flow-paths, velocity, in-stream, flux-in, and flux-out. For each grid, we selected locations from these distributions and compared the arithmetic, geometric, and harmonic means of these lists to the corresponding Keq using the root-mean-square deviation. We found that arithmetic averaging of samples outperformed geometric or harmonic means for all sampling schemes. Of the sampling schemes, flux-in (sampling inside the stream in an inward flux-weighted manner) yielded the least error and flux-out yielded the most error. All three sampling schemes outside of the stream yielded very similar results. Grids with lower connectivity values (fewer and larger clusters) showed the most sensitivity to the choice of sampling scheme, and thus improved the most with the flux-in sampling. We also explored the relationship between the number of samples taken and the resulting error. Increasing the number of sampling points reduced error for the arithmetic mean with diminishing returns, but did not substantially reduce error associated with geometric and harmonic means.
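The bookkeeping in this kind of comparison (draw sample locations under a chosen probability weighting, summarize them with arithmetic, geometric, and harmonic means, and score against Keq with the root-mean-square deviation) can be sketched as below. The grid, the uniform weighting, and the placeholder Keq are illustrative assumptions; in the study itself Keq comes from a groundwater flow model rather than being fixed by hand.

```python
import numpy as np
from scipy.stats import gmean, hmean

def sample_means(k_grid, weights, n_samples, rng):
    """Draw sample locations with the given probability weights and summarize K."""
    flat_k, p = k_grid.ravel(), weights.ravel() / weights.sum()
    vals = flat_k[rng.choice(flat_k.size, size=n_samples, replace=False, p=p)]
    return {"arithmetic": vals.mean(), "geometric": gmean(vals), "harmonic": hmean(vals)}

rng = np.random.default_rng(5)
fast, slow = 0.1, 0.01                                        # the two sands, cm/s
k_grid = np.where(rng.random((50, 50)) < 2 / 3, fast, slow)   # roughly 2:1 fast:slow

# Keq would come from matching the flux of the heterogeneous grid with a flow model;
# the value here is a placeholder purely to demonstrate the RMSD bookkeeping.
k_eq = 0.055
uniform_w = np.ones_like(k_grid)
errors = [sample_means(k_grid, uniform_w, 20, rng)["arithmetic"] - k_eq
          for _ in range(100)]
print("RMSD of arithmetic mean vs placeholder Keq: %.4f cm/s"
      % np.sqrt(np.mean(np.square(errors))))
```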
Statistical Analysis for Collision-free Boson Sampling.
Huang, He-Liang; Zhong, Han-Sen; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Wang, Xiang; Bao, Wan-Su
2017-11-10
Boson sampling is strongly believed to be intractable for classical computers but solvable with photons in linear optics, which has attracted widespread attention as a rapid way to demonstrate quantum supremacy. However, because its solution is mathematically unverifiable, certifying the experimental results is a major difficulty in boson sampling experiments. Here, we develop a statistical analysis scheme to experimentally certify collision-free boson sampling. Numerical simulations are performed to show the feasibility and practicability of our scheme, and the effects of realistic experimental conditions are also considered, demonstrating that our proposed scheme is experimentally friendly. Moreover, our broad approach is expected to be generally applicable to investigating multi-particle coherent dynamics beyond boson sampling.
Moore, R.B.; Medalie, Laura
1995-01-01
Stratified-drift aquifers discontinuously underlie 152.5 square miles of the Saco and Ossipee River Basins, which have a total drainage area of 869.4 square miles. Saturated thicknesses of stratified drift in the study area are locally greater than 280 feet, but generally are less. Transmissivity locally exceeds 8,000 feet squared per day but is generally less. About 93.6 square miles, or 10.8 percent of the study area, are identified as having transmissivity greater than 1,000 feet squared per day. The stratified-drift aquifer in Ossipee, Freedom, Effingham, Madison, and Tamworth was analyzed for the availability of ground water by use of transient simulations and a two-dimensional, finite-difference ground-water-flow model. The numerical-model results indicate that potential available water amounts in this aquifer are 7.72 million gallons per day. Sample results of water-quality analyses obtained from 25 test wells and 4 springs indicated that water was generally suitable for drinking and other domestic purposes. Concentrations of dissolved constituents in ground-water samples are less than or meet U.S. Environmental Protection Agency (USEPA) primary and secondary drinking-water regulations. Concentrations of inorganic constituents that exceeded the USEPA's secondary regulations were chloride, sodium, iron, manganese, and fluoride.
Public attitudes toward stuttering in Turkey: probability versus convenience sampling.
Ozdemir, R Sertan; St Louis, Kenneth O; Topbaş, Seyhun
2011-12-01
A Turkish translation of the Public Opinion Survey of Human Attributes-Stuttering (POSHA-S) was used to compare probability versus convenience sampling to measure public attitudes toward stuttering. A convenience sample of adults in Eskişehir, Turkey was compared with two replicates of a school-based, probability cluster sampling scheme. The two replicates of the probability sampling scheme yielded similar demographic samples, both of which were different from the convenience sample. Components of subscores on the POSHA-S were significantly different in more than half of the comparisons between convenience and probability samples, indicating important differences in public attitudes. If POSHA-S users intend to generalize to specific geographic areas, results of this study indicate that probability sampling is a better research strategy than convenience sampling. The reader will be able to: (1) discuss the difference between convenience sampling and probability sampling; (2) describe a school-based probability sampling scheme; and (3) describe differences in POSHA-S results from convenience sampling versus probability sampling. Copyright © 2011 Elsevier Inc. All rights reserved.
An Overview of Recent Advances in Event-Triggered Consensus of Multiagent Systems.
Ding, Lei; Han, Qing-Long; Ge, Xiaohua; Zhang, Xian-Ming
2018-04-01
Event-triggered consensus of multiagent systems (MASs) has attracted tremendous attention from both theoretical and practical perspectives because it enables all agents eventually to reach an agreement upon a common quantity of interest while significantly reducing the use of communication and computation resources. This paper aims to provide an overview of recent advances in event-triggered consensus of MASs. First, a basic framework of multiagent event-triggered operational mechanisms is established. Second, representative results and methodologies reported in the literature are reviewed and some in-depth analysis is made on several event-triggered schemes, including event-based sampling schemes, model-based event-triggered schemes, sampled-data-based event-triggered schemes, and self-triggered sampling schemes. Third, two examples are outlined to show the applicability of event-triggered consensus in power sharing of microgrids and formation control of multirobot systems, respectively. Finally, some challenging issues on event-triggered consensus are proposed for future research.
A random spatial sampling method in a rural developing nation
Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas
2014-01-01
Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...
Representativeness-based sampling network design for the State of Alaska
Forrest M. Hoffman; Jitendra Kumar; Richard T. Mills; William W. Hargrove
2013-01-01
Resource and logistical constraints limit the frequency and extent of environmental observations, particularly in the Arctic, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent environmental variability at desired scales. A quantitative methodology for stratifying sampling domains, informing site selection,...
CTEPP STANDARD OPERATING PROCEDURE FOR SAMPLE SELECTION (SOP-1.10)
The procedures for selecting CTEPP study subjects are described in the SOP. The primary, county-level stratification is by region and urbanicity. Six sample counties in each of the two states (North Carolina and Ohio) are selected using stratified random sampling and reflect ...
Sampling High-Altitude and Stratified Mating Flights of Red Imported Fire Ant
USDA-ARS?s Scientific Manuscript database
With the exception of an airplane equipped with nets, no method has been developed that successfully samples red imported fire ant, Solenopsis invicta Buren, sexuals in mating/dispersal flights throughout their potential altitudinal trajectories. We developed and tested a method for sampling queens ...
Access to environmental resources and physical activity levels of adults in Hawaii.
Geller, Karly S; Nigg, Claudio R; Ollberding, Nicholas J; Motl, Robert W; Horwath, Caroline; Dishman, Rodney K
2015-03-01
Examine associations between physical activity (PA) and spatial accessibility to environmental PA resources in Hawaii. Metabolic equivalents (METs) of mild, moderate, and strenuous PA were compared with spatial accessibility to environmental PA resources within a population-based sample of Hawaiian adults (n = 381). Multiple linear regression estimated differences in PA levels for residing further from a PA resource or residing in an area with a greater number of resources. No associations were found in the total sample. Analyses within subsamples stratified by ethnicity revealed that greater spatial accessibility to a PA resource was positively associated with strenuous PA among Caucasians (P = .04) but negatively associated with moderate PA among Native Hawaiians (P = .00). The lack of association in the total sample may be a consequence of Hawaii's unique environment. Results of stratified sample analyses are unique, providing groundwork for future examinations within parallel environments and among similar ethnic groups. © 2012 APJPH.
Optimal updating magnitude in adaptive flat-distribution sampling
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Drake, Justin A.; Ma, Jianpeng; Pettitt, B. Montgomery
2017-11-01
We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
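The single-bin update with the inverse-time schedule can be sketched on a toy one-dimensional landscape: the bias is incremented at the visited bin by the current updating magnitude, which is capped by n_bins/t once that becomes the smaller value. This bare-bones illustration uses invented parameters (toy potential, bin count, step count), omits the histogram-flatness staging of a standard Wang-Landau run, and is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(6)

n_bins = 50
x_grid = np.linspace(-2.0, 2.0, n_bins)
pmf = 4.0 * x_grid ** 2                    # toy potential of mean force (in kT)

bias = np.zeros(n_bins)                    # adaptive bias; should converge to pmf
lnf = 1.0                                  # updating magnitude
x = n_bins // 2

for t in range(1, 200_001):
    # Metropolis move on the biased landscape pmf - bias
    xp = (x + rng.integers(-1, 2)) % n_bins
    if rng.random() < np.exp((pmf[x] - bias[x]) - (pmf[xp] - bias[xp])):
        x = xp
    bias[x] += lnf                         # single-bin (WL-style) update
    lnf = min(lnf, n_bins / t)             # inverse-time schedule, lnf ~ n_bins/t

err = (bias - pmf) - (bias - pmf).mean()   # bias matches pmf up to a constant
print("max |bias - pmf| (up to a constant): %.3f kT" % np.abs(err).max())
```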
A comparison of two sampling designs for fish assemblage assessment in a large river
Kiraly, Ian A.; Coghlan, Stephen M.; Zydlewski, Joseph D.; Hayes, Daniel
2014-01-01
We compared the efficiency of stratified random and fixed-station sampling designs to characterize fish assemblages in anticipation of dam removal on the Penobscot River, the largest river in Maine. We used boat electrofishing methods in both sampling designs. Multiple 500-m transects were selected randomly and electrofished in each of nine strata within the stratified random sampling design. Within the fixed-station design, up to 11 transects (1,000 m) were electrofished, all of which had been sampled previously. In total, 88 km of shoreline were electrofished during summer and fall in 2010 and 2011, and 45,874 individuals of 34 fish species were captured. Species-accumulation and dissimilarity curve analyses indicated that all sampling effort, other than fall 2011 under the fixed-station design, provided repeatable estimates of total species richness and proportional abundances. Overall, our sampling designs were similar in precision and efficiency for sampling fish assemblages. The fixed-station design was negatively biased for estimating the abundance of species such as Common Shiner Luxilus cornutus and Fallfish Semotilus corporalis and was positively biased for estimating biomass for species such as White Sucker Catostomus commersonii and Atlantic Salmon Salmo salar. However, we found no significant differences between the designs for proportional catch and biomass per unit effort, except in fall 2011. The difference observed in fall 2011 was due to limitations on the number and location of fixed sites that could be sampled, rather than an inherent bias within the design. Given the results from sampling in the Penobscot River, application of the stratified random design is preferable to the fixed-station design due to less potential for bias caused by varying sampling effort, such as what occurred in the fall 2011 fixed-station sample or due to purposeful site selection.
Parallel Computing of Upwelling in a Rotating Stratified Flow
NASA Astrophysics Data System (ADS)
Cui, A.; Street, R. L.
1997-11-01
A code for three-dimensional, unsteady, incompressible, turbulent flow has been implemented on the IBM SP2, using message passing. The effects of rotation and variable density are included. A finite volume method is used to discretize the Navier-Stokes equations in general curvilinear coordinates on a non-staggered grid. All the spatial derivatives are approximated using second-order central differences with the exception of the convection terms, which are handled with special upwind-difference schemes. The semi-implicit, second-order accurate, time-advancement scheme employs the Adams-Bashforth method for the explicit terms and Crank-Nicolson for the implicit terms. A multigrid method, with the four-color ZEBRA as smoother, is used to solve the Poisson equation for pressure, while the momentum equations are solved with an approximate factorization technique. The code was successfully validated for a variety of test cases. Simulations of a laboratory model of coastal upwelling in a rotating annulus are in progress and will be presented.
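As a schematic illustration of the semi-implicit time advancement named above (Adams-Bashforth for the explicit terms, Crank-Nicolson for the implicit ones), the sketch below applies the same splitting to a linear one-dimensional advection-diffusion equation with central differences on a periodic grid. It is a toy analogue only: the actual code is finite-volume, curvilinear, and uses upwind convection schemes, and every parameter here is invented.

```python
import numpy as np

# Toy 1D advection-diffusion u_t + c u_x = nu u_xx with AB2 (advection, explicit)
# and Crank-Nicolson (diffusion, implicit); periodic central differences.
n, c, nu = 128, 1.0, 0.01
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.2 * dx / c

e = np.eye(n)
D1 = (np.roll(e, 1, axis=1) - np.roll(e, -1, axis=1)) / (2.0 * dx)   # d/dx
D2 = (np.roll(e, 1, axis=1) - 2.0 * e + np.roll(e, -1, axis=1)) / dx ** 2

u = np.exp(-4.0 * (x - np.pi) ** 2)          # initial Gaussian pulse
adv_old = -c * (D1 @ u)
lhs = e - 0.5 * dt * nu * D2                 # constant Crank-Nicolson matrix

for _ in range(500):
    adv = -c * (D1 @ u)                      # explicit advection term
    rhs = u + dt * (1.5 * adv - 0.5 * adv_old) + 0.5 * dt * nu * (D2 @ u)
    u = np.linalg.solve(lhs, rhs)
    adv_old = adv

print("pulse range after 500 steps:", float(u.min()), float(u.max()))
```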
Direct numerical simulation of the sea flows around blunt bodies
NASA Astrophysics Data System (ADS)
Matyushin, Pavel V.; Gushchin, Valentin A.
2015-11-01
The aim of the present paper is to demonstrate the opportunities of mathematical modeling of separated flows of sea water around blunt bodies on the basis of the Navier-Stokes equations (NSE) in the Boussinesq approximation. The 3D density-stratified incompressible viscous fluid flows around a sphere have been investigated by means of direct numerical simulation (DNS) on supercomputers and visualization of the 3D vortex structures in the wake. To solve the NSE, the Splitting on physical factors Method for Incompressible Fluid flows (SMIF), with a hybrid explicit finite-difference scheme (second-order accuracy in space, minimal scheme viscosity and dispersion, monotone, and capable of working over a wide range of Reynolds (Re) and internal Froude (Fr) numbers), has been developed and successfully applied. The different transitions in sphere wakes with increasing Re (10 < Re < 500) and decreasing Fr (0.005 < Fr < 100) have been investigated in detail. Thus the classifications of the viscous fluid flow regimes around a sphere have been refined.
A cross-sectional investigation of the quality of selected medicines in Cambodia in 2010
2014-01-01
Background Access to good-quality medicines in many countries is largely hindered by the rampant circulation of spurious/falsely labeled/falsified/counterfeit (SFFC) and substandard medicines. In 2006, the Ministry of Health of Cambodia, in collaboration with Kanazawa University, Japan, initiated a project to combat SFFC medicines. Methods To assess the quality of medicines and prevalence of SFFC medicines among selected products, a cross-sectional survey was carried out in Cambodia. Cefixime, omeprazole, co-trimoxazole, clarithromycin, and sildenafil were selected as candidate medicines. These medicines were purchased from private community drug outlets in the capital, Phnom Penh, and Svay Rieng and Kandal provinces through a stratified random sampling scheme in July 2010. Results In total, 325 medicine samples were collected from 111 drug outlets. Non-licensed outlets were more commonly encountered in rural than in urban areas (p < 0.01). Of all the samples, 93.5% were registered and 80% were foreign products. Samples without registration numbers were found more frequently among foreign-manufactured products than in domestic ones (p < 0.01). According to pharmacopeial analytical results, 14.5%, 4.6%, and 24.6% of the samples were unacceptable in quantity, content uniformity, and dissolution test, respectively. All the ultimately unacceptable samples in the content uniformity tests were of foreign origin. Following authenticity investigations conducted with the respective manufacturers and medicine regulatory authorities, an unregistered product of cefixime collected from a pharmacy was confirmed as an SFFC medicine. However, the sample was acceptable in quantity, content uniformity, and dissolution test. Conclusions The results of this survey indicate that medicine counterfeiting is not limited to essential medicines in Cambodia: newer-generation medicines are also targeted. Concerted efforts by both domestic and foreign manufacturers, wholesalers, retailers, and regulatory authorities should help improve the quality of medicines. PMID:24593851
Zhang, Peng; Gao, Chunshi; Li, Zhijun; Lv, Xin; Song, Yuanyuan; Yu, Yaqin; Li, Bo
2016-01-01
Objectives This study aims to estimate the prevalence of overweight and obesity and determine potential influencing factors among adults in northeast China. Methods A cross-sectional survey was conducted in Jilin Province, northeast China, in 2012. A total of 9873 men and 10 966 women aged 18–79 years from the general population were included using a multistage stratified random cluster sampling design. Data were obtained from face-to-face interview and physical examination. After being weighted according to a complex sampling scheme, the sample was used to estimate the prevalence of overweight (body mass index (BMI) 24–27.9 kg/m2) and obesity (BMI >28 kg/m2) in Jilin Province, and analyse influencing factors through corresponding statistical methods based on complex sampling design behaviours. Results The overall prevalence of overweight was 32.3% (male 34.3%; female 30.2%), and the prevalence of obesity was 14.6% (male 16.3%; female 12.8%) in Jilin Province. The prevalence of both overweight and obesity were higher in men than women (p<0.001). Influencing factors included sex, age, marriage status, occupation, smoking, drinking, diet and hours of sleep (p<0.05). Conclusions This study estimated that the prevalence of overweight and obesity among adult residents of Jilin Province, northeast China, were high. The results of this study will be submitted to the Health Department of Jilin Province and other relevant departments as a reference, which should inform policy makers in developing education and publicity to prevent and control the occurrence of overweight and obesity. PMID:27456326
Wang, Rui; Zhang, Peng; Gao, Chunshi; Li, Zhijun; Lv, Xin; Song, Yuanyuan; Yu, Yaqin; Li, Bo
2016-07-25
This study aims to estimate the prevalence of overweight and obesity and determine potential influencing factors among adults in northeast China. A cross-sectional survey was conducted in Jilin Province, northeast China, in 2012. A total of 9873 men and 10 966 women aged 18-79 years from the general population were included using a multistage stratified random cluster sampling design. Data were obtained from face-to-face interview and physical examination. After being weighted according to a complex sampling scheme, the sample was used to estimate the prevalence of overweight (body mass index (BMI) 24-27.9 kg/m(2)) and obesity (BMI >28 kg/m(2)) in Jilin Province, and analyse influencing factors through corresponding statistical methods based on complex sampling design behaviours. The overall prevalence of overweight was 32.3% (male 34.3%; female 30.2%), and the prevalence of obesity was 14.6% (male 16.3%; female 12.8%) in Jilin Province. The prevalence of both overweight and obesity were higher in men than women (p<0.001). Influencing factors included sex, age, marriage status, occupation, smoking, drinking, diet and hours of sleep (p<0.05). This study estimated that the prevalence of overweight and obesity among adult residents of Jilin Province, northeast China, were high. The results of this study will be submitted to the Health Department of Jilin Province and other relevant departments as a reference, which should inform policy makers in developing education and publicity to prevent and control the occurrence of overweight and obesity. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Estimating the occupancy of spotted owl habitat areas by sampling and adjusting for bias
David L. Azuma; James A. Baldwin; Barry R. Noon
1990-01-01
A basic sampling scheme is proposed to estimate the proportion of sampled units (Spotted Owl Habitat Areas (SOHAs) or randomly sampled 1000-acre polygon areas (RSAs)) occupied by spotted owl pairs. A bias adjustment for the possibility of missing a pair given its presence on a SOHA or RSA is suggested. The sampling scheme is based on a fixed number of visits to a...
A Noise-Filtered Under-Sampling Scheme for Imbalanced Classification.
Kang, Qi; Chen, XiaoShuang; Li, SiSi; Zhou, MengChu
2017-12-01
Under-sampling is a popular data preprocessing method in dealing with class imbalance problems, with the purposes of balancing datasets to achieve a high classification rate and avoiding the bias toward majority class examples. It always uses the full minority data in a training dataset. However, some noisy minority examples may reduce the performance of classifiers. In this paper, a new under-sampling scheme is proposed by incorporating a noise filter before executing resampling. In order to verify its efficiency, this scheme is implemented based on four popular under-sampling methods, i.e., Undersampling + Adaboost, RUSBoost, UnderBagging, and EasyEnsemble, through benchmarks and significance analysis. Furthermore, this paper also summarizes the relationship between algorithm performance and imbalanced ratio. Experimental results indicate that the proposed scheme can improve the original undersampling-based methods with significance in terms of three popular metrics for imbalanced classification, i.e., the area under the curve (AUC), F-measure, and G-mean.
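The general idea of filtering noise before under-sampling can be illustrated compactly: drop minority examples whose nearest neighbours are all majority-class points, then randomly under-sample the majority class to the size of the cleaned minority class. This is a generic sketch under assumed choices (k-NN noise rule, equal class sizes after resampling), not the specific filter or the four ensemble methods evaluated in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors

def noise_filtered_undersample(X, y, minority=1, k=5, rng=None):
    """Drop minority points whose k nearest neighbours are all majority-class,
    then randomly under-sample the majority class to the cleaned minority size."""
    rng = np.random.default_rng(rng)
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    neigh_labels = y[idx[:, 1:]]                       # exclude the point itself
    is_min = y == minority
    noisy = is_min & (neigh_labels != minority).all(axis=1)
    keep_min = np.where(is_min & ~noisy)[0]
    keep_maj = rng.choice(np.where(~is_min)[0], size=keep_min.size, replace=False)
    keep = np.concatenate([keep_min, keep_maj])
    return X[keep], y[keep]

# toy usage on a roughly 10:1 imbalanced dataset with label noise
X, y = make_classification(n_samples=2200, weights=[10 / 11], flip_y=0.05,
                           random_state=0)
Xb, yb = noise_filtered_undersample(X, y, rng=1)
print("class counts after resampling:", np.bincount(yb))
```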
Optical sectioning in induced coherence tomography with frequency-entangled photons
NASA Astrophysics Data System (ADS)
Vallés, Adam; Jiménez, Gerard; Salazar-Serrano, Luis José; Torres, Juan P.
2018-02-01
We demonstrate a different scheme to perform optical sectioning of a sample based on the concept of induced coherence [Zou et al., Phys. Rev. Lett. 67, 318 (1991), 10.1103/PhysRevLett.67.318]. This can be viewed as a different type of optical coherence tomography scheme where the varying reflectivity of the sample along the direction of propagation of an optical beam translates into changes of the degree of first-order coherence between two beams. As a practical advantage the scheme allows probing the sample with one wavelength and measuring photons with another wavelength. In a bio-imaging scenario, this would result in a deeper penetration into the sample because of probing with longer wavelengths, while still using the optimum wavelength for detection. The scheme proposed here could achieve submicron axial resolution by making use of nonlinear parametric sources with broad spectral bandwidth emission.
NASA Astrophysics Data System (ADS)
Hamdan, L. J.; Sikaroodi, M.; Coffin, R. B.; Gillevet, P. M.
2010-12-01
A culture-independent phylogenetic study of microbial communities in water samples and sediment cores recovered from the Beaufort Sea slope east of Point Barrow, Alaska was conducted. The goal of the work was to describe community composition in sediment and water samples and determine the influence of local environmental conditions on microbial populations. Archaeal and bacterial community composition was studied using length heterogeneity-polymerase chain reaction (LH-PCR) and multitag pyrosequencing (MTPS). Sediment samples were obtained from three piston cores on the slope (~1000 m depth) arrayed along an east-west transect and one core from a depth of approximately 2000 m. Discrete water samples were obtained using a CTD-rosette from three locations adjacent to piston core sites. Water samples were selected at three discrete depths within a vertically stratified (density) water column. The microbial community in near-surface waters was distinct from the community observed in deeper stratified layers of the water column. Multidimensional scaling analysis (MDS) revealed that water samples from mid and deep stratified layers bore high similarity to communities in cores collected in close proximity. Overall, the highest diversity (bacteria and archaea) was observed in a core which had elevated methane concentration relative to other locations. Geochemical (e.g., bulk organic and inorganic carbon pools, nutrients, metabolites) and physical data (e.g., depth, water content) were used to reveal the abiotic factors structuring microbial communities. The analysis indicates that sediment water content (porosity) and inorganic carbon concentration are the most significant structuring elements of Beaufort shelf sedimentary microbial communities.
Scott, J.C.
1990-01-01
Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
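The selection logic described above reduces to drawing a fixed number of sites at random within each category of areal subsets. A minimal sketch follows, with illustrative names and data rather than the report's actual software.

```python
import random

def select_sites(sites, n_per_category, seed=0):
    """sites: list of (site_id, category) pairs; n_per_category: dict mapping
    category -> number of sites to select. Sites are selected from one
    category at a time, at random and without replacement."""
    rng = random.Random(seed)
    chosen = []
    for cat, n in n_per_category.items():
        pool = [sid for sid, c in sites if c == cat]
        chosen.extend(rng.sample(pool, min(n, len(pool))))
    return chosen

# Example with hypothetical wells tagged by a land-use category
wells = [("W1", "urban"), ("W2", "urban"), ("W3", "agricultural"),
         ("W4", "agricultural"), ("W5", "forest")]
print(select_sites(wells, {"urban": 1, "agricultural": 2, "forest": 1}))
```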
Rein, David B
2005-01-01
Objective To stratify traditional risk-adjustment models by health severity classes in a way that is empirically based, is accessible to policy makers, and improves predictions of inpatient costs. Data Sources Secondary data created from the administrative claims from all 829,356 children aged 21 years and under enrolled in Georgia Medicaid in 1999. Study Design A finite mixture model was used to assign child Medicaid patients to health severity classes. These class assignments were then used to stratify both portions of a traditional two-part risk-adjustment model predicting inpatient Medicaid expenditures. Traditional model results were compared with the stratified model using actuarial statistics. Principal Findings The finite mixture model identified four classes of children: a majority healthy class and three illness classes with increasing levels of severity. Stratifying the traditional two-part risk-adjustment model by health severity classes improved its R2 from 0.17 to 0.25. The majority of additional predictive power resulted from stratifying the second part of the two-part model. Further, the preference for the stratified model was unaffected by months of patient enrollment time. Conclusions Stratifying health care populations based on measures of health severity is a powerful method to achieve more accurate cost predictions. Insurers who ignore the predictive advances of sample stratification in setting risk-adjusted premiums may create strong financial incentives for adverse selection. Finite mixture models provide an empirically based, replicable methodology for stratification that should be accessible to most health care financial managers. PMID:16033501
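As a rough illustration of the two-stage idea (class assignment followed by a class-stratified two-part cost model), the sketch below assumes a Gaussian mixture on log-transformed total cost to define severity classes and a logistic-plus-linear two-part model within each class; the study's actual finite mixture specification and covariates are not given in the abstract, so every modelling choice here is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression, LinearRegression

def stratified_two_part(X, total_cost, inpatient_cost, n_classes=4, seed=0):
    """X: numpy array of covariates; total_cost, inpatient_cost: numpy arrays.
    Assigns severity classes from a Gaussian mixture on log(1 + total cost),
    then fits a two-part model (any inpatient use; log cost given use) per class.
    Assumes each class contains both users and non-users of inpatient care."""
    z = GaussianMixture(n_components=n_classes, random_state=seed).fit_predict(
        np.log1p(total_cost).reshape(-1, 1))
    models = {}
    for c in range(n_classes):
        m = z == c
        any_use = (inpatient_cost[m] > 0).astype(int)
        part1 = LogisticRegression(max_iter=1000).fit(X[m], any_use)
        pos = m & (inpatient_cost > 0)
        part2 = LinearRegression().fit(X[pos], np.log(inpatient_cost[pos])) if pos.any() else None
        models[c] = (part1, part2)
    return z, models
```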
Progressive compressive imager
NASA Astrophysics Data System (ADS)
Evladov, Sergei; Levi, Ofer; Stern, Adrian
2012-06-01
We have designed and built a working automatic progressive sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to progressively add information, resulting in a tradeoff between compression and the quality of reconstruction. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of the gradual addition of samples is seen when the sparsity rate of the object, and thus the number of measurements needed, is unknown in advance. We have developed the iterative algorithm OSO (Ordered Sets Optimization), which employs our sampling scheme to create nearly uniformly distributed sets of samples and allows the reconstruction of megapixel images. We present good-quality reconstructions at compression ratios of 1:20.
NASA Astrophysics Data System (ADS)
Liao, Kaihua; Zhou, Zhiwen; Lai, Xiaoming; Zhu, Qing; Feng, Huihui
2017-04-01
The identification of representative soil moisture sampling sites is important for the validation of remotely sensed mean soil moisture in a certain area and for ground-based soil moisture measurements in catchment or hillslope hydrological studies. Numerous approaches have been developed to identify optimal sites for predicting mean soil moisture. Each method has certain advantages and disadvantages, but they have rarely been evaluated and compared. In our study, surface (0-20 cm) soil moisture data from January 2013 to March 2016 (a total of 43 sampling days) were collected at 77 sampling sites on a mixed land-use (tea and bamboo) hillslope in the hilly area of Taihu Lake Basin, China. A total of 10 methods (temporal stability (TS) analyses based on 2 indices, K-means clustering based on 6 kinds of inputs, and 2 random sampling strategies) were evaluated for determining optimal sampling sites for mean soil moisture estimation. They were TS analyses based on the smallest index of temporal stability (ITS, a combination of the mean relative difference and the standard deviation of relative difference (SDRD)) and on the smallest SDRD; K-means clustering based on soil properties and terrain indices (EFs), repeated soil moisture measurements (Theta), EFs plus one-time soil moisture data (EFsTheta), and the principal components derived from EFs (EFs-PCA), Theta (Theta-PCA), and EFsTheta (EFsTheta-PCA); and global and stratified random sampling strategies. Results showed that the TS analysis based on the smallest ITS was better (RMSE = 0.023 m³ m⁻³) than that based on the smallest SDRD (RMSE = 0.034 m³ m⁻³). The K-means clustering based on EFsTheta (-PCA) was better (RMSE <0.020 m³ m⁻³) than those based on EFs (-PCA) and Theta (-PCA). The sampling design stratified by land use was more efficient than the global random method. Forty and 60 sampling sites were needed for stratified sampling and global sampling, respectively, to make their performances comparable to the best K-means method (EFsTheta-PCA). Overall, TS required only one site, but its accuracy was limited. The best K-means method required <8 sites and yielded high accuracy, but extra soil and terrain information is necessary when using this method. The stratified sampling strategy can only be used if no pre-knowledge about soil moisture variation is available. This information will help in selecting the optimal methods for estimating the area-mean soil moisture.
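The temporal stability screening mentioned above ranks sites by indices derived from the relative difference between each site and the field mean. A minimal sketch follows, assuming the common formulation ITS = sqrt(MRD² + SDRD²); the exact index definition used in the study is not stated in the abstract.

```python
import numpy as np

def temporal_stability(theta):
    """theta: array of shape (n_times, n_sites) of soil moisture observations.
    Returns per-site MRD, SDRD and ITS = sqrt(MRD^2 + SDRD^2) (a common
    formulation, assumed here)."""
    field_mean = theta.mean(axis=1, keepdims=True)      # spatial mean on each date
    rel_diff = (theta - field_mean) / field_mean
    mrd = rel_diff.mean(axis=0)                         # mean relative difference
    sdrd = rel_diff.std(axis=0, ddof=1)                 # its standard deviation
    its = np.sqrt(mrd**2 + sdrd**2)
    return mrd, sdrd, its

# the representative site under the ITS criterion is the one with argmin(its)
```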
NASA Astrophysics Data System (ADS)
Zhang, K.; Gasiewski, A. J.
2017-12-01
A horizontally inhomogeneous unified microwave radiative transfer (HI-UMRT) model based upon a nonspherical hydrometeor scattering model is being developed at the University of Colorado at Boulder to facilitate forward radiative simulations for 3-dimensionally inhomogeneous clouds in severe weather. The HI-UMRT 3-D analytical solution is based on incorporating a planar-stratified 1-D UMRT algorithm within a horizontally inhomogeneous iterative perturbation scheme. Single-scattering parameters are computed using the Discrete Dipole Scattering (DDSCAT v7.3) program for hundreds of carefully selected nonspherical complex frozen hydrometeors from the NASA/GSFC DDSCAT database. The required analytic factorization symmetry of transition matrix in a normalized RT equation was analytically proved and validated numerically using the DDSCAT-based full Stokes matrix of randomly oriented hydrometeors. The HI-UMRT model thus inherits the properties of unconditional numerical stability, efficiency, and accuracy from the UMRT algorithm and provides a practical 3-D two-Stokes parameter radiance solution with Jacobian to be used within microwave retrievals and data assimilation schemes. In addition, a fast forward radar reflectivity operator with Jacobian based on DDSCAT backscatter efficiency computed for large hydrometeors is incorporated into the HI-UMRT model to provide applicability to active radar sensors. The HI-UMRT will be validated strategically at two levels: 1) intercomparison of brightness temperature (Tb) results with those of several 1-D and 3-D RT models, including UMRT, CRTM and Monte Carlo models, 2) intercomparison of Tb with observed data from combined passive and active spaceborne sensors (e.g. GPM GMI and DPR). The precise expression for determining the required number of 3-D iterations to achieve an error bound on the perturbation solution will be developed to facilitate the numerical verification of the HI-UMRT code complexity and computation performance.
NASA Astrophysics Data System (ADS)
Fisher, A. W.; Sanford, L. P.; Scully, M. E.; Suttles, S. E.
2016-02-01
Enhancement of wind-driven mixing by Langmuir turbulence (LT) may have important implications for exchanges of mass and momentum in estuarine and coastal waters, but the transient nature of LT and observational constraints make quantifying its impact on vertical exchange difficult. Recent studies have shown that wind events can be of first-order importance to circulation and mixing in estuaries, prompting this investigation into the ability of second-moment turbulence closure schemes to model wind-wave enhanced mixing in an estuarine environment. An instrumented turbulence tower was deployed in the middle reaches of Chesapeake Bay in 2013 and collected observations of coherent structures consistent with LT that occurred under regions of breaking waves. Wave and turbulence measurements collected from a vertical array of Acoustic Doppler Velocimeters (ADVs) provided direct estimates of TKE, dissipation, turbulent length scale, and the surface wave field. Direct measurements of air-sea momentum and sensible heat fluxes were collected by a co-located ultrasonic anemometer deployed 3 m above the water surface. Analyses of the data indicate that the combined presence of breaking waves and LT significantly influences air-sea momentum transfer, enhancing vertical mixing and acting to align stress in the surface mixed layer in the direction of Lagrangian shear. Here these observations are compared to the predictions of commonly used second-moment turbulence closure schemes, modified to account for the influence of wave breaking and LT. LT parameterizations are evaluated under neutrally stratified conditions and buoyancy damping parameterizations are evaluated under stably stratified conditions. We compare predicted turbulent quantities to observations for a variety of wind, wave, and stratification conditions. The effects of fetch-limited wave growth, surface buoyancy flux, and tidal distortion on wave mixing parameterizations will also be discussed.
Zhao, Lei; Li, Songnan; Ma, Xiaohai; Greiser, Andreas; Zhang, Tianjing; An, Jing; Bai, Rong; Dong, Jianzeng; Fan, Zhanming
2016-03-15
T1 mapping enables assessment of myocardial characteristics. As the most common type of arrhythmia, atrial fibrillation (AF) is often accompanied by a variety of cardiac pathologies, whereby the irregular and usually rapid ventricle rate of AF may cause inaccurate T1 estimation due to mis-triggering and inadequate magnetization recovery. We hypothesized that systolic T1 mapping with a heart-rate-dependent (HRD) pulse sequence scheme may overcome this issue. 30 patients with AF and 13 healthy volunteers were enrolled and underwent cardiovascular magnetic resonance (CMR) at 3 T. CMR was repeated for 3 patients after electric cardioversion and for 2 volunteers after lowering heart rate (HR). A Modified Look-Locker Inversion Recovery (MOLLI) sequence was acquired before and 15 min after administration of 0.1 mmol/kg gadopentetate dimeglumine. For AF patients, both the fixed 5(3)3/4(1)3(1)2 and the HRD sampling scheme were performed at diastole and systole, respectively. The HRD pulse sequence sampling scheme was 5(n)3/4(n)3(n)2, where n was determined by the heart rate to ensure adequate magnetization recovery. Image quality of T1 maps was assessed. T1 times were measured in myocardium and blood. Extracellular volume fraction (ECV) was calculated. In volunteers with repeated T1 mapping, the myocardial native T1 and ECV generated from the 1st fixed sampling scheme were smaller than from the 1st HRD and 2nd fixed sampling scheme. In healthy volunteers, the overall native T1 times and ECV of the left ventricle (LV) in diastolic T1 maps were greater than in systolic T1 maps (P < 0.01, P < 0.05). In the 3 AF patients that had received electrical cardioversion therapy, the myocardial native T1 times and ECV generated from the fixed sampling scheme were smaller than in the 1st and 2nd HRD sampling scheme (all P < 0.05). In patients with AF (HR: 88 ± 20 bpm, HR fluctuation: 12 ± 9 bpm), more T1 maps with artifact were found in diastole than in systole (P < 0.01). The overall native T1 times and ECV of the left ventricle (LV) in diastolic T1 maps were greater than systolic T1 maps, either with fixed or HRD sampling scheme (all P < 0.05). Systolic MOLLI T1 mapping with heart-rate-dependent pulse sequence scheme can improve image quality and avoid T1 underestimation. It is feasible and with further validation may extend clinical applicability of T1 mapping to patients with atrial fibrillation.
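The abstract specifies only that the number of recovery heartbeats n in the 5(n)3/4(n)3(n)2 scheme is derived from the heart rate; the rule below is a hypothetical sketch that converts an assumed minimum recovery time into a pause of n beats, not the authors' actual formula.

```python
import math

def recovery_beats(heart_rate_bpm, min_recovery_s=3.0):
    """Hypothetical rule (not from the paper): pause for enough heartbeats that
    the pause covers an assumed minimum magnetization-recovery time."""
    rr_interval_s = 60.0 / heart_rate_bpm
    return max(1, math.ceil(min_recovery_s / rr_interval_s))

# e.g. at 60 bpm -> n = 3 (matching the fixed 5(3)3 scheme); at 100 bpm -> n = 5
```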
Wong, Martin C S; Lee, Albert; Sun, Jing; Stewart, Donald; Cheng, Frances F K; Kan, Wing; Ho, Mandy
2009-06-01
The WHO health promoting school (HPS) approach covers key areas including school-based programmes improving students' psychological health, but there have been few studies evaluating the resilience performance of these schools. This study compared the resilience scores between schools within the healthy school award (HSA) scheme (HPS group) and those not (non-HPS group). We conducted a cross-sectional survey of grade-one students (aged 12), all teachers and parents of mainstream secondary schools recruited by stratified random sampling in one large Territory of Hong Kong using validated resilience questionnaires during November-December 2005. Four non-HPS and four HPS secondary schools were recruited, respectively, involving 1408 students, 891 parents and 91 teachers, with similar baseline characteristics. The HPS students were found to have better scores than non-HPS students (average age 12.4 year-old in both groups) in all dimensions with significantly higher scores in 'Peer Support' (p = 0.013), 'Making a Difference' (p = 0.011), 'About Me' (p = 0.027) and 'Generally Happy' (p = 0.011). There was no difference in the scores between non-HPS and HPS parents. The HPS teachers reported significantly higher scores in 'Health Policies' (p = 0.023), 'Social Environment' (p = 0.049), 'School Community Relations' (p = 0.048), 'Personal Skills Building' (p = 0.008) and 'Partnership & Health Services' (p = 0.047). The secondary HPS students and teachers reported significantly higher resilience scores than those of non-HPS. This study shows that the HSA scheme under WHO has the potential to exert positive changes in students and teachers and the concept of HPS is effective in building resilience among major school stakeholders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bittner, Nathan; Merrick, Gregory S., E-mail: gmerrick@urologicresearchinstitute.or; Galbreath, Robert W.
2010-11-15
Purpose: Standard prostate biopsy schemes underestimate Gleason score in a significant percentage of cases. Extended biopsy improves diagnostic accuracy and provides more reliable prognostic information. In this study, we tested the hypothesis that greater biopsy core number should result in improved treatment outcome through better tailoring of therapy. Methods and Materials: From April 1995 to May 2006, 1,613 prostate cancer patients were treated with permanent brachytherapy. Patients were divided into five groups stratified by the number of prostate biopsy cores (≤6, 7-9, 10-12, 13-20, and >20 cores). Biochemical progression-free survival (bPFS), cause-specific survival (CSS), and overall survival (OS) were evaluated as a function of core number. Results: The median patient age was 66 years, and the median preimplant prostate-specific antigen was 6.5 ng/mL. The overall 10-year bPFS, CSS, and OS were 95.6%, 98.3%, and 78.6%, respectively. When bPFS was analyzed as a function of core number, the 10-year bPFS for patients with >20, 13-20, 10-12, 7-9 and ≤6 cores was 100%, 100%, 98.3%, 95.8%, and 93.0% (p < 0.001), respectively. When evaluated by treatment era (1995-2000 vs. 2001-2006), the number of biopsy cores remained a statistically significant predictor of bPFS. On multivariate analysis, the number of biopsy cores was predictive of bPFS but did not predict for CSS or OS. Conclusion: Greater biopsy core number was associated with a statistically significant improvement in bPFS. Comprehensive regional sampling of the prostate may enhance diagnostic accuracy compared to a standard biopsy scheme, resulting in better tailoring of therapy.
Elbashir, Ahmed B; Abdelbagi, Azhari O; Hammad, Ahmed M A; Elzorgani, Gafar A; Laing, Mark D
2015-03-01
Ninety-six human blood samples were collected from six locations that represent areas of intensive pesticide use in Sudan, which included irrigated cotton schemes (Wad Medani, Hasaheesa, Elmanagil, and Elfaw) and sugarcane schemes (Kenana and Gunaid). Blood samples were analyzed for organochlorine pesticide residues by gas liquid chromatography (GLC) equipped with an electron capture detector (ECD). Residues of p,p'-dichlorodiphenyldichloroethylene (DDE), heptachlor epoxide, γ-HCH, and dieldrin were detected in blood from all locations surveyed. Aldrin was not detected in any of the samples analyzed, probably due to its conversion to dieldrin. The levels of total organochlorine burden detected were higher in the blood from people in the irrigated cotton schemes (mean 261 ng ml⁻¹, range 38-641 ng ml⁻¹) than in the blood of people from the irrigated sugarcane schemes (mean 204 ng ml⁻¹, range 59-365 ng ml⁻¹). The highest levels of heptachlor epoxide (170 ng ml⁻¹) and γ-HCH (92 ng ml⁻¹) were observed in blood samples from Hasaheesa, while the highest levels of DDE (618 ng ml⁻¹) and dieldrin (82 ng ml⁻¹) were observed in blood samples from Wad Medani and Kenana, respectively. The organochlorine levels in blood samples seemed to decrease with increasing distance from the old irrigated cotton schemes (Wad Medani, Hasaheesa, and Elmanagil) where the heavy application of these pesticides took place historically.
Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.
Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua
2016-09-05
In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate pulse peak sampling, which in turn introduces errors into parameter estimation. Subsequently, the system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme which consists of two parts, i.e., a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme may dramatically improve the data acquisition precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated by monitoring the change of the statistical power of the sampled data. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.
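As an illustration of the calibration idea only (not the authors' algorithm), one can scan candidate sampling delays and keep the delay that maximizes the statistical power of the sampled values, repeating the scan periodically to track drift; the acquire interface below is a hypothetical placeholder for the ADC front end.

```python
import numpy as np

def calibrate_delay(acquire, candidate_delays, n_pulses=200):
    """acquire(delay, n) -> numpy array of n sampled pulse values at that delay
    (user-supplied hardware interface; assumed here). The delay maximizing the
    statistical power (mean square) of the samples approximates the optimal
    pulse-peak sampling position."""
    powers = [np.mean(acquire(d, n_pulses) ** 2) for d in candidate_delays]
    return candidate_delays[int(np.argmax(powers))]
```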
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The use of the bulk ECa gradient as an exhaustive variable, known at any node of an interpolation grid, allowed the optimization of the sampling scheme, distinguishing among areas with different priority levels.
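A bare-bones sketch of spatial simulated annealing under the MMSD criterion is shown below: one sampling location is perturbed at a time, moves that reduce the mean nearest-sample distance over a fine evaluation grid are always accepted, and worse moves are accepted with a temperature-controlled probability. This is illustrative only and is not the MSANOS implementation.

```python
import numpy as np

def mmsd(samples, grid):
    """Mean, over grid points, of the distance to the nearest sample location."""
    d = np.linalg.norm(grid[:, None, :] - samples[None, :, :], axis=2)
    return d.min(axis=1).mean()

def anneal(samples, grid, bounds, n_iter=5000, t0=1.0, cooling=0.999, step=5.0, seed=0):
    """samples: (S, 2) initial locations; grid: (G, 2) evaluation points;
    bounds: (lower, upper) coordinate limits. Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    cur, cost, t = samples.copy(), mmsd(samples, grid), t0
    for _ in range(n_iter):
        cand = cur.copy()
        i = rng.integers(len(cand))
        cand[i] = np.clip(cand[i] + rng.normal(0, step, 2), bounds[0], bounds[1])
        new = mmsd(cand, grid)
        if new < cost or rng.random() < np.exp((cost - new) / t):
            cur, cost = cand, new        # accept improving (or occasionally worse) moves
        t *= cooling                     # geometric cooling law
    return cur, cost
```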
Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach
NASA Technical Reports Server (NTRS)
Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sampling unit sizes shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
Royle, J. Andrew; Converse, Sarah J.
2014-01-01
Capture–recapture studies are often conducted on populations that are stratified by space, time or other factors. In this paper, we develop a Bayesian spatial capture–recapture (SCR) modelling framework for stratified populations – when sampling occurs within multiple distinct spatial and temporal strata. We describe a hierarchical model that integrates distinct models for both the spatial encounter history data from capture–recapture sampling, and also for modelling variation in density among strata. We use an implementation of data augmentation to parameterize the model in terms of a latent categorical stratum or group membership variable, which provides a convenient implementation in popular BUGS software packages. We provide an example application to an experimental study involving small-mammal sampling on multiple trapping grids over multiple years, where the main interest is in modelling a treatment effect on population density among the trapping grids. Many capture–recapture studies involve some aspect of spatial or temporal replication that requires some attention to modelling variation among groups or strata. We propose a hierarchical model that allows explicit modelling of group or strata effects. Because the model is formulated for individual encounter histories and is easily implemented in the BUGS language and other free software, it also provides a general framework for modelling individual effects, such as are present in SCR models.
Internet Usage Habits and Cyberbullying Related Opinions of Secondary School Students
ERIC Educational Resources Information Center
Sentürk, Sener; Bayat, Seher
2016-01-01
The purpose of this research is to examine the internet usage habits of secondary school students and their awareness of cyberbullying in terms of different variables. The research sample, identified by the stratified sampling method (one of the probabilistic sampling methods), was formed of 559 students from two branches (of 56 branches in total) selected…
RECAL: A Computer Program for Selecting Sample Days for Recreation Use Estimation
D.L. Erickson; C.J. Liu; H. Ken Cordell; W.L. Chen
1980-01-01
Recreation Calendar (RECAL) is a computer program in PL/I for drawing a sample of days for estimating recreation use. With RECAL, a sampling period of any length may be chosen; simple random, stratified random, and factorial designs can be accommodated. The program randomly allocates days to strata and locations.
77 FR 2697 - Proposed Information Collection; Comment Request; Annual Services Report
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-19
... and from a sample of small- and medium-sized businesses selected using a stratified sampling procedure... be canvassed when the sample is re-drawn, while nearly all of the small- and medium-sized firms from...); Educational Services (NAICS 61); Health Care and Social Assistance (NAICS 62); Arts, Entertainment, and...
ERIC Educational Resources Information Center
Gelen, Ismail; Onay, Ihsan; Varol, Volkan
2014-01-01
The purpose of this study is to examine the efficiency of "Educational Club Practices", which have been part of the elementary school program since 2005-2006, by investigating students' attitudes toward them. The sample was selected in two steps: first, stratified sampling was employed, and then random sampling was…
Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling
ERIC Educational Resources Information Center
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah
2014-01-01
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
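Although the abstract is truncated before the method is described, the general idea of propensity score stratified sampling can be sketched as follows: estimate each population unit's propensity of belonging to the eligible recruitment pool from covariates, cut the scores into strata, and sample units from each stratum. All names and parameter values below are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ps_stratified_sample(X_pop, in_frame, n_strata=5, n_per_stratum=4, seed=0):
    """X_pop: covariates for all population units; in_frame: 1 if the unit is in
    the eligible recruitment pool, else 0. Returns indices of sampled units."""
    rng = np.random.default_rng(seed)
    ps = LogisticRegression(max_iter=1000).fit(X_pop, in_frame).predict_proba(X_pop)[:, 1]
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))          # stratum boundaries
    strata = np.clip(np.searchsorted(edges, ps, side="right") - 1, 0, n_strata - 1)
    picks = []
    for s in range(n_strata):
        pool = np.where((strata == s) & (in_frame == 1))[0]
        picks.extend(rng.choice(pool, size=min(n_per_stratum, len(pool)), replace=False))
    return np.array(picks)
```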
Cost-effectiveness of risk stratified followup after urethral reconstruction: a decision analysis.
Belsante, Michael J; Zhao, Lee C; Hudak, Steven J; Lotan, Yair; Morey, Allen F
2013-10-01
We propose a novel risk stratified followup protocol for use after urethroplasty and explore potential cost savings. Decision analysis was performed comparing a symptom based, risk stratified protocol for patients undergoing excision and primary anastomosis urethroplasty vs a standard regimen of close followup for urethroplasty. Model assumptions included that excision and primary anastomosis has a 94% success rate, 11% of patients with successful urethroplasty had persistent lower urinary tract symptoms requiring cystoscopic evaluation, patients in whom treatment failed undergo urethrotomy and patients with recurrence on symptom based surveillance have a delayed diagnosis requiring suprapubic tube drainage. The Nationwide Inpatient Sample from 2010 was queried to identify the number of urethroplasties performed per year in the United States. Costs were obtained based on Medicare reimbursement rates. The 5-year cost of a symptom based, risk stratified followup protocol is $430 per patient vs $2,827 per patient using standard close followup practice. An estimated 7,761 urethroplasties were performed in the United States in 2010. Assuming that 60% were excision and primary anastomosis, and with more than 5 years of followup, the risk stratified protocol was projected to yield an estimated savings of $11,165,130. Sensitivity analysis showed that the symptom based, risk stratified followup protocol was far more cost-effective than standard close followup in all settings. Less than 1% of patients would be expected to have an asymptomatic recurrence using the risk stratified followup protocol. A risk stratified, symptom based approach to urethroplasty followup would produce a significant reduction in health care costs while decreasing unnecessary followup visits, invasive testing and radiation exposure. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Numerical Solution of Dyson Brownian Motion and a Sampling Scheme for Invariant Matrix Ensembles
NASA Astrophysics Data System (ADS)
Li, Xingjie Helen; Menon, Govind
2013-12-01
The Dyson Brownian Motion (DBM) describes the stochastic evolution of N points on the line driven by an applied potential, a Coulombic repulsion and identical, independent Brownian forcing at each point. We use an explicit tamed Euler scheme to numerically solve the Dyson Brownian motion and sample the equilibrium measure for non-quadratic potentials. The Coulomb repulsion is too singular for the SDE to satisfy the hypotheses of rigorous convergence proofs for tamed Euler schemes (Hutzenthaler et al. in Ann. Appl. Probab. 22(4):1611-1641, 2012). Nevertheless, in practice the scheme is observed to be stable for time steps of O(1/N²) and to relax exponentially fast to the equilibrium measure with a rate constant of O(1) independent of N. Further, this convergence rate appears to improve with N in accordance with O(1/N) relaxation of local statistics of the Dyson Brownian motion. This allows us to use the Dyson Brownian motion to sample N×N Hermitian matrices from the invariant ensembles. The computational cost of generating M independent samples is O(MN⁴) with a naive scheme, and O(MN³ log N) when a fast multipole method is used to evaluate the Coulomb interaction.
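A minimal sketch of the tamed Euler iteration for the DBM is given below, assuming a quartic confining potential and one common normalization of the repulsion and noise terms; the paper's exact scalings are not stated in the abstract, so these are assumptions.

```python
import numpy as np

def dbm_tamed_euler(n=50, beta=2.0, dt=None, n_steps=20000, seed=0,
                    v_prime=lambda x: x**3 - x):   # quartic potential V(x) = x^4/4 - x^2/2 (assumed)
    """Tamed Euler sketch for Dyson Brownian motion with a non-quadratic potential."""
    rng = np.random.default_rng(seed)
    lam = np.sort(rng.normal(size=n))
    dt = dt if dt is not None else 1.0 / n**2       # O(1/N^2) step, as reported stable
    for _ in range(n_steps):
        diff = lam[:, None] - lam[None, :]
        np.fill_diagonal(diff, np.inf)              # exclude self-interaction (1/inf -> 0)
        drift = -v_prime(lam) + (1.0 / n) * np.sum(1.0 / diff, axis=1)
        tamed = drift / (1.0 + dt * np.abs(drift))  # taming bounds the drift increment
        lam = lam + tamed * dt + np.sqrt(2.0 * dt / (beta * n)) * rng.normal(size=n)
    return np.sort(lam)
```

The taming factor 1/(1 + Δt|drift|) is what keeps the step bounded when two points approach each other, which is the point of the Hutzenthaler-type scheme cited above.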
J. L. Willers; J. M. McKinion; J. N. Jenkins
2006-01-01
Simulation was employed to create stratified simple random samples of different sample unit sizes to represent tarnished plant bug abundance at different densities within various habitats of simulated cotton fields. These samples were used to investigate dispersion patterns of this cotton insect. It was found that the assessment of spatial pattern varied as a function...
Spectral reflectance of surface soils: Relationships with some soil properties
NASA Technical Reports Server (NTRS)
Kiesewetter, C. H.
1983-01-01
Using a published atlas of reflectance curves and physicochemical properties of soils, a statistical analysis was carried out. Reflectance bands which correspond to five of the wavebands used by NASA's Thematic Mapper were examined for relationships to specific soil properties. The properties considered in this study include: Sand Content, Silt Content, Clay Content, Organic Matter Content, Cation Exchange Capacity, Iron Oxide Content and Moisture Content. Regression of these seven properties on the mean values of five TM bands produced results that indicate that the predictability of the properties can be increased by stratifying the data. The data were stratified by parent material, taxonomic order, temperature zone, moisture zone and climate (combined temperature and moisture). The best results were obtained when the sample was examined by climatic classes. The middle infrared bands, 5 and 7, as well as the visible bands, 2 and 3, are significant in the model. The near infrared band, band 4, is almost as useful and should be included in any studies.
Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai
2014-11-10
Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data based on an equal correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on the score statistic generally performs well and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least squares estimate and the logarithmic transformation with the Mantel-Haenszel estimate are recommended, as they do not involve any computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.
In-hospital fall-risk screening in 4,735 geriatric patients from the LUCAS project.
Neumann, L; Hoffmann, V S; Golgert, S; Hasford, J; Von Renteln-Kruse, W
2013-03-01
In-hospital falls in older patients are frequent, but the identification of patients at risk of falling is challenging. The aim of this study was to improve the identification of high-risk patients. Therefore, a simplified screening tool was developed and validated, and its predictive accuracy compared with that of STRATIFY. Retrospective analysis of 4,735 patients; evaluation of the predictive accuracy of STRATIFY and its single risk factors, as well as age, gender and psychotropic medication; splitting of the dataset into a learning and a validation sample for modelling fall-risk screening and independent, temporal validation. Geriatric clinic at an academic teaching hospital in Hamburg, Germany. 4,735 hospitalised patients ≥65 years. Sensitivity, specificity, positive and negative predictive value, odds ratios, Youden-Index and the rates of falls and fallers were calculated. There were 10.7% fallers, and the fall rate was 7.9/1,000 hospital days. In the learning sample, mental alteration (OR 2.9), fall history (OR 2.1), and insecure mobility (Barthel-Index items 'transfer' + 'walking' score = 5, 10 or 15) (OR 2.3) had the strongest associations with falls. The LUCAS Fall-Risk Screening uses these risk factors, and patients with ≥2 risk factors constituted the high-risk group (30.9%). In the validation sample, STRATIFY yielded SENS 56.8, SPEC 59.6, PPV 13.5 and NPV 92.6, vs. SENS 46.0, SPEC 71.1, PPV 14.9 and NPV 92.3 for the LUCAS Fall-Risk Screening. Both the STRATIFY and the LUCAS Fall-Risk Screening showed comparable results in defining a high-risk group. Impaired mobility and cognitive status were closely associated with falls. The results underscore the importance of functional status as an essential fall-risk factor in older hospitalised patients.
Stratified sampling design based on data mining.
Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung
2013-09-01
To explore classification rules based on data mining methodologies which are to be used in defining strata in stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics, then, constructed decision trees on cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011 in this study. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and population density of provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
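The two-stage design described above (k-means clusters of providers, then a decision tree fitted on the cluster labels to obtain explicit stratification rules) can be sketched with standard tooling as follows; the feature set and tree depth are illustrative choices, not those of the study.

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

def derive_strata(X, feature_names, n_strata=5, seed=0):
    """X: provider characteristics (e.g., inpatients per specialist, beds,
    population density of location). Returns cluster labels and readable rules."""
    Xs = StandardScaler().fit_transform(X)                 # cluster on standardized features
    labels = KMeans(n_clusters=n_strata, n_init=10, random_state=seed).fit_predict(Xs)
    tree = DecisionTreeClassifier(max_depth=3, random_state=seed).fit(X, labels)
    return labels, export_text(tree, feature_names=feature_names)
```

Fitting the tree on the unscaled features keeps the split thresholds in the original units (for example, inpatients per specialist), which is what makes the resulting rules usable as survey strata definitions.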
Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.
2014-01-01
Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm3 FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations. PMID:25186406
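As a simple illustration of the variable-density idea in two phase-encode dimensions (not the exact sampling schedules evaluated in the study), a random Cartesian mask can be drawn with an acceptance probability that decays with distance from the k-space centre:

```python
import numpy as np

def variable_density_mask(ny, nz, reduction=4.0, power=2.0, seed=0):
    """Random phase-encode mask (ny x nz) with sampling probability highest at the
    k-space centre, falling off as (1 - r)^power; 'reduction' sets the target
    undersampling factor. All parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    ky = np.linspace(-1, 1, ny)[:, None]
    kz = np.linspace(-1, 1, nz)[None, :]
    r = np.clip(np.sqrt(ky**2 + kz**2), 0, 1)
    prob = (1 - r) ** power
    prob *= (ny * nz / reduction) / prob.sum()       # scale to the target sampled fraction
    return rng.random((ny, nz)) < np.clip(prob, 0, 1)

mask = variable_density_mask(144, 135)               # phase-encode grid size from the abstract
```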
Salivary hormone and immune responses to three resistance exercise schemes in elite female athletes.
Nunes, João A; Crewther, Blair T; Ugrinowitsch, Carlos; Tricoli, Valmor; Viveiros, Luís; de Rose, Dante; Aoki, Marcelo S
2011-08-01
This study examined the salivary hormone and immune responses of elite female athletes to 3 different resistance exercise schemes. Fourteen female basketball players each performed an endurance scheme (ES: 4 sets of 12 reps, 60% of 1 repetition maximum (1RM) load, 1-minute rest periods), a strength-hypertrophy scheme (SHS: 1 set of 5RM, 1 set of 4RM, 1 set of 3RM, 1 set of 2RM, and 1 set of 1RM with 3-minute rest periods, followed by 3 sets of 10RM with 2-minute rest periods) and a power scheme (PS: 3 sets of 10 reps, 50% 1RM load, 3-minute rest periods) using the same exercises (bench press, squat, and biceps curl). Saliva samples were collected at 07:30 hours, pre-exercise (Pre) at 09:30 hours, postexercise (Post), and at 17:30 hours. Matching samples were also taken on a nonexercising control day. The samples were analyzed for testosterone, cortisol (C), and immunoglobulin A concentrations. The total volume of load lifted differed among the 3 schemes (SHS > ES > PS, p < 0.05). Postexercise C concentrations increased after all schemes, compared to control values (p < 0.05). In the SHS, the postexercise C response was also greater than the pre-exercise value (p < 0.05). The current findings confirm that high-volume resistance exercise schemes can stimulate greater C secretion because of higher metabolic demand. In terms of practical applications, acute changes in C may be used to evaluate the metabolic demands of different resistance exercise schemes, or as a tool for monitoring training strain.
NASA Astrophysics Data System (ADS)
Liang, Dong; Zhang, Zhiyao; Liu, Yong; Li, Xiaojun; Jiang, Wei; Tan, Qinggui
2018-04-01
A real-time photonic sampling structure with effective nonlinearity suppression and excellent signal-to-noise ratio (SNR) performance is proposed. The key points of this scheme are the polarization-dependent modulators (P-DMZMs) and the sagnac loop structure. Thanks to the polarization sensitive characteristic of P-DMZMs, the differences between transfer functions of the fundamental signal and the distortion become visible. Meanwhile, the selection of specific biases in P-DMZMs is helpful to achieve a preferable linearized performance with a low noise level for real-time photonic sampling. Compared with the quadrature-biased scheme, the proposed scheme is capable of valid nonlinearity suppression and is able to provide a better SNR performance even in a large frequency range. The proposed scheme is proved to be effective and easily implemented for real time photonic applications.
Hagen, Wim J H; Wan, William; Briggs, John A G
2017-02-01
Cryo-electron tomography (cryoET) allows 3D structural information to be obtained from cells and other biological samples in their close-to-native state. In combination with subtomogram averaging, detailed structures of repeating features can be resolved. CryoET data is collected as a series of images of the sample from different tilt angles; this is performed by physically rotating the sample in the microscope between each image. The angles at which the images are collected, and the order in which they are collected, together are called the tilt-scheme. Here we describe a "dose-symmetric tilt-scheme" that begins at low tilt and then alternates between increasingly positive and negative tilts. This tilt-scheme maximizes the amount of high-resolution information maintained in the tomogram for subsequent subtomogram averaging, and may also be advantageous for other applications. We describe implementation of the tilt-scheme in combination with further data-collection refinements including setting thresholds on acceptable drift and improving focus accuracy. Requirements for microscope set-up are introduced, and a macro is provided which automates the application of the tilt-scheme within SerialEM. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
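A minimal sketch of the acquisition order produced by a dose-symmetric tilt-scheme is given below; the group size (how many tilts are taken on one side before switching branch) is left as a parameter because it is an implementation detail rather than something fixed by the description above.

```python
def dose_symmetric_order(max_tilt=60, increment=3, group=1):
    """Return tilt angles in acquisition order: start at 0 deg, then alternate
    between the positive and negative branches in groups of `group` tilts.
    Group size is a tunable assumption, not part of the published description."""
    pos = list(range(increment, max_tilt + 1, increment))
    neg = [-a for a in pos]
    order, i, side = [0], 0, 0                 # side 0 -> positive branch next
    while i < len(pos):
        branch = pos if side == 0 else neg
        order.extend(branch[i:i + group])
        if side == 1:                          # advance only after both branches are done
            i += group
        side ^= 1
    return order

# e.g. dose_symmetric_order(12, 3) -> [0, 3, -3, 6, -6, 9, -9, 12, -12]
```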
Sampling strategies for efficient estimation of tree foliage biomass
Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson
2011-01-01
Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...
NASA Technical Reports Server (NTRS)
Vennes, Stephane; Fontaine, Gilles
1992-01-01
A grid of stratified H/He model atmospheres applicable to the interpretation of the spectral properties of hot H-rich white dwarfs (WDs) is computed. Samples of hot DA WDs observed with Exosat and Einstein are analyzed using the models. Six out of six objects with T(eff) = 35,000 K or less do not show an EUV/soft X-ray flux deficiency and therefore can be understood solely in terms of pure hydrogen atmospheres. A majority of DA WDs hotter than this value do show a flux deficiency and thus require the presence of some absorbers in their atmospheres. It is shown that the Exosat broadband photometry of Feige 24 and G191 B2B cannot be explained in terms of stratified atmospheres. Absorption by heavy elements is certainly responsible for the required EUV/soft X-ray opacity source in these cases. However, the Exosat data are consistent with the hypothesis of stratified atmospheres in the four remaining objects.
Rosén, Per; Karlberg, Ingvar
2002-06-01
To compare the views of citizens and health-care decision-makers on health-care financing, the limits of public health-care, and resource allocation. A postal survey based on a random sample of adults drawn from the national registration and stratified samples of health-care politicians, administrators, and doctors in five Swedish counties. A total number of 1194 citizens (response rate 60%) and 427 decision-makers (response rate 69%). The general public have high expectations of public health-care, expectations that do not fit with the decision-makers' views on what should be offered. To overcome the discrepancy between demand and resources, physicians prefer increased patient fees and complementary private insurance schemes to a higher degree than do the other respondents. Physicians take a more favourable view of letting politicians on a national level exert a greater influence on resource allocation within public health-care. A majority of physicians want politicians to assume a greater responsibility for the exclusion of certain therapies or diagnoses. Most politicians, on the other hand, prefer physicians to make more rigorous decisions as to which medical indications should entitle a person to public health-care. The gap between public expectations and health-care resources makes it more important to be clear about who should be accountable for resource-allocation decisions in public health-care. Significant differences between physicians' and politicians' opinions on financing and responsibility for prioritization make the question of accountability even more important.
Comparing "insider" and "outsider" news coverage of the 2014 Ebola outbreak.
Humphries, Brittany; Radice, Martha; Lauzier, Sophie
2017-11-09
Information provided by news media during an infectious disease outbreak can affect the actions taken to safeguard public health. There has been little evaluation of how the content of news published during an outbreak varies by location of the news outlet. This study analyzes coverage of the 2014 Ebola outbreak by one news outlet operating within a country affected by the outbreak and one country not directly affected. A qualitative content analysis was conducted of articles published in two national news outlets, The Globe and Mail (Canada) and the Vanguard (Nigeria), between January 1 and December 31, 2014. Articles available through LexisNexis Academic were sorted by date and sampled using a stratified sampling method (The Globe and Mail n = 100; Vanguard n = 105). A coding scheme was developed and modified to incorporate emerging themes until saturation was achieved. There were substantial differences in outbreak coverage in terms of the topic and content of the articles, as well as the sources consulted. The Globe and Mail framed the outbreak in terms of national security and national interests, as well as presenting it as an international humanitarian crisis. In contrast, the Vanguard framed the outbreak almost exclusively in terms of public health. Our findings highlight how different geographic contexts can shape reporting on the same event. Further research is required to investigate how the political, social or economic situations of a country shape its news media, potentially influencing actions taken to control disease outbreaks.
Development and testing of a scale for assessing the quality of home nursing.
Chiou, Chii-Jun; Wang, Hsiu-Hung; Chang, Hsing-Yi
2016-03-01
To develop a home nursing quality scale and to evaluate its psychometric properties. This was a 3-year study. In the first year, 19 focus group interviews with caregivers of people using home nursing services were carried out in northern, central and southern Taiwan. Content analysis was carried out and a pool of questionnaire items compiled. In the second year (2007), a study was carried out on a stratified random sample selected from home nursing organizations covered by the national health insurance scheme in southern Taiwan. The study population was the co-resident primary caregivers of home care nursing service users. Item analysis and exploratory factor analysis were carried out on data from 365 self-administered questionnaires collected from 13 selected home care organizations. In the third year (2008), a random sample of participants was selected from 206 hospital-based home care nursing organizations throughout Taiwan, resulting in completion of 294 questionnaires from 27 organizations. Confirmatory factor analysis was then carried out on the scale, and the validity and reliability of the scale assessed. The present study developed a reliable and valid home nursing quality scale from the perspective of users of home nursing services. The scale comprised three factors: dependability, communication skills and service usefulness. This scale is of practical value for the promotion of long-term community care aging in local policies. The scale is ready to be used to assess the quality of services provided by home care nursing organizations. © 2015 Japan Geriatrics Society.
Cross sectional study of young people's awareness of and involvement with tobacco marketing
MacFadyen, Lynn; Hastings, Gerard; MacKintosh, Anne Marie
2001-01-01
Objectives To examine young people's awareness of and involvement with tobacco marketing and to determine the association, if any, between this and their smoking behaviour. Design Cross sectional, quantitative survey, part interview and part self completion, administered in respondents' homes. Setting North east England. Participants Stratified random sample of 629 young people aged 15 and 16 years who had “opted in” to research through a postal consent procedure. Results There was a high level of awareness of and involvement in tobacco marketing among the 15-16 year olds sampled in the study: around 95% were aware of advertising and all were aware of some method of point of sale marketing. Awareness of and involvement with tobacco marketing were both significantly associated with being a smoker: for example, 30% (55/185) of smokers had received free gifts through coupons in cigarette packs, compared with 11% (21/199) of non-smokers (P<0.001). When other factors known to be linked with teenage smoking were held constant, awareness of coupon schemes, brand stretching, and tobacco marketing in general were all independently associated with current smoking status. Conclusions Teenagers are aware of, and are participating in, many forms of tobacco marketing, and both awareness and participation are associated with current smoking status. This suggests that the current voluntary regulations designed to protect young people from smoking are not working, and that statutory regulations are required. PMID:11230063
The linear trend of headache prevalence and some headache features in school children.
Ozge, Aynur; Buğdayci, Resul; Saşmaz, Tayyar; Kaleağasi, Hakan; Kurt, Oner; Karakelle, Ali; Siva, Aksel
2007-04-01
The objectives of this study were to determine the age- and sex-dependent linear trend of recurrent headache prevalence in schoolchildren in Mersin. The stratified sample comprised 5562 children; detailed characteristics were published previously. In this study, the prevalence distribution of headache by age and sex showed a peak in the female population at the age of 11 (27.2%), with a plateau in the following years. The results from this large stratified random sample, together with detailed linear trend analysis, suggested that, in addition to socio-demographic features, the headache features of children with headache have specific characteristics dependent on age, gender and headache type. These results can constitute a basis for future epidemiologically based studies.
Williams, John H.; Taylor, Larry E.; Low, Dennis J.
1998-01-01
The most important sources of groundwater in Bradford, Tioga, and Potter Counties are the stratified-drift aquifers. Saturated sand and gravel, primarily of outwash origin, form extensive unconfined aquifers in the valleys. Outwash is underlain in most major valleys by silt, clay, and very fine sand of lacustrine origin that comprise extensive confining units. The lacustrine confining units locally exceed 100 feet in thickness. Confined aquifers of ice-contact sand and gravel are buried locally beneath the lacustrine deposits. Bedrock and till are the basal confining units of the stratified-drift aquifer systems. Recharge to the stratified-drift aquifers is by direct infiltration of precipitation, tributary-stream infiltration, infiltration of unchanneled runoff at the valley walls, and groundwater inflow from the bedrock and till uplands. Valley areas underlain by superficial sand and gravel contribute about 1 million gallons per day per square mile of water from precipitation to the aquifers. Tributary streams provide recharge of nearly 590 gallons per day per foot of stream reach. Water is added to the stratified-drift aquifers from unchanneled runoff and groundwater inflow at the rate of 1 million gallons per day per square mile of bordering uplands not drained by tributary streams. Induced infiltration can be a major source of recharge to well fields completed in unconfined stratified-drift aquifers that are in good hydraulic connection with surface water. The well fields of an industrial site in North Towanda, a public-water supplier at Tioga Point, and the U.S. Fish and Wildlife Service at Asaph accounted for 75 percent of the 10.8 million gallons per day of groundwater withdrawn by public suppliers and other selected users in 1985. The well fields tap stratified-drift aquifers that are substantially recharged by induced infiltration or tributary-stream infiltration. Specific-capacity data from 95 wells indicate that most wells completed in stratified-drift aquifers have specific capacities an order of magnitude greater than those completed in till and bedrock. Wells completed in unconfined stratified-drift aquifers and in bedrock aquifers have the highest and lowest median specific capacities -- 24 and 0.80 gallons per minute per foot of drawdown, respectively. Wells completed in confined stratified-drift aquifers and in till have median specific capacities of 11 and 0.87 gallons per minute per foot of drawdown, respectively. The results of 223 groundwater-quality analyses indicate two major hydrogeochemical zones: (1) a zone of unrestricted groundwater flow that contains water of the calcium bicarbonate type (this zone is found in almost all of the stratified-drift aquifers, till, and shallow bedrock systems); and (2) a zone of restricted groundwater flow that contains water of the sodium chloride type (this zone is found in the bedrock and, in some areas, in till and confined stratified-drift aquifers). Samples pumped from wells that penetrate restricted-flow zones have median concentrations of total dissolved solids, dissolved chloride, and dissolved barium of 840 milligrams per liter, 350 milligrams per liter, and 2,100 micrograms per liter, respectively. Excessive concentrations of iron and manganese are common in the groundwater of the study area; about 50 percent of the wells sampled contain water that has iron and manganese concentrations that exceed the U.S. Environmental Protection Agency secondary maximum contaminant levels of 300 and 50 micrograms per liter, respectively. Only water in the unconfined stratified-drift aquifers and the Catskill Formation has median concentrations lower than these limits.
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
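The post-stratified estimator discussed above can be written as a weighted sum of within-stratum sample means, with weights taken from known population stratum proportions; small within-stratum sample sizes make the variance term unstable, which is the paper's concern. A minimal sketch under these assumptions (variable names are illustrative, not from the paper):

```python
import numpy as np

def post_stratified_mean(y, strata, pop_shares):
    """Post-stratified estimate of the population mean.

    y          : array of observed values
    strata     : array of stratum labels assigned *after* sampling
    pop_shares : dict {stratum: known population proportion W_h}
    Returns (estimate, approximate variance); the variance formula ignores
    the finite-population correction and the extra term for random n_h.
    """
    est, var = 0.0, 0.0
    for h, W in pop_shares.items():
        yh = y[strata == h]
        nh = len(yh)
        est += W * yh.mean()
        var += W**2 * yh.var(ddof=1) / nh   # unstable when n_h is very small
    return est, var

y = np.array([2.1, 3.4, 2.9, 5.0, 4.2, 4.8])
strata = np.array(["forest", "forest", "forest", "open", "open", "open"])
print(post_stratified_mean(y, strata, {"forest": 0.6, "open": 0.4}))
```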
Large Sample Confidence Limits for Goodman and Kruskal's Proportional Prediction Measure TAU-b
ERIC Educational Resources Information Center
Berry, Kenneth J.; Mielke, Paul W.
1976-01-01
A Fortran Extended program which computes Goodman and Kruskal's Tau-b, its asymmetrical counterpart, Tau-a, and three sets of confidence limits for each coefficient under full multinomial and proportional stratified sampling is presented. A correction of an error in the calculation of the large sample standard error of Tau-b is discussed.…
Detection and monitoring of invasive exotic plants: a comparison of four sampling methods
Cynthia D. Huebner
2007-01-01
The ability to detect and monitor exotic invasive plants is likely to vary depending on the sampling method employed. Methods with strong qualitative thoroughness for species detection often lack the intensity necessary to monitor vegetation change. Four sampling methods (systematic plot, stratified-random plot, modified Whittaker, and timed meander) in hemlock and red...
Photo stratification improves northwest timber volume estimates.
Colin D. MacLean
1972-01-01
Data from extensive timber inventories of 12 counties in western and central Washington were analyzed to test the relative efficiency of double sampling for stratification as a means of estimating total volume. Photo and field plots, when combined in a stratified sampling design, proved about twice as efficient as simple field sampling. Although some gains were made by...
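Double sampling for stratification, as used in that study, first classifies a large and cheap photo sample into strata and then measures volume on a smaller field subsample within each stratum, with the stratum weights estimated from the photo phase. A hedged sketch of the estimator (array inputs and names are illustrative):

```python
import numpy as np

def double_sampling_mean(photo_strata, field_strata, field_y):
    """Double sampling for stratification (photo phase + field phase).

    photo_strata : stratum labels for the large first-phase (photo) sample
    field_strata : stratum labels for the second-phase (field) subsample
    field_y      : measured volumes for the field subsample
    The stratum weights are estimated from the photo sample, so the estimate
    is sum_h (n_h_photo / n_photo) * mean(field y in stratum h).
    """
    photo_strata = np.asarray(photo_strata)
    field_strata = np.asarray(field_strata)
    field_y = np.asarray(field_y, dtype=float)
    n_photo = len(photo_strata)
    estimate = 0.0
    for h in np.unique(photo_strata):
        w_h = np.sum(photo_strata == h) / n_photo
        estimate += w_h * field_y[field_strata == h].mean()
    return estimate
```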
ERIC Educational Resources Information Center
Pan, Yi-Hsiang
2014-01-01
The purpose of this study was to confirm the relationships among teachers' self-efficacy, and students' learning motivation, learning atmosphere, and learning satisfaction in senior high school physical education (PE). A sample of 462 PE teachers and 2681 students was drawn using stratified random sampling and cluster sampling from high schools in…
ERIC Educational Resources Information Center
Zullig, Keith J.; Ward, Rose Marie; King, Keith A.; Patton, Jon M.; Murray, Karen A.
2009-01-01
The purpose of this investigation was to assess the reliability and validity of eight developmental asset measures among a stratified, random sample (N = 540) of college students to guide health promotion efforts. The sample was randomly split to produce exploratory and confirmatory samples for factor analysis using principal axis factoring and…
Workforce Readiness: A Study of University Students' Fluency with Information Technology
ERIC Educational Resources Information Center
Kaminski, Karen; Switzer, Jamie; Gloeckner, Gene
2009-01-01
This study with data collected from a large sample of freshmen in 2001 and a random stratified sample of seniors in 2005 examined students perceived FITness (fluency with Information Technology). In the fall of 2001 freshmen at a medium sized research-one institution completed a survey and in spring 2005 a random sample of graduating seniors…
Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie
2011-08-01
Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates for the transmission rate in flocks, and the characteristics of an egg immunological test, we have simulated outbreaks with various sampling schemes, and with the current boot swab program with a 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected, and by the number of contaminated table eggs brought on the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs each 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.
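The need for a threshold number of positive eggs follows from the binomial behaviour of an imperfect test: if each egg from an uninfected flock tests positive with probability (1 - specificity), the chance that a sample of n eggs reaches the threshold can be computed, and the threshold chosen to keep flock-level false positives acceptably rare. The per-egg specificity below is an assumed illustrative value, not a parameter from the paper.

```python
# Flock-level specificity for a "declare infected if >= k positive eggs" rule,
# assuming independent eggs and a per-egg specificity sp (illustrative values).
from scipy.stats import binom

def flock_specificity(n_eggs, k_threshold, egg_specificity):
    # P(fewer than k false-positive eggs in an uninfected flock)
    return binom.cdf(k_threshold - 1, n_eggs, 1 - egg_specificity)

for k in range(1, 5):
    print(k, round(flock_specificity(n_eggs=30, k_threshold=k, egg_specificity=0.97), 4))
# A single-positive rule (k=1) gives poor flock specificity; raising k restores it,
# which mirrors why the paper could not declare a flock infected from one egg.
```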
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside with the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
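One simple way to approximate the uniform-coverage idea is to project genotypes onto a few principal components, grid that space, and draw roughly one genotype per occupied cell, so dense sub-populations are not over-represented. This is a schematic interpretation under stated assumptions, not the authors' exact algorithm (they also compare against a CD-maximizing method).

```python
import numpy as np

def uniform_coverage_sample(markers, n_train, n_axes=2, n_bins=10, seed=0):
    """Pick a training set that covers the genetic space roughly uniformly.

    markers : (n_genotypes, n_markers) 0/1/2 matrix
    Projects onto the leading principal components, bins the PC space,
    and samples genotypes cell by cell (round-robin) until n_train are chosen.
    """
    rng = np.random.default_rng(seed)
    X = markers - markers.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)   # PCs of the marker matrix
    scores = X @ vt[:n_axes].T
    # Assign each genotype to a grid cell in PC space
    bins = [np.digitize(scores[:, j], np.linspace(scores[:, j].min(),
                                                  scores[:, j].max(), n_bins))
            for j in range(n_axes)]
    by_cell = {}
    for i, cell in enumerate(zip(*bins)):
        by_cell.setdefault(cell, []).append(i)
    pools = [rng.permutation(v).tolist() for v in by_cell.values()]
    chosen = []
    while len(chosen) < n_train and any(pools):
        for pool in pools:                 # one genotype per occupied cell per pass
            if pool and len(chosen) < n_train:
                chosen.append(pool.pop())
    return np.array(chosen)
```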
The personal and workplace characteristics of uninsured expatriate males in Saudi Arabia.
Alkhamis, Abdulwahab; Cosgrove, Peter; Mohamed, Gamal; Hassan, Amir
2017-01-19
A major concern of health decision makers in Gulf Cooperation Council (GCC) countries is the burden of financing healthcare. While other GCC countries have been examining different options, Saudi Arabia has endeavoured to reform its private healthcare system and control expatriate access to government resources through the provision of Compulsory Employment-Based Health Insurance (CEBHI). The objective of this research was to investigate, in a natural setting, the characteristics of uninsured expatriates based on their personal and workplace characteristics. Using a cross-sectional survey, data were collected from a sample of 4,575 male expatriate employees using a multi-stage stratified cluster sampling technique. Descriptive statistics were used to summarize all variables, and the dependent variable was tabulated by access to health insurance and tested using Chi-square. Logistic analysis was performed, guided by the conceptual model. Of survey respondents, 30% were either uninsured or not yet enrolled in a health insurance scheme; 79.4% of these uninsured expatriates did not have valid reasons for being uninsured, with Iqama renewal accounting for 20.6% of the uninsured. The study found both personal and workplace characteristics were important factors influencing health insurance status. Compared with single expatriates, married expatriates (accompanied by their families) were 30% less likely to be uninsured. Moreover, workers occupying technical jobs requiring high school education or above were two-thirds more likely to be insured than unskilled workers. With regard to firm size, respondents employed in large companies (more than 50 employees) were more likely to be insured than those employed in small companies (fewer than ten employees). In relation to business type, the study found that, compared with workers in the agricultural sector, workers in the industrial/manufacturing, construction and trading sectors were, respectively, 76%, 85% and 60% less likely to be uninsured. Although the CEBHI is mandatory, this study found that the characteristics of uninsured expatriates, in respect of their personal and workplace characteristics, have similarities with the uninsured from other private employment-sponsored health insurance schemes. Other factors influencing access to health insurance, besides employee and workplace characteristics, include the development and extent of the country's insurance industry.
Devadasan, Narayanan; Seshadri, Tanya; Trivedi, Mayur; Criel, Bart
2013-08-20
India's health expenditure is met mostly by households through out-of-pocket (OOP) payments at the time of illness. To protect poor families, the Indian government launched a national health insurance scheme (RSBY). Those below the national poverty line (BPL) are eligible to join the RSBY. The premium is heavily subsidised by the government. The enrolled members receive a card and can avail of free hospitalisation care up to a maximum of US$ 600 per family per year. The hospitals are reimbursed by the insurance companies. The objective of our study was to analyse the extent to which RSBY contributes to universal health coverage by protecting families from making OOP payments. A two-stage stratified sampling technique was used to identify eligible BPL families in Patan district of Gujarat, India. Initially, all 517 villages were listed and 78 were selected randomly. From each of these villages, 40 BPL households were randomly selected and a structured questionnaire was administered. Interviews and discussions were also conducted among key stakeholders. Our sample contained 2,920 households who had enrolled in the RSBY; most were from the poorer sections of society. The average hospital admission rate for the period 2010-2011 was 40/1,000 enrolled. Women, the elderly and those belonging to the lowest caste had a higher hospitalisation rate. Forty-four per cent of patients who had enrolled in RSBY and had used the RSBY card still faced OOP payments at the time of hospitalisation. The median OOP payment for the above patients was US$ 80 (interquartile range, $16-$200) and was similar in both government and private hospitals. Patients incurred OOP payments mainly because they were asked to purchase medicines and diagnostics, even though these were included in the benefit package. While the RSBY has managed to include the poor under its umbrella, it has provided only partial financial coverage. Nearly 60% of insured and admitted patients made OOP payments. We call for better monitoring of the scheme and speculate that it is possible to enhance effective financial coverage of the RSBY if the nodal agency at state level were to strengthen its stewardship and oversight functions.
This draft report is a preliminary assessment that describes how biological indicators are likely to respond to climate change, how well current sampling schemes may detect climate-driven changes, and how likely it is that these sampling schemes will continue to detect impairment...
Corrections to the General (2,4) and (4,4) FDTD Schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meierbachtol, Collin S.; Smith, William S.; Shao, Xuan-Min
The sampling weights associated with two general higher order FDTD schemes were derived by Smith, et al. and published in an IEEE Transactions on Antennas and Propagation article in 2012. Inconsistencies between governing equations and their resulting solutions were discovered within the article. In an effort to track down the root cause of these inconsistencies, the full three-dimensional, higher order FDTD dispersion relation was re-derived using Mathematica. During this process, two errors were identified in the article. Both errors are highlighted in this document. The corrected sampling weights are also provided. Finally, the original stability limits provided for both schemes are corrected, and presented in a more precise form. It is recommended that any future implementations of the two general higher order schemes provided in the Smith, et al. 2012 article should instead use the sampling weights and stability conditions listed in this document.
Physical Activity among Rural Older Adults with Diabetes
ERIC Educational Resources Information Center
Arcury, Thomas A.; Snively, Beverly M.; Bell, Ronny A.; Smith, Shannon L.; Stafford, Jeanette M.; Wetmore-Arkader, Lindsay K.; Quandt, Sara A.
2006-01-01
Purpose: This analysis describes physical activity levels and factors associated with physical activity in an ethnically diverse (African American, Native American, white) sample of rural older adults with diabetes. Method: Data were collected using a population-based, cross-sectional stratified random sample survey of 701 community-dwelling…
NASA Technical Reports Server (NTRS)
Hixson, M. M.; Bauer, M. E.; Davis, B. J.
1979-01-01
The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different sizes of sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
NASA Astrophysics Data System (ADS)
Eva, Hugh; Carboni, Silvia; Achard, Frédéric; Stach, Nicolas; Durieux, Laurent; Faure, Jean-François; Mollicone, Danilo
A global systematic sampling scheme has been developed by the UN FAO and the EC TREES project to estimate rates of deforestation at global or continental levels at intervals of 5 to 10 years. This global scheme can be intensified to produce results at the national level. In this paper, using surrogate observations, we compare the deforestation estimates derived from these two levels of sampling intensities (one, the global, for the Brazilian Amazon the other, national, for French Guiana) to estimates derived from the official inventories. We also report the precisions that are achieved due to sampling errors and, in the case of French Guiana, compare such precision with the official inventory precision. We extract nine sample data sets from the official wall-to-wall deforestation map derived from satellite interpretations produced for the Brazilian Amazon for the year 2002 to 2003. This global sampling scheme estimate gives 2.81 million ha of deforestation (mean from nine simulated replicates) with a standard error of 0.10 million ha. This compares with the full population estimate from the wall-to-wall interpretations of 2.73 million ha deforested, which is within one standard error of our sampling test estimate. The relative difference between the mean estimate from sampling approach and the full population estimate is 3.1%, and the standard error represents 4.0% of the full population estimate. This global sampling is then intensified to a territorial level with a case study over French Guiana to estimate deforestation between the years 1990 and 2006. For the historical reference period, 1990, Landsat-5 Thematic Mapper data were used. A coverage of SPOT-HRV imagery at 20 m × 20 m resolution acquired at the Cayenne receiving station in French Guiana was used for year 2006. Our estimates from the intensified global sampling scheme over French Guiana are compared with those produced by the national authority to report on deforestation rates under the Kyoto protocol rules for its overseas department. The latter estimates come from a sample of nearly 17,000 plots analyzed from same spatial imagery acquired between year 1990 and year 2006. This sampling scheme is derived from the traditional forest inventory methods carried out by IFN (Inventaire Forestier National). Our intensified global sampling scheme leads to an estimate of 96,650 ha deforested between 1990 and 2006, which is within the 95% confidence interval of the IFN sampling scheme, which gives an estimate of 91,722 ha, representing a relative difference from the IFN of 5.4%. These results demonstrate that the intensification of the global sampling scheme can provide forest area change estimates close to those achieved by official forest inventories (<6%), with precisions of between 4% and 7%, although we only estimate errors from sampling, not from the use of surrogate data. Such methods could be used by developing countries to demonstrate that they are fulfilling requirements for reducing emissions from deforestation in the framework of an REDD (Reducing Emissions from Deforestation in Developing Countries) mechanism under discussion within the United Nations Framework Convention on Climate Change (UNFCCC). Monitoring systems at national levels in tropical countries can also benefit from pan-tropical and regional observations, to ensure consistency between different national monitoring systems.
Gama, Gabriela Lopes; Larissa, Coutinho de Lucena; Brasileiro, Ana Carolina de Azevedo Lima; Silva, Emília Márcia Gomes de Souza; Galvão, Élida Rayanne Viana Pinheiro; Maciel, Álvaro Cavalcanti; Lindquist, Ana Raquel Rodrigues
2017-07-01
Studies that evaluate gait rehabilitation programs for individuals with stroke often consider time since stroke of more than six months. In addition, most of these studies do not use lesion etiology or affected cerebral hemisphere as study factors. However, it is unknown whether these factors are associated with post-stroke motor performance after the spontaneous recovery period. The aim was to investigate whether time since stroke onset, etiology, and lesion side are associated with spatiotemporal and angular gait parameters of individuals with chronic stroke. Fifty individuals with chronic hemiparesis (20 women) were evaluated. The sample was stratified according to time since stroke (between 6 and 12 months, between 13 and 36 months, and over 36 months), affected cerebral hemisphere (left or right) and lesion etiology (ischemic or hemorrhagic). The participants were evaluated during overground walking at self-selected gait speed, and spatiotemporal and angular gait parameters were calculated. Differences in gait speed, stride length, hip flexion, and knee flexion were observed in subgroups stratified based on lesion etiology. Survivors of a hemorrhagic stroke exhibited more severe gait impairment. Subgroups stratified based on time since stroke only showed intergroup differences for stride length, and subgroups stratified based on affected cerebral hemisphere displayed between-group differences for swing time symmetry ratio. In order to recruit more homogeneous samples, obtain more accurate results, and offer appropriate rehabilitation programs, researchers and clinicians should consider that gait pattern may be associated with time since stroke, affected cerebral hemisphere and lesion etiology.
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.
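The core of the proposal is the finite population Bayesian bootstrap, a Polya-urn resampling that grows the observed sample into a synthetic population; the authors extend it to undo stratification, clustering, and unequal selection probabilities. The sketch below shows only the simple, unweighted urn for an SRS sample, as a concrete illustration of the mechanism rather than the authors' weighted algorithm.

```python
import numpy as np

def fpbb_synthetic_population(sample, N, rng=None):
    """Unweighted finite population Bayesian bootstrap (Polya urn).

    sample : 1-D array of n observed units (assumed to be an SRS here)
    N      : finite population size to synthesize (N >= n)
    Each of the N - n new units is drawn uniformly from the current urn and a
    copy is added back, so early draws influence later ones.
    """
    rng = rng or np.random.default_rng()
    urn = list(sample)
    for _ in range(N - len(urn)):
        urn.append(urn[rng.integers(len(urn))])
    return np.array(urn)

# One synthetic population; repeating this yields posterior-predictive draws
# whose means spread according to finite-population Bayesian uncertainty.
pop = fpbb_synthetic_population(np.array([3.1, 2.4, 5.0, 4.2, 3.7]), N=50)
print(pop.mean())
```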
Lin, Yu-Pin; Chu, Hone-Jay; Huang, Yu-Long; Tang, Chia-Hsi; Rouhani, Shahrokh
2011-06-01
This study develops a stratified conditional Latin hypercube sampling (scLHS) approach for multiple, remotely sensed, normalized difference vegetation index (NDVI) images. The objective is to sample, monitor, and delineate spatiotemporal landscape changes, including spatial heterogeneity and variability, in a given area. The scLHS approach, which is based on the variance quadtree technique (VQT) and the conditional Latin hypercube sampling (cLHS) method, selects samples in order to delineate landscape changes from multiple NDVI images. The images are then mapped for calibration and validation by using sequential Gaussian simulation (SGS) with the scLHS selected samples. Spatial statistical results indicate that in terms of their statistical distribution, spatial distribution, and spatial variation, the statistics and variograms of the scLHS samples resemble those of multiple NDVI images more closely than those of cLHS and VQT samples. Moreover, the accuracy of simulated NDVI images based on SGS with scLHS samples is significantly better than that of simulated NDVI images based on SGS with cLHS samples and VQT samples, respectively. Overall, the proposed approach efficiently monitors the spatial characteristics of landscape changes, including the statistics, spatial variability, and heterogeneity of NDVI images. In addition, SGS with the scLHS samples effectively reproduces spatial patterns and landscape changes in multiple NDVI images.
Presence of psychoactive substances in oral fluid from randomly selected drivers in Denmark.
Simonsen, K Wiese; Steentoft, A; Hels, T; Bernhoft, I M; Rasmussen, B S; Linnet, K
2012-09-10
This roadside study is the Danish part of the EU project DRUID (Driving under the Influence of Drugs, Alcohol, and Medicines) and included three representative regions in Denmark. Oral fluid samples (n=3002) were collected randomly from drivers using a sampling scheme stratified by time, season, and road type. The oral fluid samples were screened for 29 illegal and legal psychoactive substances and metabolites as well as ethanol. Fourteen (0.5%) drivers were positive for ethanol (alone or in combination with drugs) at concentrations above 0.53 g/l, which is the Danish legal limit. The percentage of drivers positive for medicinal drugs above the Danish legal concentration limit was 0.4%, while 0.3% of the drivers tested positive for one or more illicit drugs at concentrations exceeding the Danish legal limit. Tetrahydrocannabinol, cocaine, and amphetamine were the most frequent illicit drugs detected above the limit of quantitation (LOQ), while codeine, tramadol, zopiclone, and benzodiazepines were the most frequent legal drugs. Middle-aged men (median age 47.5 years) dominated the drunk-driving group, while the drivers positive for illegal drugs consisted mainly of young men (median age 26 years). Middle-aged women (median age 44.5 years) often tested positive for benzodiazepines at concentrations exceeding the legal limits. Interestingly, 0.6% of drivers tested positive for tramadol at concentrations above the DRUID cut-off, although tramadol is not included in the Danish list of narcotic drugs. It can be concluded that driving under the influence of drugs is as serious a road safety problem as drunk driving. Copyright © 2012. Published by Elsevier Ireland Ltd.
Laboratory-based observations of capillary barriers and preferential flow in layered snow
NASA Astrophysics Data System (ADS)
Avanzi, F.; Hirashima, H.; Yamaguchi, S.; Katsushima, T.; De Michele, C.
2015-12-01
Several lines of evidence now show that capillary gradients and preferential flow may play a more important role in water transmission through snow than previously expected. To observe these processes and to contribute to their characterization, we performed observations on the development of capillary barriers and preferential flow patterns in layered snow during cold laboratory experiments. We considered three different layerings (all characterized by a finer-over-coarser texture in grain size) and three different water input rates. Nine samples of layered snow were sieved in a cold laboratory and subjected to a constant supply of dyed tracer. By means of visual inspection, horizontal sectioning and liquid water content measurements, the processes of ponding and preferential flow were characterized as a function of texture and water input rate. The dynamics of each sample were replicated using the multi-layer physically-based SNOWPACK model. Results show that capillary barriers and preferential flow are relevant processes ruling the speed of liquid water in stratified snow. Ponding is associated with peaks in LWC at the boundary between the two layers equal to ~33-36 vol. % when the upper layer is composed of fine snow (grain size smaller than 0.5 mm). The thickness of the ponding layer at the textural boundary is between 0 and 3 cm, depending on sample stratigraphy. Heterogeneity in water transmission increases with grain size, while we do not observe any clear dependency on water input rate. The extensive comparison between observed and simulated LWC profiles by SNOWPACK (using an approximation of the Richards equation) shows high performance by the model in estimating the LWC peak at the boundary, while the speed of water in snow is underestimated by the chosen water transport scheme.
Exploring Sampling in the Detection of Multicategory EEG Signals
Siuly, Siuly; Kabir, Enamul; Wang, Hua; Zhang, Yanchun
2015-01-01
The paper presents a structure based on sampling and machine learning techniques for the detection of multicategory EEG signals, where random sampling (RS) and optimal allocation sampling (OS) are explored. In the proposed framework, before using the RS and OS schemes, the entire EEG signal of each class is partitioned into several groups based on a particular time period. The RS and OS schemes are used in order to have representative observations from each group of each category of EEG data. All of the samples selected by the RS from the groups of each category are then combined into one set, named the RS set. In a similar way, an OS set is obtained for the OS scheme. Then eleven statistical features are extracted from the RS and OS sets separately. Finally, this study employs three well-known classifiers: k-nearest neighbor (k-NN), multinomial logistic regression with a ridge estimator (MLR), and support vector machine (SVM) to evaluate the performance for the RS and OS feature sets. The experimental outcomes demonstrate that the RS scheme represents the EEG signals well and that the k-NN with the RS is the optimum choice for detection of multicategory EEG signals. PMID:25977705
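A hedged sketch of the two sampling routes described above: both partition each class's signal into time-period groups; random sampling (RS) draws the same share from every group, while optimal allocation (OS) assigns more draws to groups with higher variability (Neyman-style allocation, N_h * S_h). Function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def rs_set(groups, n_total, rng):
    """Random sampling: equal share of n_total from each time-period group."""
    per_group = n_total // len(groups)
    return np.concatenate([rng.choice(g, per_group, replace=False) for g in groups])

def os_set(groups, n_total, rng):
    """Optimal allocation: samples per group proportional to N_h * S_h."""
    weights = np.array([len(g) * g.std(ddof=1) for g in groups])
    alloc = np.maximum(1, np.round(n_total * weights / weights.sum()).astype(int))
    return np.concatenate([rng.choice(g, min(n, len(g)), replace=False)
                           for g, n in zip(groups, alloc)])

rng = np.random.default_rng(0)
signal = rng.normal(size=4096)                  # stand-in for one EEG class
groups = np.array_split(signal, 8)              # partition by time period
rs, os_ = rs_set(groups, 256, rng), os_set(groups, 256, rng)
# Statistical features (mean, std, quantiles, ...) would then be computed on
# rs and os_ and passed to k-NN, multinomial logistic regression, or an SVM.
```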
Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations
NASA Astrophysics Data System (ADS)
Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.
2017-12-01
Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.
ERTS-1 data applications to Minnesota forest land use classification
NASA Technical Reports Server (NTRS)
Sizer, J. E. (Principal Investigator); Eller, R. G.; Meyer, M. P.; Ulliman, J. J.
1973-01-01
The author has identified the following significant results. Color-combined ERTS-1 MSS spectral slices were analyzed to determine the maximum (repeatable) level of meaningful forest resource classification data visually attainable by skilled forest photointerpreters for the following purposes: (1) periodic updating of the Minnesota Land Management Information System (MLMIS) statewide computerized land use data bank, and (2) to provide first-stage forest resources survey data for large area forest land management planning. Controlled tests were made of two forest classification schemes by experienced professional foresters with special photointerpretation training and experience. The test results indicate it is possible to discriminate the MLMIS forest class from the MLMIS nonforest classes, but that it is not possible, under average circumstances, to further stratify the forest classification into species components with any degree of reliability with ERTS-1 imagery. An ongoing test of the resulting classification scheme involves the interpretation, and mapping, of the south half of Itasca County, Minnesota, with ERTS-1 imagery. This map is undergoing field checking by on the ground field cooperators, whose evaluation will be completed in the fall of 1973.
Training set optimization under population structure in genomic selection
USDA-ARS?s Scientific Manuscript database
The optimization of the training set (TRS) in genomic selection (GS) has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the Coefficient of D...
Does Marital Status Influence the Parenting Styles Employed by Parents?
ERIC Educational Resources Information Center
Ashiono, Benard Litali; Mwoma, Teresa B.
2015-01-01
The current study sought to establish whether parents' marital status, influence their use of specific parenting styles in Kisauni District, Kenya. A correlational research design was employed to carry out this study. Stratified sampling technique was used to select preschools while purposive sampling technique was used to select preschool…
A network of stream-sampling sites was developed for the Mid-Atlantic Coastal Plain (New Jersey through North Carolina) a collaborative study between the U.S. Environmental Protection Agency and the U.S. Geological Survey. A stratified random sampling with unequal weighting was u...
Perceptions of Learning among Swiss Watch Managers
ERIC Educational Resources Information Center
Tajeddini, Kayhan
2009-01-01
Purpose: This paper aims to explore managers' perceptions of learning within a sample of Swiss watch firms. Design/methodology/approach: A purposeful (judgmental) stratified sampling method was employed, where in-depth interviews with 13 marketing managers and owners were carried out over a three-month period. Meaning units (MUs) were abstracted,…
Employee Engagement and Performance of Lecturers in Nigerian Tertiary Institutions
ERIC Educational Resources Information Center
Agbionu, Uchenna Clementina; Anyalor, Maureen; Nwali, Anthony Chukwuma
2018-01-01
The study investigated employee engagement and performance of lecturers in Nigerian Tertiary Institutions. It employed descriptive and correlation research designs. Stratified random sampling was used to select three tertiary institutions in Nigeria and the sample size of 314 lecturers was obtained through Taro Yamane. Questionnaires were…
A network of stream-sampling sites was developed for the Mid-Atlantic Coastal Plain (New Jersey through North Carolina) as part of collaborative research between the U.S. Environmental Protection Agency and the U.S. Geological Survey. A stratified random sampling with unequal wei...
Resource Utilisation and Curriculum Implementation in Community Colleges in Kenya
ERIC Educational Resources Information Center
Kigwilu, Peter Changilwa; Akala, Winston Jumba
2017-01-01
The study investigated how Catholic-sponsored community colleges in Nairobi utilise the existing physical facilities and teaching and learning resources for effective implementation of Artisan and Craft curricula. The study adopted a mixed methods research design. Proportional stratified random sampling was used to sample 172 students and 18…
Associations among Adolescent Risk Behaviours and Self-Esteem in Six Domains
ERIC Educational Resources Information Center
Wild, Lauren G.; Flisher, Alan J.; Bhana, Arvin; Lombard, Carl
2004-01-01
Background: This study investigated associations among adolescents' self-esteem in 6 domains (peers, school, family, sports/athletics, body image and global self-worth) and risk behaviours related to substance use, bullying, suicidality and sexuality. Method: A multistage stratified sampling strategy was used to select a representative sample of…
Ancestral inference from haplotypes and mutations.
Griffiths, Robert C; Tavaré, Simon
2018-04-25
We consider inference about the history of a sample of DNA sequences, conditional upon the haplotype counts and the number of segregating sites observed at the present time. After deriving some theoretical results in the coalescent setting, we implement rejection sampling and importance sampling schemes to perform the inference. The importance sampling scheme addresses an extension of the Ewens Sampling Formula for a configuration of haplotypes and the number of segregating sites in the sample. The implementations include both constant and variable population size models. The methods are illustrated by two human Y chromosome datasets. Copyright © 2018. Published by Elsevier Inc.
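To make the rejection-sampling idea concrete, the toy sketch below conditions only on the observed number of segregating sites S (not on haplotype counts, which the paper also handles): draw a mutation rate from a prior, simulate the total coalescent branch length for a sample of size n under a constant-size model, draw a Poisson number of mutations, and keep the draw if it reproduces S. This is purely illustrative, not the authors' implementation.

```python
# Toy rejection sampler: posterior draws of theta given S segregating sites
# under a constant-size coalescent with the infinite-sites mutation model.
import numpy as np

def rejection_sample_theta(n, S_obs, n_draws, rng=None):
    rng = rng or np.random.default_rng()
    accepted = []
    while len(accepted) < n_draws:
        theta = rng.uniform(0.0, 20.0)              # flat prior (illustrative)
        # Total branch length: sum over k = 2..n of k * T_k, T_k ~ Exp(k(k-1)/2)
        ks = np.arange(2, n + 1)
        total_len = np.sum(ks * rng.exponential(2.0 / (ks * (ks - 1))))
        S_sim = rng.poisson(theta * total_len / 2.0)
        if S_sim == S_obs:                          # accept only exact matches
            accepted.append(theta)
    return np.array(accepted)

print(np.mean(rejection_sample_theta(n=10, S_obs=12, n_draws=200)))
```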
Stratified Sampling Design Based on Data Mining
Kim, Yeonkook J.; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon
2013-01-01
Objectives: To explore classification rules based on data mining methodologies which are to be used in defining strata in stratified sampling of healthcare providers with improved sampling efficiency. Methods: We performed k-means clustering to group providers with similar characteristics, then, constructed decision trees on cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011 in this study. Results: Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and population density of provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. Conclusions: This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea. PMID:24175117
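A minimal sketch of the pipeline described above: cluster providers with k-means, then fit a shallow decision tree on the cluster labels so the splits can be read off as stratification rules. scikit-learn is assumed, and the provider profile variables here are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Illustrative provider profile: inpatients per specialist, beds, population density
X = rng.gamma(shape=2.0, scale=50.0, size=(500, 3))

Xz = StandardScaler().fit_transform(X)
clusters = KMeans(n_clusters=5, n_init=10, random_state=1).fit_predict(Xz)

# A shallow tree on the cluster labels turns the clusters into explicit strata rules
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, clusters)
print(export_text(tree, feature_names=["inpatients_per_specialist",
                                       "beds", "pop_density"]))
```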
Mathematics Skill of Fifteen Years Old Students in Yogyakarta in Solving Problems Like PISA
ERIC Educational Resources Information Center
Wulandari, Nidya Ferry; Jailani
2018-01-01
The aim of this research was to describe the mathematics skills of fifteen-year-old eighth-grade students in Yogyakarta in solving PISA-like problems. The sampling was a combination of stratified and cluster random sampling. The sample, consisting of 400 students, was selected from fifteen schools. Data were collected by tests. The research findings revealed that…
A Pilot Sampling Design for Estimating Outdoor Recreation Site Visits on the National Forests
Stanley J. Zarnoch; S.M. Kocis; H. Ken Cordell; D.B.K. English
2002-01-01
A pilot sampling design is described for estimating site visits to National Forest System lands. The three-stage sampling design consisted of national forest ranger districts, site days within ranger districts, and last-exiting recreation visitors within site days. Stratification was used at both the primary and secondary stages. Ranger districts were stratified based...
NASA Astrophysics Data System (ADS)
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. Hydrogeological properties are assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to those of three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
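A compact sketch of the LULHS idea: stratify the uniform draws Latin-hypercube style across realizations, convert them to standard normal scores, and impose the target spatial covariance with a triangular factor of the covariance matrix (Cholesky is used here as the symmetric positive-definite case of an LU factorization). This is an unconditional sketch with an assumed exponential covariance; the study also develops a conditional version.

```python
import numpy as np
from scipy.stats import norm

def lulhs_unconditional(coords, n_real, sill=1.0, corr_len=50.0, seed=0):
    """Unconditional LULHS-style simulation on a set of grid coordinates.

    coords : (n_grid, 2) grid coordinates
    Returns an (n_real, n_grid) array of correlated Gaussian fields whose
    marginal at every node is Latin-hypercube stratified across realizations.
    """
    rng = np.random.default_rng(seed)
    n_grid = coords.shape[0]
    # Exponential covariance and its lower-triangular (Cholesky) factor
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sill * np.exp(-d / corr_len) + 1e-10 * np.eye(n_grid)
    L = np.linalg.cholesky(C)
    # Latin hypercube across realizations: one stratified uniform per node
    u = (rng.permuted(np.tile(np.arange(n_real), (n_grid, 1)), axis=1).T
         + rng.uniform(size=(n_real, n_grid))) / n_real
    z = norm.ppf(u)                       # stratified standard-normal scores
    return z @ L.T                        # impose spatial correlation

xy = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
fields = lulhs_unconditional(xy, n_real=20)
print(fields.shape)  # (20, 100)
```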
Yessica Rico; Marie-Stephanie Samain
2017-01-01
Investigating how genetic variation is distributed across the landscape is fundamental to inform forest conservation and restoration. Detecting spatial genetic discontinuities has value for defining management units, germplasm collection, and target sites for reforestation; however, inappropriate sampling schemes can misidentify patterns of genetic structure....
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of the partial weighted random sampling schemes is to bias toward the high signal energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that optimal distribution is not robustly determined by maximizing the acquired signal but from interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
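The conventional prescription the authors compare against can be sketched as follows: transform the binarized geometry to k-space and use its signal energy, mixed with a uniform floor, as the probability of keeping each k-space location. The paper's contribution is a better rule for choosing that distribution from the marginal change of acquired signal with sub-sampling rate; the sketch below shows only the conventional energy-weighted scheme, with illustrative parameters.

```python
import numpy as np

def energy_weighted_mask(binary_geom, keep_fraction, uniform_mix=0.2, seed=0):
    """Weighted random k-space sampling biased toward high-energy regions.

    binary_geom : 2-D 0/1 array describing the known channel geometry
    Returns a boolean mask of k-space locations to acquire.
    """
    rng = np.random.default_rng(seed)
    energy = np.abs(np.fft.fftshift(np.fft.fft2(binary_geom))) ** 2
    pdf = (1 - uniform_mix) * (energy / energy.sum()) + uniform_mix / energy.size
    p = pdf.ravel()
    p = p / p.sum()                                  # guard against rounding drift
    n_keep = int(keep_fraction * p.size)
    chosen = rng.choice(p.size, size=n_keep, replace=False, p=p)
    mask = np.zeros(p.size, dtype=bool)
    mask[chosen] = True
    return mask.reshape(binary_geom.shape)

geom = np.zeros((64, 64))
geom[30:34, :] = 1                                   # a thin horizontal channel
mask = energy_weighted_mask(geom, keep_fraction=0.25)
print(mask.mean())                                   # ~0.25 of k-space retained
```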
Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon
2016-05-01
The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively, respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with the conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress the image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across the k-space. Copyright © 2016 John Wiley & Sons, Ltd.
Park, Seok Chan; Kim, Minjung; Noh, Jaegeun; Chung, Hoeil; Woo, Youngah; Lee, Jonghwa; Kemper, Mark S
2007-06-12
The concentration of acetaminophen in a turbid pharmaceutical suspension has been measured successfully using Raman spectroscopy. The spectrometer was equipped with a large spot probe which enabled the coverage of a representative area during sampling. This wide area illumination (WAI) scheme (coverage area 28.3 mm²) for Raman data collection proved to be more reliable for the compositional determination of these pharmaceutical suspensions, especially when the samples were turbid. The reproducibility of measurement using the WAI scheme was compared to that of using a conventional small-spot scheme which employed a much smaller illumination area (about 100 µm spot size). A layer of isobutyric anhydride was placed in front of the sample vials to correct the variation in the Raman intensity due to the fluctuation of laser power. Corrections were accomplished using the isolated carbonyl band of isobutyric anhydride. The acetaminophen concentrations of prediction samples were accurately estimated using a partial least squares (PLS) calibration model. The prediction accuracy was maintained even with changes in laser power. It was noted that the prediction performance was somewhat degraded for turbid suspensions with high acetaminophen contents. When comparing the results of reproducibility obtained with the WAI scheme and those obtained using the conventional scheme, it was concluded that the quantitative determination of the active pharmaceutical ingredient (API) in turbid suspensions is much improved when employing a larger laser coverage area. This is presumably due to the improvement in representative sampling.
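A minimal sketch of the calibration step described above: normalize each Raman spectrum by the isolated carbonyl band of the internal standard (to correct for laser-power drift), then fit a PLS model relating spectra to acetaminophen concentration. scikit-learn is assumed, the band position is hypothetical, and all data here are synthetic placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-ins: rows are spectra, columns are Raman shifts
rng = np.random.default_rng(2)
n_samples, n_points = 40, 600
spectra = rng.gamma(shape=5.0, scale=1.0, size=(n_samples, n_points))
concentration = rng.uniform(1.0, 10.0, size=n_samples)
carbonyl_band = slice(480, 500)               # hypothetical internal-standard band

# Laser-power correction: scale each spectrum by its internal-standard intensity
ref = spectra[:, carbonyl_band].sum(axis=1, keepdims=True)
spectra_corrected = spectra / ref

pls = PLSRegression(n_components=4)
predicted = cross_val_predict(pls, spectra_corrected, concentration, cv=5)
rmsecv = np.sqrt(np.mean((predicted.ravel() - concentration) ** 2))
print(f"RMSECV on the synthetic placeholder data: {rmsecv:.2f}")
```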
Jet-mixing of initially-stratified liquid-liquid pipe flows: experiments and numerical simulations
NASA Astrophysics Data System (ADS)
Wright, Stuart; Ibarra-Hernandes, Roberto; Xie, Zhihua; Markides, Christos; Matar, Omar
2016-11-01
Low pipeline velocities lead to stratification and so-called 'phase slip' in horizontal liquid-liquid flows due to differences in liquid densities and viscosities. Stratified flows have no suitable single point for sampling, from which average phase properties (e.g. fractions) can be established. Inline mixing, achieved by static mixers or jets in cross-flow (JICF), is often used to overcome liquid-liquid stratification by establishing unstable two-phase dispersions for sampling. Achieving dispersions in liquid-liquid pipeline flows using JICF is the subject of this experimental and modelling work. The experimental facility involves a matched refractive index liquid-liquid-solid system, featuring an ETFE test section, and the experimental liquids are silicone oil and a 51-wt% glycerol solution. The matching then allows the dispersed fluid-phase fractions and velocity fields to be established through advanced optical techniques, namely PLIF (for phase) and PTV or PIV (for velocity fields). CFD codes using the volume of fluid (VOF) method are then used to demonstrate JICF breakup and dispersion in stratified pipeline flows. A number of simple jet configurations are described and their dispersion effectiveness is compared with the experimental results. Funding from Cameron for Ph.D. studentship (SW) gratefully acknowledged.
Work-ability evaluation: a piece of cake or a hard nut to crack?
Slebus, Frans G; Sluiter, Judith K; Kuijer, P Paul F M; Willems, J Han H B M; Frings-Dresen, Monique H W
2007-08-30
To describe what aspects, categorized according to the ICF model, insurance physicians (IPs) take into account in assessing short- and long-term work-ability. An interview study on a random sample of 60 IPs of the Dutch National Institute for Employee Benefit Schemes, stratified by region and years of experience. In determining work-ability, a wide range of aspects were used. In the case of musculoskeletal disease, 75% of the IPs considered the 'function and structures' component important. With psychiatric and other diseases, however, the 'participation factor' component was considered important by 85 and 80%, respectively. Aspects relating to the 'environmental factor' and 'personal factor' components were mentioned as important by fewer than 25%. In assessing the short- and long-term prognosis of work-ability, the 'disease or disorder' component was primarily used with a rate of over 75%. In determining work-ability, insurance physicians predominantly consider aspects relating to the 'functions and structures' and 'participation' components of the ICF model important. The 'environmental factor' and 'personal factor' components were not often mentioned. In assessing the short- and long-term prognosis of work-ability, the 'disease or disorder' component was predominantly used. It can be argued that 'environmental factors' and 'personal factors' should also more often be used in assessing work-ability.
[Survey and analysis of major human parasitic diseases in Chongqing City].
Shan-Shan, Li; Fei, Luo; Jun, Xie; Yi, Yuan
2018-03-02
To investigate the epidemic status of major human parasitic diseases in Chongqing City, so as to provide a reference for developing prevention and control strategies. According to the unified methods formulated by the national investigation scheme and stratified cluster random sampling, 36 rural pilot sites and 50 urban pilot sites were selected in Chongqing City, with at least 250 subjects investigated at each site. In total, 22 263 residents were examined. The overall infection rate of intestinal parasites was 5.41%. The infection rates of Ascaris lumbricoides, hookworm, Trichuris trichiura, and Enterobius vermicularis were 1.20%, 4.23%, 0.13% and 0.47%, respectively. Only 0.22% of the infections were co-infections. The overall intestinal parasite infection rate was significantly higher in females than in males (χ² = 15.19, P < 0.05), and the infection rates differed significantly among age groups, occupations, education levels, and regions (χ² = 15.19, 396.72, 421.07 and 347.79, all P < 0.05). The infection rates of major human parasites in Chongqing show a clear decreasing tendency compared with the rates found in the previous two national surveys. Future control practices should focus on reducing the infection rates of soil-borne parasites.
On-line determination of transient stability status using multilayer perceptron neural network
NASA Astrophysics Data System (ADS)
Frimpong, Emmanuel Asuming; Okyere, Philip Yaw; Asumadu, Johnson
2018-01-01
A scheme to predict transient stability status following a disturbance is presented. The scheme is activated upon the tripping of a line or bus and operates as follows: two samples of frequency deviation values at all generator buses are obtained. At each generator bus, the maximum frequency deviation within the two samples is extracted. A vector is then constructed from the extracted maximum frequency deviations. The Euclidean norm of the constructed vector is calculated and fed as input to a trained multilayer perceptron neural network, which predicts the stability status of the system. The scheme was tested using data generated from the New England test system and successfully predicted the stability status of all two hundred and five disturbance test cases.
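A minimal sketch of the indicator described above, assuming the two frequency-deviation samples per generator bus are available and using scikit-learn's MLPClassifier as a stand-in for the trained multilayer perceptron; the variables simulated_cases, stable_labels and new_case are illustrative placeholders, not names from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def stability_feature(freq_dev_samples):
    """freq_dev_samples: array of shape (2, n_generator_buses) holding the two
    post-disturbance frequency-deviation samples at every generator bus."""
    max_dev = np.max(np.abs(freq_dev_samples), axis=0)  # per-bus maximum deviation
    return np.linalg.norm(max_dev)                      # Euclidean norm -> single scalar input

# Offline training on simulated disturbance cases (names assumed for illustration).
X_train = np.array([[stability_feature(case)] for case in simulated_cases])
y_train = np.array(stable_labels)                       # 1 = stable, 0 = unstable
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X_train, y_train)

# On-line use: triggered when a line or bus trips.
status = clf.predict([[stability_feature(new_case)]])
```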
Genome-wide meta-analyses of stratified depression in Generation Scotland and UK Biobank.
Hall, Lynsey S; Adams, Mark J; Arnau-Soler, Aleix; Clarke, Toni-Kim; Howard, David M; Zeng, Yanni; Davies, Gail; Hagenaars, Saskia P; Maria Fernandez-Pujals, Ana; Gibson, Jude; Wigmore, Eleanor M; Boutin, Thibaud S; Hayward, Caroline; Scotland, Generation; Porteous, David J; Deary, Ian J; Thomson, Pippa A; Haley, Chris S; McIntosh, Andrew M
2018-01-10
Few replicable genetic associations for Major Depressive Disorder (MDD) have been identified. Recent studies of MDD have identified common risk variants by using a broader phenotype definition in very large samples, or by reducing phenotypic and ancestral heterogeneity. We sought to ascertain whether it is more informative to maximize the sample size using data from all available cases and controls, or to use a sex- or recurrence-stratified subset of affected individuals. To test this, we compared heritability estimates, genetic correlation with other traits, variance explained by MDD polygenic score, and variants identified by genome-wide meta-analysis for broad and narrow MDD classifications in two large British cohorts - Generation Scotland and UK Biobank. Genome-wide meta-analysis of MDD in males yielded one genome-wide significant locus on 3p22.3, with three genes in this region (CRTAP, GLB1, and TMPPE) demonstrating a significant association in gene-based tests. Meta-analyzed MDD, recurrent MDD and female MDD yielded equivalent heritability estimates, showed no detectable difference in association with polygenic scores, and were each genetically correlated with six health-correlated traits (neuroticism, depressive symptoms, subjective well-being, MDD, a cross-disorder phenotype and Bipolar Disorder). Whilst stratified GWAS analysis revealed a genome-wide significant locus for male MDD, the lack of independent replication and the consistent pattern of results across other MDD classifications suggest that phenotypic stratification by recurrence or sex is, at currently available sample sizes, only weakly justified. Based upon existing studies and our findings, the strategy of maximizing sample sizes is likely to provide the greater gain.
Risk-Stratified Imputation in Survival Analysis
Kennedy, Richard E.; Adragni, Kofi P.; Tiwari, Hemant K.; Voeks, Jenifer H.; Brott, Thomas G.; Howard, George
2013-01-01
Background Censoring that is dependent on covariates associated with survival can arise in randomized trials due to changes in recruitment and eligibility criteria to minimize withdrawals, potentially leading to biased treatment effect estimates. Imputation approaches have been proposed to address censoring in survival analysis; and while these approaches may provide unbiased estimates of treatment effects, imputation of a large number of outcomes may over- or underestimate the associated variance based on the imputation pool selected. Purpose We propose an improved method, risk-stratified imputation, as an alternative to address withdrawal related to the risk of events in the context of time-to-event analyses. Methods Our algorithm performs imputation from a pool of replacement subjects with similar values of both treatment and covariate(s) of interest, that is, from a risk-stratified sample. This stratification prior to imputation addresses the requirement of time-to-event analysis that censored observations are representative of all other observations in the risk group with similar exposure variables. We compared our risk-stratified imputation to case deletion and bootstrap imputation in a simulated dataset in which the covariate of interest (study withdrawal) was related to treatment. A motivating example from a recent clinical trial is also presented to demonstrate the utility of our method. Results In our simulations, risk-stratified imputation gives estimates of treatment effect comparable to bootstrap and auxiliary variable imputation while avoiding inaccuracies of the latter two in estimating the associated variance. Similar results were obtained in analysis of clinical trial data. Limitations Risk-stratified imputation has little advantage over other imputation methods when covariates of interest are not related to treatment, although its performance is superior when covariates are related to treatment. Risk-stratified imputation is intended for categorical covariates, and may be sensitive to the width of the matching window if continuous covariates are used. Conclusions The use of the risk-stratified imputation should facilitate the analysis of many clinical trials, in which one group has a higher withdrawal rate that is related to treatment. PMID:23818434
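A minimal sketch of the stratified imputation idea, assuming a pandas data frame with time, event, treatment and stratum columns, and assuming that each censored subject draws a replacement outcome from same treatment-by-stratum donors whose follow-up extends beyond the censoring time; the paper's actual algorithm, donor pool definition and tie-breaking rules may differ.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def risk_stratified_impute(df):
    """df columns (assumed): time, event (1 = event, 0 = censored), treatment, stratum.
    Censored rows receive an imputed (time, event) drawn from a donor in the same
    treatment-by-covariate stratum who was still at risk at the censoring time."""
    out = df.copy()
    for idx, row in df[df.event == 0].iterrows():
        pool = df[(df.treatment == row.treatment) &
                  (df.stratum == row.stratum) &
                  (df.time > row.time)]                  # donors at risk beyond censoring
        if len(pool) == 0:
            continue                                     # no donor available: leave censored
        donor = pool.sample(1, random_state=int(rng.integers(1_000_000))).iloc[0]
        out.loc[idx, ["time", "event"]] = donor["time"], donor["event"]
    return out
```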
Bugliosi, Edward F.; Miller, Todd S.; Reynolds, Richard J.
2014-01-01
The lithology, areal extent, and the water-table configuration in stratified-drift aquifers in the northern part of the Pony Hollow Creek valley in the Town of Newfield, New York, were mapped as part of an ongoing aquifer mapping program in Tompkins County. Surficial geologic and soil maps, well and test-boring records, light detection and ranging (lidar) data, water-level measurements, and passive-seismic surveys were used to map the aquifer geometry, construct geologic sections, and determine the depth to bedrock at selected locations throughout the valley. Additionally, water-quality samples were collected from selected streams and wells to characterize the quality of surface and groundwater in the study area. Sedimentary bedrock underlies the study area and is overlain by unstratified drift (till), stratified drift (glaciolacustrine and glaciofluvial deposits), and recent postglacial alluvium. The major type of unconsolidated, water-yielding material in the study area is stratified drift, which consists of glaciofluvial sand and gravel, and is present in sufficient amounts in most places to form an extensive unconfined aquifer throughout the study area, which is the source of water for most residents, farms, and businesses in the valleys. A map of the water table in the unconfined aquifer was constructed by using (1) measurements made from the mid-1960s through 2010, (2) control on the altitudes of perennial streams at 10-foot contour intervals from lidar data collected by Tompkins County, and (3) water surfaces of ponds and wetlands that are hydraulically connected to the unconfined aquifer. Water-table contours indicate that the direction of groundwater flow within the stratified-drift aquifer is predominantly from the valley walls toward the streams and ponds in the central part of the valley, where groundwater then flows southwestward (down valley) toward the confluence with the Cayuta Creek valley. Locally, the direction of groundwater flow is radially away from groundwater mounds that have formed beneath upland tributaries that lose water where they flow on alluvial fans on the margins of the valley. In some places, groundwater that would normally flow toward streams is intercepted by pumping wells. Surface-water samples were collected in 2001 at four sites including Carter, Pony Hollow (two sites), and Chafee Creeks, and from six wells throughout the aquifer. Calcium dominates the cation composition and bicarbonate dominates the anion composition in groundwater and surface-water samples, and none of the common inorganic constituents exceeded any Federal or State water-quality standards. Groundwater samples were collected from six wells, all completed in the unconfined sand and gravel aquifer. Concentrations of calcium and magnesium dominated the ionic composition of the groundwater in all wells sampled. Nitrate, orthophosphate, and trace metals were detected in all groundwater samples, but none exceeded U.S. Environmental Protection Agency or New York State Department of Health regulatory limits.
Scalable implementation of boson sampling with trapped ions.
Shen, C; Zhang, Z; Duan, L-M
2014-02-07
Boson sampling solves a classically intractable problem by sampling from a probability distribution given by matrix permanents. We propose a scalable implementation of boson sampling using local transverse phonon modes of trapped ions to encode the bosons. The proposed scheme allows deterministic preparation and high-efficiency readout of the bosons in the Fock states and universal mode mixing. With the state-of-the-art trapped ion technology, it is feasible to realize boson sampling with tens of bosons by this scheme, which would outperform the most powerful classical computers and constitute an effective disproof of the famous extended Church-Turing thesis.
Pendleton, G.W.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation in detection probabilities and lack of independence among sample points can bias estimates and measures of precision. All of these factors should be considered when using point count methods.
Generation and coherent detection of QPSK signal using a novel method of digital signal processing
NASA Astrophysics Data System (ADS)
Zhao, Yuan; Hu, Bingliang; He, Zhen-An; Xie, Wenjia; Gao, Xiaohui
2018-02-01
We demonstrate an optical quadrature phase-shift keying (QPSK) signal transmitter and an optical receiver for demodulating the optical QPSK signal with homodyne detection and digital signal processing (DSP). DSP in the homodyne detection scheme is employed without locking the phase of the local oscillator (LO). In this paper, we present a down-sampling method that extracts a one-dimensional array of samples to reduce unwanted samples in the constellation diagram measurement. This scheme has the following major advantages over conventional optical QPSK signal detection methods. First, the homodyne detection scheme does not place strict requirements on the LO, in contrast with linear optical sampling, which requires a flat spectral density and phase over the spectral support of the source under test. Second, LabVIEW software is used directly to recover the QPSK signal constellation without employing a complex DSP circuit. Third, the scheme is applicable to multilevel modulation formats such as M-ary PSK and quadrature amplitude modulation (QAM), or to higher speed signals, with only minor changes.
Distributed database kriging for adaptive sampling (D²KAS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; ...
2015-03-18
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality-aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
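The kriging prediction used alongside the table lookup can be illustrated with a simple-kriging estimate under an assumed exponential covariance; the covariance model, its hyperparameters and the locality-aware neighbor selection used in D²KAS are not specified here, so this is only a sketch.

```python
import numpy as np

def simple_kriging(X, y, x_star, sigma2=1.0, ell=1.0):
    """Estimate the value and its variance at x_star as a covariance-weighted
    average of neighboring points (X, y); an exponential covariance is assumed."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sigma2 * np.exp(-d / ell)

    K = cov(X, X) + 1e-10 * np.eye(len(X))    # jitter for numerical stability
    k = cov(X, x_star[None, :])[:, 0]
    w = np.linalg.solve(K, k)                 # kriging weights
    mean = w @ (y - y.mean()) + y.mean()      # kriging of residuals around the sample mean
    var = sigma2 - k @ w                      # predictive uncertainty (error estimate)
    return mean, var
```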
An Analysis of Job Satisfaction Among Public, College or University, and Special Librarians.
ERIC Educational Resources Information Center
Miniter, John J.
Usable data relating to six elements of job satisfaction (work, supervision, people, pay, promotion, and total satisfaction) were collected from 190 of a total sample of 310 librarians, chosen by stratified random sampling techniques from library association membership lists. The librarians, both male and female, represented three types of…
Factors Influencing Mathematic Problem-Solving Ability of Sixth Grade Students
ERIC Educational Resources Information Center
Pimta, Sakorn; Tayraukham, Sombat; Nuangchalerm, Prasart
2009-01-01
Problem statement: This study aims to investigate factors influencing the mathematic problem-solving ability of sixth grade students. One thousand and twenty-eight sixth grade students, studying in the second semester of the 2007 academic year, were sampled by a stratified random sampling technique. Approach: The research instruments used in the study…
Community of Inquiry Method and Language Skills Acquisition: Empirical Evidence
ERIC Educational Resources Information Center
Preece, Abdul Shakhour Duncan
2015-01-01
The study investigates the effectiveness of community of inquiry method in preparing students to develop listening and speaking skills in a sample of junior secondary school students in Borno state, Nigeria. A sample of 100 students in standard classes was drawn in one secondary school in Maiduguri metropolis through stratified random sampling…
An Australian Version of the Neighborhood Environment Walkability Scale: Validity Evidence
ERIC Educational Resources Information Center
Cerin, Ester; Leslie, Eva; Owen, Neville; Bauman, Adrian
2008-01-01
This study examined validity evidence for the Australian version of the Neighborhood Environment Walkability Scale (NEWS-AU). A stratified two-stage cluster sampling design was used to recruit 2,650 adults from Adelaide (Australia). The sample was drawn from residential addresses within eight high-walkable and eight low-walkable suburbs matched…
Production ecology of Thuja occidentalis
Philip V. Hofmeyer; Robert S. Seymour; Laura S. Kenefic
2010-01-01
Equations to predict branch and tree leaf area, foliar mass, and stemwood volume were developed from 25 destructively sampled northern white-cedar (Thuja occidentalis L.) trees, a species whose production ecology has not been studied. Resulting models were applied to a large sample of 296 cored trees from 60 sites stratified across a soil gradient...
A Study of the Effects of an Altered Workweek.
ERIC Educational Resources Information Center
Wood Educational Consultants, Edmonton (Alberta).
The purpose of this study was to examine the effects of organizational change arising from alterations in the structuring of the workweek. Data were collected from a stratified random sample of management and nonmanagement personnel employed within the various branches of the Alberta Department of Education. The sample consisted of 132 standard…
Academic Optimism and Organizational Citizenship Behaviour amongst Secondary School Teachers
ERIC Educational Resources Information Center
Makvandi, Abdollah; Naderi, Farah; Makvandi, Behnam; Pasha, Reza; Ehteshamzadeh, Parvin
2018-01-01
The purpose of the study was to investigate the simple and multiple relationships between academic optimism and organizational-citizenship behavior amongst high school teachers in Ramhormoz, Iran. The sample consisted of 250 (125 female and 125 male) teachers, selected by stratified random sampling in 2016- 2017. The measurement tools included…
ERIC Educational Resources Information Center
Diehl, Manfred; Chui, Helena; Hay, Elizabeth L.; Lumley, Mark A.; Grühn, Daniel; Labouvie-Vief, Gisela
2014-01-01
This study examined longitudinal changes in coping and defense mechanisms in an age- and gender-stratified sample of 392 European American adults. Nonlinear age-related changes were found for the coping mechanisms of sublimation and suppression and the defense mechanisms of intellectualization, doubt, displacement, and regression. The change…
ERIC Educational Resources Information Center
Akbulut, Yavuz; Odabasi, H. Ferhan; Kuzu, Abdullah
2011-01-01
This study explored the views of pre-service teachers regarding the indicators of information and communication technologies (ICT) at Turkish education faculties. A cross-sectional survey design was implemented with graduating students enrolled in Turkish education faculties. A combination of stratified random sampling and systematic sampling was…
ERIC Educational Resources Information Center
Yihong, Gao; Yuan, Zhao; Ying, Cheng; Yan, Zhou
2007-01-01
This study investigated the relationship between English learning motivation types and self-identity changes among university students in the People's Republic of China. The sample obtained from a stratified sampling consisted of 2,278 undergraduates from 30 universities in 29 regions. The instrument was a Likert-scale questionnaire which included…
Kim, Hee-Young; Kim, Seung-Kyu; Kang, Dong-Mug; Hwang, Yong-Sik; Oh, Jeong-Eun
2014-02-01
Serum samples were collected from volunteers of various ages and both genders using a proportionate stratified sampling method, to assess the exposure of the general population in Busan, South Korea to perfluorinated compounds (PFCs). 16 PFCs were investigated in serum samples from 306 adults (124 males and 182 females) and one day composite diet samples (breakfast, lunch, and dinner) from 20 of the serum donors, to investigate the relationship between food and serum PFC concentrations. Perfluorooctanoic acid and perfluorooctanesulfonic acid were the dominant PFCs in the serum samples, with mean concentrations of 8.4 and 13 ng/mL, respectively. Perfluorotridecanoic acid was the dominant PFC in the composite food samples, ranging from
Song, Y; Wang, M; Xie, J; Li, W; Zhang, X; Wang, T; Tan, G
2015-11-01
To investigate the prevalence of allergic rhinitis among elementary and middle school students and examine its impact on their quality of life. Stratified sampling and cluster sampling surveys were performed among 10-17-year-old students in Changsha city from June 2011 to April 2012. In the stratified sampling survey, the self-reported allergic rhinitis rate was 42.5 per cent. Further examination demonstrated that the average prevalence of allergic rhinitis was 19.4 per cent. The cluster sampling survey demonstrated that 214 of 814 students appeared to be atopic (26.3 per cent). The prevalence of allergic rhinitis and asthma was 17.2 and 2.1 per cent, respectively. In total, 71 atopic individuals (8.7 per cent) were without any symptoms of allergic disease. Further analysis showed that allergic rhinitis influenced the students' sleep, emotions and memory (p < 0.001). The prevalence of allergic rhinitis was 15.8-19.4 per cent, showing an increase with age. Allergic rhinitis affected students' sleep, emotions and memory.
Simplified two-dimensional microwave imaging scheme using metamaterial-loaded Vivaldi antenna
NASA Astrophysics Data System (ADS)
Johari, Esha; Akhter, Zubair; Bhaskar, Manoj; Akhtar, M. Jaleel
2017-03-01
In this paper, a highly efficient, low-cost scheme for two-dimensional microwave imaging is proposed. To this end, the AZIM (anisotropic zero index metamaterial) cell-loaded Vivaldi antenna is designed and tested as effective electromagnetic radiation beam source required in the microwave imaging scheme. The designed antenna is first individually tested in the anechoic chamber, and its directivity along with the radiation pattern is obtained. The measurement setup for the imaging here involves a vector network analyzer, the AZIM cell-loaded ultra-wideband Vivaldi antenna, and other associated microwave components. The potential of the designed antenna for the microwave imaging is tested by first obtaining the two-dimensional reflectivity images of metallic samples of different shapes placed in front of the antenna, using the proposed scheme. In the next step, these sets of samples are hidden behind wooden blocks of different thicknesses and the reflectivity image of the test media is reconstructed by using the proposed scheme. Finally, the reflectivity images of various dielectric samples (Teflon, Plexiglas, permanent magnet moving coil) along with the copper sheet placed on a piece of cardboard are reconstructed by using the proposed setup. The images obtained for each case are plotted and compared with the actual objects, and a close match is observed which shows the applicability of the proposed scheme for through-wall imaging and the detection of concealed objects.
Grady, S.J.; Weaver, M.F.
1988-01-01
The stratified-drift aquifers that underlie 7.9 sq mi of the Potatuck and 12.7 sq mi of the Pomperaug River valley, CT, consist primarily of sand and gravel deposits up to 150 ft thick. Average horizontal hydraulic conductivity of the stratified drift ranges from 20 to 170 ft/day, and groundwater flows through the aquifers at an average rate of 2 to 3 ft/day. Land use in the study areas is changing from primarily undeveloped or agricultural lands to expanding residential, commercial, and light-industrial uses. Water quality data for 1923-82, which include 127 partial chemical analyses of groundwater samples from 38 wells in the two aquifers, were augmented by sampling during 1985 from 21 new stainless-steel wells for selected major inorganic constituents, trace elements, and organic chemicals. Nonparametric statistical procedures were used to compare the water quality data from four land use areas, for the two sampling periods, and between the two aquifers. Human activities associated with agricultural, residential, and industrial/commercial land uses have affected the quality of water in the stratified-drift aquifers underlying these land use areas. Statistical comparisons of water quality data between land use areas show significant differences, with the apparent relations between land use and groundwater being: (1) median concentrations of most groundwater constituents are smallest in undeveloped areas; (2) groundwater in agricultural areas has the largest median sulfate and total ammonia plus organic nitrogen concentrations, and agricultural areas are also characterized by groundwater with significantly greater median specific conductance, noncarbonate hardness, carbon dioxide, and magnesium concentrations relative to undeveloped areas; (3) median concentrations of most major inorganic constituents, excluding potassium, sulfate, and total ammonia plus organic nitrogen, are greater in groundwater in residential areas than in undeveloped and agricultural areas; and (4) groundwater in industrial/commercial areas has the greatest median specific conductance, pH, carbon dioxide, calcium, magnesium, chloride, bicarbonate, dissolved solids, boron, and strontium concentrations. (Author's abstract)
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.
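As an illustration of the multinomial observation model, the sketch below evaluates the marginal log-likelihood of removal-sampling counts at a single site under Poisson abundance; the truncation point n_max and the exact parameterization are assumptions made for this example and do not reproduce the book's BUGS or unmarked code.

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import gammaln

def removal_loglik(counts, lam, p, n_max=200):
    """Log-likelihood of removal counts y_1..y_T at one site under a multinomial
    N-mixture: N ~ Poisson(lam), per-pass detection probability p, and
    multinomial cell probabilities pi_t = p * (1 - p)**(t - 1)."""
    counts = np.asarray(counts)
    T = len(counts)
    pi = p * (1 - p) ** np.arange(T)          # probability of first detection on pass t
    n_det = counts.sum()
    ll = -np.inf
    for N in range(n_det, n_max + 1):         # marginalize over the latent abundance N
        log_multinom = (gammaln(N + 1) - gammaln(N - n_det + 1)
                        - gammaln(counts + 1).sum()
                        + (counts * np.log(pi)).sum()
                        + (N - n_det) * np.log(1 - pi.sum()))
        ll = np.logaddexp(ll, poisson.logpmf(N, lam) + log_multinom)
    return ll
```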
Tian, Wei; Han, Xu; Zuo, Wangda; ...
2018-01-31
This paper presents a comprehensive review of the open literature on motivations, methods and applications of linking stratified airflow simulation to building energy simulation (BES). First, we reviewed the motivations for coupling prediction models for building energy and indoor environment. This review classified various exchanged data in different applications as interface data and state data, and found that choosing different data sets may lead to varying performance of stability, convergence, and speed for the co-simulation. Second, our review shows that an external coupling scheme is substantially more popular in implementations of co-simulation than an internal coupling scheme. The external coupling is shown to be generally faster in computational speed, as well as easier to implement, maintain and expand than the internal coupling. Third, the external coupling can be carried out in different data synchronization schemes, including static coupling and dynamic coupling. In comparison, the static coupling that performs data exchange only once is computationally faster and more stable than the dynamic coupling. However, concerning accuracy, the dynamic coupling that requires multiple times of data exchange is more accurate than the static coupling. Furthermore, the review identified that the implementation of the external coupling can be achieved through customized interfaces, middleware, and standard interfaces. The customized interface is straightforward but may be limited to a specific coupling application. The middleware is versatile and user-friendly but usually limited in data synchronization schemes. The standard interface is versatile and promising, but may be difficult to implement. Current applications of the co-simulation are mainly energy performance evaluation and control studies. Finally, we discussed the limitations of the current research and provided an overview for future research.
NASA Astrophysics Data System (ADS)
Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin
2018-02-01
Objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation combined with an image feature extraction framework to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into high- and low-risk groups for having cancer detected at the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove useless areas of the mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to the frequency characteristics of the ROIs was computed from the discrete cosine transform and the spatial domain of the images. Third, a support vector machine based machine learning classifier was used to classify the selected optimal image features and build a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out cross-validation method. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70+/-0.04, significantly higher than extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63+/-0.04). This study demonstrated that the proposed approach can improve accuracy in predicting short-term breast cancer risk, which may play an important role in helping to eventually establish an optimal personalized breast cancer paradigm.
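A sketch of the feature-and-classifier step under stated assumptions: 2-D DCT coefficients as frequency features and a support vector machine evaluated with leave-one-case-out cross-validation; the variables segmented_rois and y are placeholders, and the paper's actual feature selection, spatial-domain features and classifier settings are not reproduced here.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def dct_features(roi, n_coeff=43):
    """Illustrative frequency features: keep the first n_coeff coefficients of the
    2-D DCT of a segmented ROI (zig-zag ordering is skipped for brevity)."""
    c = dct(dct(roi, axis=0, norm="ortho"), axis=1, norm="ortho")
    return c.flatten()[:n_coeff]

# X: one feature vector per "prior" negative case; y: 1 if cancer was detected at the
# next screening, 0 otherwise. Both segmented_rois and y are assumed to exist.
X = np.array([dct_features(roi) for roi in segmented_rois])
scores = cross_val_predict(SVC(kernel="linear", probability=True), X, y,
                           cv=LeaveOneOut(), method="predict_proba")[:, 1]
```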
Enhanced Conformational Sampling of N-Glycans in Solution with Replica State Exchange Metadynamics.
Galvelis, Raimondas; Re, Suyong; Sugita, Yuji
2017-05-09
Molecular dynamics (MD) simulation of an N-glycan in solution is challenging because of the high-energy barriers of the glycosidic linkages, functional group rotational barriers, and numerous intra- and intermolecular hydrogen bonds. In this study, we apply different enhanced conformational sampling approaches, namely, metadynamics (MTD), replica-exchange MD (REMD), and the recently proposed replica state exchange MTD (RSE-MTD), to an N-glycan in solution and compare the conformational sampling efficiencies of the approaches. MTD helps to cross the high-energy barrier along the ω angle by utilizing a bias potential, but it cannot enhance sampling of the other degrees of freedom. REMD ensures moderate-energy barrier crossings by exchanging temperatures between replicas, while it hardly crosses the barriers along ω. In contrast, RSE-MTD succeeds in crossing the high-energy barrier along ω as well as in enhancing sampling of the other degrees of freedom. We tested two RSE-MTD schemes: in one scheme, 64 replicas were simulated with the bias potential along ω at different temperatures, while in the other, four replicas were simulated with bias potentials for different CVs at 300 K. In both schemes, one unbiased replica at 300 K was included to compute conformational properties of the glycan. The conformational sampling of the former scheme is better than that of the other enhanced sampling methods, while the latter shows reasonable performance without requiring large computational resources. The latter scheme is likely to be useful when an N-glycan-attached protein is simulated.
Performance of four turbulence closure models implemented using a generic length scale method
Warner, J.C.; Sherwood, C.R.; Arango, H.G.; Signell, R.P.
2005-01-01
A two-equation turbulence model (one equation for turbulence kinetic energy and a second for a generic turbulence length-scale quantity) proposed by Umlauf and Burchard [J. Marine Research 61 (2003) 235] is implemented in a three-dimensional oceanographic model (Regional Oceanographic Modeling System; ROMS v2.0). These two equations, along with several stability functions, can represent many popular turbulence closures, including the k-kl (Mellor-Yamada Level 2.5), k-ε, and k-ω schemes. The implementation adds flexibility to the model by providing an unprecedented range of turbulence closure selections in a single 3D oceanographic model and allows comparison and evaluation of turbulence models in an otherwise identical numerical environment. This also allows evaluation of the effect of turbulence models on other processes such as suspended-sediment distribution or ecological processes. Performance of the turbulence models and sediment-transport schemes is investigated with three test cases for (1) steady barotropic flow in a rectangular channel, (2) wind-induced surface mixed-layer deepening in a stratified fluid, and (3) oscillatory stratified pressure-gradient driven flow (estuarine circulation) in a rectangular channel. Results from k-ε, k-ω, and gen (a new closure proposed by Umlauf and Burchard [J. Marine Research 61 (2003) 235]) are very similar for these cases, but the k-kl closure results depend on a wall-proximity function that must be chosen to suit the flow. Greater variations appear in simulations of suspended-sediment concentrations than in salinity simulations because the transport of suspended sediment amplifies minor variations in the methods. The amplification is caused by the added physics of a vertical settling rate, bottom-stress-dependent resuspension, and diffusive transport of sediment in regions of well mixed salt and temperature. Despite the amplified sensitivity of sediment to turbulence models in the estuary test case, the four closures investigated here all generated estuarine turbidity maxima that were similar in their shape, location, and concentrations.
Precompetitive achievement goals, stress appraisals, emotions, and coping among athletes.
Nicholls, Adam R; Perry, John L; Calmeiro, Luis
2014-10-01
Grounded in Lazarus's (1991, 1999, 2000) cognitive-motivational-relational theory of emotions, we tested a model of achievement goals, stress appraisals, emotions, and coping. We predicted that precompetitive achievement goals would be associated with appraisals, appraisals with emotions, and emotions with coping in our model. The mediating effects of emotions among the overall sample of 827 athletes and two stratified random subsamples were also explored. The results of this study support our proposed model in the overall sample and the stratified subsamples. Further, emotion mediated the relationship between appraisal and coping. Mediation analyses revealed that there were indirect effects of pleasant and unpleasant emotions, which indicates the importance of examining multiple emotions to reveal a more accurate representation of the overall stress process. Our findings indicate that both appraisals and emotions are just as important in shaping coping.
Development of WAIS-III General Ability Index Minus WMS-III memory discrepancy scores.
Lange, Rael T; Chelune, Gordon J; Tulsky, David S
2006-09-01
Analysis of the discrepancy between intellectual functioning and memory ability has received some support as a useful means for evaluating memory impairment. In recent additions to Wechsler scale interpretation, the WAIS-III General Ability Index (GAI) and the WMS-III Delayed Memory Index (DMI) were developed. The purpose of this investigation is to develop base rate data for GAI-IMI, GAI-GMI, and GAI-DMI discrepancy scores using data from the WAIS-III/WMS-III standardization sample (weighted N = 1250). Base rate tables were developed using the predicted-difference method and two simple-difference methods (i.e., stratified and non-stratified). These tables provide valuable data for clinical reference purposes to determine the frequency of GAI-IMI, GAI-GMI, and GAI-DMI discrepancy scores in the WAIS-III/WMS-III standardization sample.
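A non-stratified simple-difference base rate table can be sketched as below; the inputs are assumed arrays of GAI and memory index scores from a standardization sample, and the stratified and predicted-difference variants used in the paper would additionally require ability stratification and regression-predicted memory scores, which are omitted here.

```python
import numpy as np
import pandas as pd

def discrepancy_base_rates(gai, memory_index):
    """Simple-difference base rates: for each observed GAI-minus-memory discrepancy,
    report the percentage of the (assumed) standardization sample showing a
    discrepancy at least that large."""
    diff = np.asarray(gai) - np.asarray(memory_index)
    values = np.sort(np.unique(diff))
    cum_pct = [(diff >= v).mean() * 100 for v in values]
    return pd.DataFrame({"discrepancy": values, "pct_at_or_above": cum_pct})
```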
Temporal variation of phytoplankton in a small tropical crater lake, Costa Rica.
Umaña-Villalobos, Gerardo
2010-12-01
The temporal variation in a lake's phytoplankton is important for understanding its general biodiversity. For tropical lakes, it has been hypothesized that they follow a pattern similar to temperate lakes, on a much accelerated pace; nevertheless, few case studies have tried to elucidate this. Most studies in Costa Rica have used a monthly sampling scheme and failed to show the expected changes. In this study, the phytoplankton of the small Barva crater lake was followed for more than three years, first with monthly and later with weekly samplings that covered almost two years. Additional information on temperature and oxygen vertical profiles was obtained on a monthly basis, and surface temperature was measured during weekly samplings around noon. Results showed that in spite of its shallow condition (max. depth: 7 m) and low surface temperature (11 to 19 degrees C), the lake stratifies at least for brief periods. The phytoplankton showed both periods of rapid change and prolonged periods of relative stasis. The plankton composition fluctuated among three main phases: one characterized by the abundance of small-sized desmids (Staurastrum paradoxum, Cosmarium asphaerosporum), a second phase dominated by equally small cryptomonads (Chryptochrysis minor, Chroomonas sp.), and a third phase dominated by the green alga Eutetramorus tetrasporus. Although the data showed that monthly sampling could miss short-term events, the temporal variation did not follow the typical dry and rainy seasons of the region, or any particular annual pattern. Year-to-year variation was high. As this small lake is located at the summit of Barva Volcano and receives the influence of both Caribbean and Pacific weather, seasonality at the lake is not as clearly defined as in the rest of the country, and short-term variations in the local weather might have a stronger effect than broad seasonal trends. The occurrence of these short-term changes in the phytoplankton of small tropical lakes in response to weather variations needs to be further explored in other lakes.
NASA Astrophysics Data System (ADS)
Rapaka, Narsimha R.; Sarkar, Sutanu
2016-10-01
A sharp-interface Immersed Boundary Method (IBM) is developed to simulate density-stratified turbulent flows in complex geometry using a Cartesian grid. The basic numerical scheme corresponds to a central second-order finite difference method, third-order Runge-Kutta integration in time for the advective terms and an alternating direction implicit (ADI) scheme for the viscous and diffusive terms. The solver developed here allows for both direct numerical simulation (DNS) and large eddy simulation (LES) approaches. Methods to enhance the mass conservation and numerical stability of the solver to simulate high Reynolds number flows are discussed. Convergence with second-order accuracy is demonstrated in flow past a cylinder. The solver is validated against past laboratory and numerical results in flow past a sphere, and in channel flow with and without stratification. Since topographically generated internal waves are believed to result in a substantial fraction of turbulent mixing in the ocean, we are motivated to examine oscillating tidal flow over a triangular obstacle to assess the ability of this computational model to represent nonlinear internal waves and turbulence. Results in laboratory-scale (order of few meters) simulations show that the wave energy flux, mean flow properties and turbulent kinetic energy agree well with our previous results obtained using a body-fitted grid (BFG). The deviation of IBM results from BFG results is found to increase with increasing nonlinearity in the wave field that is associated with either increasing steepness of the topography relative to the internal wave propagation angle or with the amplitude of the oscillatory forcing. LES is performed on a large scale ridge, of the order of few kilometers in length, that has the same geometrical shape and same non-dimensional values for the governing flow and environmental parameters as the laboratory-scale topography, but significantly larger Reynolds number. A non-linear drag law is utilized in the large-scale application to parameterize turbulent losses due to bottom friction at high Reynolds number. The large scale problem exhibits qualitatively similar behavior to the laboratory scale problem with some differences: slightly larger intensification of the boundary flow and somewhat higher non-dimensional values for the energy fluxed away by the internal wave field. The phasing of wave breaking and turbulence exhibits little difference between small-scale and large-scale obstacles as long as the important non-dimensional parameters are kept the same. We conclude that IBM is a viable approach to the simulation of internal waves and turbulence in high Reynolds number stratified flows over topography.
Robin M. Reich; Hans T. Schreuder
2006-01-01
The sampling strategy involving both statistical and in-place inventory information is presented for the natural resources project of the Green Belt area (Centuron Verde) in the Mexican state of Jalisco. The sampling designs used were a grid-based ground sample of a 90 x 90 m plot and a two-stage stratified sample of 30 x 30 m plots. The data collected were used to...
Bhave, Sampada; Lingala, Sajan Goud; Newell, John D; Nagle, Scott K; Jacob, Mathews
2016-06-01
The objective of this study was to increase the spatial and temporal resolution of dynamic 3-dimensional (3D) magnetic resonance imaging (MRI) of lung volumes and diaphragm motion. To achieve this goal, we evaluated the utility of the proposed blind compressed sensing (BCS) algorithm to recover data from highly undersampled measurements. We evaluated the performance of the BCS scheme in recovering dynamic data sets from retrospectively and prospectively undersampled measurements. We also compared its performance against that of view-sharing, the nuclear norm minimization scheme, and the l1 Fourier sparsity regularization scheme. Quantitative experiments were performed on a healthy subject using a fully sampled 2D data set with uniform radial sampling, which was retrospectively undersampled with 16 radial spokes per frame to correspond to an undersampling factor of 8. The images obtained from the 4 reconstruction schemes were compared with the fully sampled data using mean square error and normalized high-frequency error metrics. The schemes were also compared using prospective 3D data acquired on a Siemens 3 T TIM TRIO MRI scanner on 8 healthy subjects during free breathing. Two expert cardiothoracic radiologists (R1 and R2) qualitatively evaluated the reconstructed 3D data sets using a 5-point scale (0-4) on the basis of spatial resolution, temporal resolution, and presence of aliasing artifacts. The BCS scheme gives better reconstructions (mean square error = 0.0232 and normalized high frequency = 0.133) than the other schemes in the 2D retrospective undersampling experiments, producing minimally distorted reconstructions up to an acceleration factor of 8 (16 radial spokes per frame). The prospective 3D experiments show that the BCS scheme provides visually improved reconstructions compared with the other schemes. The BCS scheme provides improved qualitative scores over the nuclear norm and l1 Fourier sparsity regularization schemes in the temporal blurring and spatial blurring categories. The qualitative scores for aliasing artifacts in the images reconstructed by the nuclear norm scheme and the BCS scheme are comparable. The comparisons of tidal volume changes also show that the BCS scheme has less temporal blurring than the nuclear norm minimization scheme and the l1 Fourier sparsity regularization scheme. The minute ventilation estimated by BCS for tidal breathing in the supine position (4 L/min) and the measured supine inspiratory capacity (1.5 L) are in good agreement with the literature. The improved performance of BCS can be explained by its ability to efficiently adapt to the data, thus providing a richer representation of the signal. The feasibility of the BCS scheme was demonstrated for dynamic 3D free-breathing MRI of lung volumes and diaphragm motion. A temporal resolution of ∼500 milliseconds and a spatial resolution of 2.7 × 2.7 × 10 mm, with whole lung coverage (16 slices), were achieved using the BCS scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bjerstedt, T.W.
The Price Formation in southern West Virginia was deposited dominantly in an oxygen-deficient, outer shelf environment along a siliciclastic profile from the basin plain to the alluvial plain. An overturned section at Bluefield, West Virginia, exposes the following lithofacies and environments in ascending order: laminated black silt-shales from the basin plain; a sand-rich submarine fan; outer shelf tempestites of hummocky, cross-stratified fine sandstone and completely bioturbated grayish-black silt-shales; and shoreline sands in transition to thin, dirty coals of the coastal plain and Maccrady red beds of the alluvial plain. Trace fossils are abundant and are best preserved on the soles of hummocky, cross-stratified sandstones. The Zoophycos ichnofacies occurs throughout 80 m of outer shelf deposits, which accumulated above storm wave base. The Zoophycos ichnofacies grades into the nearshore Skolithos ichnofacies with no apparent intervening Cruziana ichnofacies. Most ichnotaxa identified from the outer shelf are fodinichnia or pascichnia. Planar and helical Zoophycos, Helminthopsis, Helminthoida, Scalarituba (neonereites form), and Chondrites are characteristic. In most schemes, the Zoophycos ichnofacies occurs below storm wave base. At Bluefield, it has displaced the Cruziana ichnofacies above storm wave base due to the maintenance of a dysaerobic environment. The abundant organic matter preserved in a density-stratified water column was continually replenished during periods of upwelling. Conditions were extremely favorable for deposit feeders, but inhibiting to suspension feeders that were less tolerant of oxygen stress. The absence of distributary channel sands in the vertical sequence also indicates that offshore environments received no influx of oxygenated waters from the Price delta.
NASA Astrophysics Data System (ADS)
Sodemann, H.; Foken, Th.
2003-04-01
General Circulation Models calculate the energy exchange between surface and atmosphere by means of parameterisations for turbulent fluxes of momentum and heat in the surface layer. However, currently implemented parameterisations after Louis (1979) create large discrepancies between predictions and observational data, especially in stably stratified surface layers. This work evaluates a new surface layer parameterisation proposed by Zilitinkevich et al. (2002), which was specifically developed to improve energy flux predictions in stable stratification. The evaluation comprises a detailed study of important surface layer characteristics, a sensitivity study of the parameterisation, and a direct comparison to observational data from Antarctica and predictions by the Louis (1979) parameterisation. The stability structure of the stable surface layer was found to be very complex, and strongly influenced fluxes in the surface layer. The sensitivity study revealed that the new parameterisation depends strongly on the ratio between roughness length and roughness temperature, which were both observed to be very variable parameters. The comparison between predictions and measurements showed good agreement for momentum fluxes, but large discrepancies for heat fluxes. A stability dependent evaluation of selected data showed better agreement for the new parameterisation of Zilitinkevich et al. (2002) than for the Louis (1979) scheme. Nevertheless, this comparison underlines the need for more detailed and physically sound concepts for parameterisations of heat fluxes in stably stratified surface layers. Zilitinkevich, S. S., V. Perov and J. C. King (2002). "Near-surface turbulent fluxes in stable stratification: Calculation techniques for use in General Circulation Models." Q. J. R. Meteorol. Soc. 128(583): 1571--1587. Louis, J. F. (1979). "A Parametric Model of Vertical Eddy Fluxes in the Atmosphere." Bound.-Layer Meteor. 17(2): 187--202.
Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S
2015-11-10
In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed on a stratified random sample of the first sample. To estimate infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on expectations of the proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivities, specificities, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as were the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence. Copyright © 2015 John Wiley & Sons, Ltd.
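One algebraic route to closed-form moment estimators, under the stated assumptions of 100% specificity for both tests and conditional independence given infection status, is sketched below; the paper's exact estimators and notation may differ.

```python
def mom_prevalence(prop_test1_pos, prop_test2_pos_in_t1pos, prop_test2_pos_in_t1neg):
    """Method-of-moments estimates assuming both tests have 100% specificity and are
    conditionally independent given infection status.
    prop_test1_pos         : proportion positive on test 1 (stage 1 sample)
    prop_test2_pos_in_t1pos: proportion positive on test 2 among test-1 positives (stage 2)
    prop_test2_pos_in_t1neg: proportion positive on test 2 among test-1 negatives (stage 2)
    """
    r, s2, q = prop_test1_pos, prop_test2_pos_in_t1pos, prop_test2_pos_in_t1neg
    # With perfect specificity: P(T1+) = sens1 * p, and test-1 positives are all infected,
    # so the positive fraction on test 2 in that stratum estimates sens2 directly.
    prevalence = r + q * (1 - r) / s2   # P(infected) = P(T1+) + P(infected and T1-)
    sens_test1 = r / prevalence         # since P(T1+) = sens1 * prevalence
    sens_test2 = s2
    return prevalence, sens_test1, sens_test2
```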
Improved diffusion Monte Carlo propagators for bosonic systems using Itô calculus
NASA Astrophysics Data System (ADS)
Håkansson, P.; Mella, M.; Bressanini, Dario; Morosi, Gabriele; Patrone, Marta
2006-11-01
The construction of importance sampled diffusion Monte Carlo (DMC) schemes accurate to second order in the time step is discussed. A central aspect in obtaining efficient second order schemes is the numerical solution of the stochastic differential equation (SDE) associated with the Fokker-Planck equation responsible for the importance sampling procedure. In this work, stochastic predictor-corrector schemes solving the SDE and consistent with Itô calculus are used in DMC simulations of helium clusters. These schemes are numerically compared with alternative algorithms obtained by splitting the Fokker-Planck operator, an approach that we analyze using the analytical tools provided by Itô calculus. The numerical results show that predictor-corrector methods are indeed accurate to second order in the time step and that they present a smaller time step bias and a better efficiency than second order split-operator derived schemes when computing ensemble averages for bosonic systems. The possible extension of the predictor-corrector methods to higher orders is also discussed.
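A generic Heun-type predictor-corrector step for a drift-diffusion SDE with unit diffusion, reusing the same Gaussian increment in both stages, illustrates the class of schemes being compared; it is not the specific propagator of the paper.

```python
import numpy as np

def predictor_corrector_step(x, drift, dt, rng):
    """One Heun-type predictor-corrector step for dx = a(x) dt + dW (unit diffusion).
    The same Gaussian increment dW is used in the predictor and corrector stages.
    This is a generic illustration, not the paper's exact scheme."""
    dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
    x_pred = x + drift(x) * dt + dw                        # predictor (Euler-Maruyama)
    return x + 0.5 * (drift(x) + drift(x_pred)) * dt + dw  # corrector (trapezoidal drift)
```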
Analysis and design of digital output interface devices for gas turbine electronic controls
NASA Technical Reports Server (NTRS)
Newirth, D. M.; Koenig, E. W.
1976-01-01
A trade study was performed on twenty-one digital output interface schemes for gas turbine electronic controls to select the most promising scheme based on criteria of reliability, performance, cost, and sampling requirements. The most promising scheme, a digital effector with optical feedback of the fuel metering valve position, was designed.
Sengaloundeth, Sivong; Green, Michael D; Fernández, Facundo M; Manolin, Ot; Phommavong, Khamlieng; Insixiengmay, Vongsavanh; Hampton, Christina Y; Nyadong, Leonard; Mildenhall, Dallas C; Hostetler, Dana; Khounsaknalath, Lamphet; Vongsack, Latsamy; Phompida, Samlane; Vanisaveth, Viengxay; Syhakhang, Lamphone; Newton, Paul N
2009-01-01
Background Counterfeit oral artesunate has been a major public health problem in mainland SE Asia, impeding malaria control. A countrywide stratified random survey was performed to determine the availability and quality of oral artesunate in pharmacies and outlets (shops selling medicines) in the Lao PDR (Laos). Methods In 2003, 'mystery' shoppers were asked to buy artesunate tablets from 180 outlets in 12 of the 18 Lao provinces. Outlets were selected using stratified random sampling by investigators not involved in sampling. Samples were analysed for packaging characteristics, by the Fast Red Dye test, high-performance liquid chromatography (HPLC), mass spectrometry (MS), X-ray diffractometry and pollen analysis. Results Of 180 outlets sampled, 25 (13.9%) sold oral artesunate. Outlets selling artesunate were more commonly found in the more malarious southern Laos. Of the 25 outlets, 22 (88%; 95%CI 68–97%) sold counterfeit artesunate, as defined by packaging and chemistry. No artesunate was detected in the counterfeits by any of the chemical analysis techniques and analysis of the packaging demonstrated seven different counterfeit types. There was complete agreement between the Fast Red dye test, HPLC and MS analysis. A wide variety of wrong active ingredients were found by MS. Of great concern, 4/27 (14.8%) fakes contained detectable amounts of artemisinin (0.26–115.7 mg/tablet). Conclusion This random survey confirms results from previous convenience surveys that counterfeit artesunate is a severe public health problem. The presence of artemisinin in counterfeits may encourage malaria resistance to artemisinin derivatives. With increasing accessibility of artemisinin-derivative combination therapy (ACT) in Laos, the removal of artesunate monotherapy from pharmacies may be an effective intervention. PMID:19638225
Hierarchical model analysis of the Atlantic Flyway Breeding Waterfowl Survey
Sauer, John R.; Zimmerman, Guthrie S.; Klimstra, Jon D.; Link, William A.
2014-01-01
We used log-linear hierarchical models to analyze data from the Atlantic Flyway Breeding Waterfowl Survey. The survey has been conducted by state biologists each year since 1989 in the northeastern United States from Virginia north to New Hampshire and Vermont. Although yearly population estimates from the survey are used by the United States Fish and Wildlife Service for estimating regional waterfowl population status for mallards (Anas platyrhynchos), black ducks (Anas rubripes), wood ducks (Aix sponsa), and Canada geese (Branta canadensis), they are not routinely adjusted to control for time of day effects and other survey design issues. The hierarchical model analysis permits estimation of year effects and population change while accommodating the repeated sampling of plots and controlling for time of day effects in counting. We compared population estimates from the current stratified random sample analysis to population estimates from hierarchical models with alternative model structures that describe year to year changes as random year effects, a trend with random year effects, or year effects modeled as 1-year differences. Patterns of population change from the hierarchical model results generally were similar to the patterns described by stratified random sample estimates, but significant visibility differences occurred between twilight to midday counts in all species. Controlling for the effects of time of day resulted in larger population estimates for all species in the hierarchical model analysis relative to the stratified random sample analysis. The hierarchical models also provided a convenient means of estimating population trend as derived statistics from the analysis. We detected significant declines in mallard and American black ducks and significant increases in wood ducks and Canada geese, a trend that had not been significant for 3 of these 4 species in the prior analysis. We recommend using hierarchical models for analysis of the Atlantic Flyway Breeding Waterfowl Survey.
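A toy, non-hierarchical version of the log-linear structure, with fixed year and time-of-day effects fitted as a Poisson GLM, is sketched below; the paper's models additionally include random plot and year effects fitted in a Bayesian hierarchical framework, and the data frame counts used here is assumed to exist.

```python
import pandas as pd
import statsmodels.api as sm

# counts: one row per plot visit, with columns 'count', 'year', and 'time_of_day'
# ('twilight' or 'midday'); this data frame is a placeholder for illustration.
design = pd.get_dummies(counts[["year", "time_of_day"]].astype(str), drop_first=True)
design = sm.add_constant(design.astype(float))
model = sm.GLM(counts["count"], design, family=sm.families.Poisson()).fit()
print(model.summary())  # exp(coefficients) give multiplicative year and visibility effects
```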
Sengaloundeth, Sivong; Green, Michael D; Fernández, Facundo M; Manolin, Ot; Phommavong, Khamlieng; Insixiengmay, Vongsavanh; Hampton, Christina Y; Nyadong, Leonard; Mildenhall, Dallas C; Hostetler, Dana; Khounsaknalath, Lamphet; Vongsack, Latsamy; Phompida, Samlane; Vanisaveth, Viengxay; Syhakhang, Lamphone; Newton, Paul N
2009-07-28
Information content of household-stratified epidemics.
Kinyanjui, T M; Pellis, L; House, T
2016-09-01
Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
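As a rough illustration of the entropy-based comparison described above, the following Python sketch scores two hypothetical study designs by the differential entropy of their posterior draws, using a simple Gaussian plug-in approximation; all names and numbers are invented for illustration and are not taken from the paper.

```python
import numpy as np

def gaussian_entropy(posterior_samples):
    """Differential entropy (nats) of a multivariate Gaussian fitted to
    posterior samples (n_draws x n_params); a crude plug-in surrogate for
    the Shannon entropy of the posterior."""
    samples = np.atleast_2d(posterior_samples)
    d = samples.shape[1]
    cov = np.cov(samples, rowvar=False)
    _, logdet = np.linalg.slogdet(np.atleast_2d(cov))
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + logdet)

# Hypothetical posterior draws for two candidate designs (e.g. produced by
# an ABC run elsewhere): frequent follow-up of few households vs. the reverse.
rng = np.random.default_rng(0)
post_design_a = rng.normal([0.3, 1.2], [0.05, 0.20], size=(5000, 2))
post_design_b = rng.normal([0.3, 1.2], [0.08, 0.35], size=(5000, 2))

for name, draws in [("A (frequent sampling)", post_design_a),
                    ("B (sparse sampling)", post_design_b)]:
    print(name, "posterior entropy:", round(gaussian_entropy(draws), 3))
# The design with the lower posterior entropy is the more informative one.
```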
Johnelle Sparks, P
2009-11-01
To examine disparities in low birthweight using a diverse set of racial/ethnic categories and a nationally representative sample. This research explored the degree to which sociodemographic characteristics, health care access, maternal health status, and health behaviors influence birthweight disparities among seven racial/ethnic groups. Binary logistic regression models were estimated using a nationally representative sample of singleton, normal for gestational age births from 2001 using the ECLS-B, which has an approximate sample size of 7,800 infants. The multiple-variable models examine disparities in low birthweight (LBW) for seven racial/ethnic groups, including non-Hispanic white, non-Hispanic black, U.S.-born Mexican-origin Hispanic, foreign-born Mexican-origin Hispanic, other Hispanic, Native American, and Asian mothers. Race-stratified logistic regression models were also examined. In the full sample models, only non-Hispanic black mothers have a LBW disadvantage compared to non-Hispanic white mothers. Maternal WIC usage was protective against LBW in the full models. No prenatal care and adequate-plus prenatal care both increase the odds of LBW. In the race-stratified models, prenatal care adequacy and high maternal health risks are the only variables that influence LBW for all racial/ethnic groups. The race-stratified models highlight the different mechanisms important across the racial/ethnic groups in determining LBW. Differences in the distribution of maternal sociodemographic, health care access, health status, and behavior characteristics by race/ethnicity demonstrate that a single empirical framework may distort associations with LBW for certain racial and ethnic groups. More attention must be given to the specific mechanisms linking maternal risk factors to poor birth outcomes for specific racial/ethnic groups.
ERIC Educational Resources Information Center
Sekar, J. Master Arul; Lawrence, A.S. Arul
2016-01-01
The present study aims to investigate whether there is any significant relationship between adjustment and academic achievement of higher secondary school students. In this survey study, the investigators used a stratified random sampling technique to select the sample from the population. The stratification was done on the basis of gender and…
Assessing accuracy of point fire intervals across landscapes with simulation modelling
Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall
2007-01-01
We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...
Jackknifing Techniques for Evaluation of Equating Accuracy. Research Report. ETS RR-09-39
ERIC Educational Resources Information Center
Haberman, Shelby J.; Lee, Yi-Hsuan; Qian, Jiahe
2009-01-01
Grouped jackknifing may be used to evaluate the stability of equating procedures with respect to sampling error and with respect to changes in anchor selection. Properties of grouped jackknifing are reviewed for simple-random and stratified sampling, and its use is described for comparisons of anchor sets. Application is made to examples of item…
ERIC Educational Resources Information Center
Jalali, Zohreh; Heidari, Alireza
2016-01-01
The research aimed to investigate the relationship between happiness, subjective well-being, creativity and job performance of primary school teachers in Ramhormoz City. Hence, a sample of 330 individuals was selected through random stratified sampling. The research tools included Oxford Happiness Inventory, Subjective Well-being Scale by Keyes…
Sexual Abuse among Female High School Students in Istanbul, Turkey
ERIC Educational Resources Information Center
Alikasifoglu, Mujgan; Erginoz, Ethem; Ercan, Oya; Albayrak-Kaymak, Deniz; Uysal, Omer; Ilter, Ozdemir
2006-01-01
Objective: The objective of the study was to determine the prevalence of sexual abuse in female adolescents in Istanbul, Turkey from data collected as part of a school-based population study on health and health behaviors. Method: A stratified cluster sampling procedure was used for this cross-sectional study. The study sample included 1,955…
ERIC Educational Resources Information Center
Chiner, Esther; Cardona, Maria Cristina
2013-01-01
This study examined regular education teachers' perceptions of inclusion in elementary and secondary schools in Spain and how these perceptions may differ depending on teaching experience, skills, and the availability of resources and supports. Stratified random sampling procedures were used to draw a representative sample of 336 general education…
Strategies for Coping with the Challenges of Incarceration among Nigerian Prison Inmates
ERIC Educational Resources Information Center
Agbakwuru, Chikwe; Awujo, Grace C.
2016-01-01
This paper investigated the strategies for coping with the challenges of incarceration among inmates of Port Harcourt Prison, Nigeria. The population was 2,997 inmates of the prison while the sample was 250 inmates drawn through stratified random sampling technique from the same Port Harcourt prison. Six research questions were posed and data for…
Nonmanufacturing Businesses. U.S. Metric Study Interim Report.
ERIC Educational Resources Information Center
Cornog, June R.; Bunten, Elaine D.
This fifth interim report on the feasibility of a United States changeover to a metric system stems from the U.S. Metric Study. A primary stratified sample of 2,828 nonmanufacturing firms was randomly selected from 28,184 businesses taken from Social Security files, and a secondary sample of 2,258 firms was randomly selected for replacement…
ERIC Educational Resources Information Center
Sezgin, Ferudun; Erdogan, Onur
2015-01-01
This study explores the predictive influence of primary school teachers' academic optimism, hope and zest for work on perceptions of their self-efficacy and success. A total of 600 teachers were selected through stratified sampling from 27 primary schools in central districts of Ankara, Turkey, to form the research sample. Intervariable…
ERIC Educational Resources Information Center
Bruening, Thomas H.; Martin, Robert A.
A sample of 731 farmers was surveyed to identify perceptions regarding selected soil and water conservation practices. The sample was stratified and proportioned by conservation district to have a representative group of respondents across Iowa. Items on the mailed questionnaire were designed to assess perceptions regarding issues in soil and…
ERIC Educational Resources Information Center
Thomas, Hollie B.; Neavill, Arthur
Based on questionnaire data collected from a sample of employers, this phase of a larger research project ascertained employment opportunities in the area of applied biological and agricultural occupations in the metropolitan area of Chicago. Specific fields of business surveyed by stratified random sample were animal care, animal health care,…
Academic Research Equipment in Selected Science Engineering Fields: 1982-83 to 1985-86.
ERIC Educational Resources Information Center
Burgdorf, Kenneth; Chaney, Bradford
This report presents information for identification of the national trends in the amount, age, loss, condition, and perceived adequacy of academic research equipment in selected science and engineering fields. The data were obtained from a stratified probability sample of 55 colleges and universities and from a separately selected sample of 24…
Society Membership Survey: 1986 Salaries.
ERIC Educational Resources Information Center
Skelton, W. Keith; And Others
The fourth in a series of reports produced by the Education and Employment Statistics division of the American Insititute of Physics (AIP) is presented. Data are based on a stratified random sample survey of one-sixth of the U.S. and Canadian membership of the AIP member societies. In the spring of 1986, every individual in the sample received a…
Teaching Aptitude of Student Teachers and their Academic Achievements at Graduate Level
ERIC Educational Resources Information Center
Sajan, K. S.
2010-01-01
The present investigation aims at studying teaching aptitude of student teachers with respect to their gender and academic achievement at graduate level examination. The sample for this study is selected by stratified random sampling from the Teacher Education institutions of Malabar area of Kerala. Teaching Aptitude Test Battery (T A T B)…
Problems of Female School Teachers in District Kulgam (J&K)
ERIC Educational Resources Information Center
Rashid, Ruhee; Maharashi, Santosh Kumar
2015-01-01
The purpose of this study is to find the problems of employed female school teachers in district Kulgam. Sample of 100 employed women are selected from different education institutions as 20 Rehaber e Taleem (ReT) female teachers, 40 female teachers, 20 female masters and 20 female lecturers using stratified random sampling. In this study we use…
ERIC Educational Resources Information Center
Bibiso, Abyot; Olango, Menna; Bibiso, Mesfin
2017-01-01
The purpose of this study was to investigate the relationship between teacher's commitment and female students academic achievement in selected secondary school of Wolaita zone, Southern Ethiopia. The research method employed was survey study and the sampling techniques were purposive, simple random and stratified random sampling. Questionnaire…
ERIC Educational Resources Information Center
Wilson, Gary; Newcomb, L. H.
A study was conducted to determine the relationship of certain motivational appeals to the extent of participation of extension clientele, as perceived by these clientele. A stratified random sample of thirty counties from the ten extension supervisory areas of Ohio was used for the study. This sample provided for 395 adult agricultural clientele…
Examination of Spectral Transformations on Spectral Mixture Analysis
NASA Astrophysics Data System (ADS)
Deng, Y.; Wu, C.
2018-04-01
While many spectral transformation techniques have been applied to spectral mixture analysis (SMA), few studies have examined their necessity and applicability. This paper explored the differences between spectrally transformed schemes and the untransformed scheme to find out which transformed schemes perform better in SMA. In particular, nine spectrally transformed schemes as well as the untransformed scheme were examined in two study areas. Each transformed scheme was tested 100 times using different endmember classes' spectra under the endmember model of vegetation-high albedo impervious surface area-low albedo impervious surface area-soil (V-ISAh-ISAl-S). Performance of each scheme was assessed based on mean absolute error (MAE). A paired-samples t-test was applied to test the significance of the difference in mean MAE between transformed and untransformed schemes. Results demonstrated that only NSMA exceeded the untransformed scheme in all study areas. Some transformed schemes showed unstable performance: they outperformed the untransformed scheme in one area but weakened the SMA result in another.
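The paired comparison of scheme errors described above can be reproduced in miniature as follows; the MAE values are synthetic stand-ins, and scipy's paired t-test is assumed rather than the authors' statistical software.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical MAE values from 100 repeated SMA runs (different endmember
# spectra each time) for an untransformed scheme and one transformed scheme.
mae_untransformed = rng.normal(0.12, 0.015, size=100)
mae_transformed = mae_untransformed - rng.normal(0.005, 0.010, size=100)

t_stat, p_value = stats.ttest_rel(mae_transformed, mae_untransformed)
print(f"mean MAE difference = {np.mean(mae_transformed - mae_untransformed):.4f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
# A significantly negative difference would indicate the transformed scheme
# lowers the unmixing error relative to the untransformed scheme.
```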
NASA Technical Reports Server (NTRS)
Turcotte, Kevin M.; Kramber, William J.; Venugopal, Gopalan; Lulla, Kamlesh
1989-01-01
Previous studies have shown that a good relationship exists between AVHRR Normalized Difference Vegetation Index (NDVI) measurements and both regional-scale patterns of vegetation seasonality and productivity. Most of these studies used known samples of vegetation types. An alternative approach, and the objective here, was to examine the above relationships by analyzing one year of AVHRR NDVI data that was stratified using a small-scale vegetation map of Mexico. The results show that there is a good relationship between AVHRR NDVI measurements and the regional-scale vegetation dynamics of Mexico.
St. Onge, K. R.; Palmé, A. E.; Wright, S. I.; Lascoux, M.
2012-01-01
Most species have at least some level of genetic structure. Recent simulation studies have shown that it is important to consider population structure when sampling individuals to infer past population history. The relevance of the results of these computer simulations for empirical studies, however, remains unclear. In the present study, we use DNA sequence datasets collected from two closely related species with very different histories, the selfing species Capsella rubella and its outcrossing relative C. grandiflora, to assess the impact of different sampling strategies on summary statistics and the inference of historical demography. Sampling strategy did not strongly influence the mean values of Tajima’s D in either species, but it had some impact on the variance. The general conclusions about demographic history were comparable across sampling schemes even when resampled data were analyzed with approximate Bayesian computation (ABC). We used simulations to explore the effects of sampling scheme under different demographic models. We conclude that when sequences from modest numbers of loci (<60) are analyzed, the sampling strategy is generally of limited importance. The same is true under intermediate or high levels of gene flow (4Nm > 2–10) in models in which global expansion is combined with either local expansion or hierarchical population structure. Although we observe a less severe effect of sampling than predicted under some earlier simulation models, our results should not be seen as an encouragement to neglect this issue. In general, a good coverage of the natural range, both within and between populations, will be needed to obtain a reliable reconstruction of a species’s demographic history, and in fact, the effect of sampling scheme on polymorphism patterns may itself provide important information about demographic history. PMID:22870403
Public Participation Guide: Citizen Juries
Citizen juries involve creating a "jury": a representative sample of citizens (usually selected in a random or stratified manner) who are briefed in detail on the background and current thinking relating to a particular issue or project.
OLT-centralized sampling frequency offset compensation scheme for OFDM-PON.
Chen, Ming; Zhou, Hui; Zheng, Zhiwei; Deng, Rui; Chen, Qinghui; Peng, Miao; Liu, Cuiwei; He, Jing; Chen, Lin; Tang, Xionggui
2017-08-07
We propose an optical line terminal (OLT)-centralized sampling frequency offset (SFO) compensation scheme for adaptively-modulated OFDM-PON systems. By using the proposed SFO scheme, the phase rotation and inter-symbol interference (ISI) caused by SFOs between OLT and multiple optical network units (ONUs) can be centrally compensated in the OLT, which reduces the complexity of ONUs. Firstly, the optimal fast Fourier transform (FFT) size is identified in the intensity-modulated and direct-detection (IMDD) OFDM system in the presence of SFO. Then, the proposed SFO compensation scheme including phase rotation modulation (PRM) and length-adaptive OFDM frame has been experimentally demonstrated in the downlink transmission of an adaptively modulated optical OFDM with the optimal FFT size. The experimental results show that up to ± 300 ppm SFO can be successfully compensated without introducing any receiver performance penalties.
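A minimal sketch of the centralized compensation idea, assuming the common first-order model in which a relative SFO zeta rotates subcarrier k of OFDM symbol m by 2*pi*k*zeta*m*(N+Ncp)/N; the parameters and signal below are illustrative assumptions, not the experimental system.

```python
import numpy as np

# All names here are illustrative, not the authors' implementation.
N, Ncp, zeta = 64, 16, 300e-6            # FFT size, cyclic prefix, 300 ppm SFO
n_symbols = 50
k = np.arange(-N // 2, N // 2)            # subcarrier indices
m = np.arange(n_symbols)[:, None]         # symbol indices

qpsk = (1 - 2 * np.random.randint(0, 2, (n_symbols, N))) + \
       1j * (1 - 2 * np.random.randint(0, 2, (n_symbols, N)))
phi = 2 * np.pi * k * zeta * m * (N + Ncp) / N
received = qpsk * np.exp(1j * phi)          # SFO-induced phase rotation
compensated = received * np.exp(-1j * phi)  # rotation removed centrally (e.g. in the OLT)

print("residual error after compensation:", np.max(np.abs(compensated - qpsk)))
```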
An Automated Scheme for the Large-Scale Survey of Herbig-Haro Objects
NASA Astrophysics Data System (ADS)
Deng, Licai; Yang, Ji; Zheng, Zhongyuan; Jiang, Zhaoji
2001-04-01
Owing to their spectral properties, Herbig-Haro (HH) objects can be discovered using photometric methods through a combination of filters, sampling the characteristic spectral lines and the nearby continuum. The data are commonly processed through direct visual inspection of the images. To make data reduction more efficient and the results more uniform and complete, an automated searching scheme for HH objects is developed to manipulate the images using IRAF. This approach helps to extract images with only intrinsic HH emissions. By using this scheme, the pointlike stellar sources and extended nebulous sources with continuum emission can be eliminated from the original images. The objects with only characteristic HH emission become prominent and can be easily picked up. In this paper our scheme is illustrated by a sample field and has been applied to our surveys for HH objects.
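A toy version of the photometric selection step might look like the following, where a scaled continuum frame is subtracted from the narrow-band frame so that only line emitters remain; the scaling factor and threshold are assumptions for illustration, and IRAF itself is not used here.

```python
import numpy as np

def hh_candidate_map(narrowband, continuum, scale, threshold):
    """Minimal sketch of the survey idea: subtract a scaled continuum image
    from a narrow-band (e.g. S II or H-alpha) image so that stars and
    continuum nebulae cancel, leaving pure line emitters such as HH objects.
    'scale' would normally come from photometry of field stars."""
    residual = narrowband - scale * continuum
    return residual > threshold          # boolean map of candidate pixels

# Toy images: a star present in both bands and a line-only emission knot.
continuum = np.zeros((100, 100))
continuum[40, 40] = 1000.0               # star
narrow = 0.8 * continuum                 # star scaled into the narrow band
narrow[70, 20] = 200.0                   # HH-like pure emission knot

candidates = hh_candidate_map(narrow, continuum, scale=0.8, threshold=50.0)
print("candidate pixels:", np.argwhere(candidates))
```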
Lafontaine, Sean J V; Sawada, M; Kristjansson, Elizabeth
2017-02-16
With the expansion and growth of research on neighbourhood characteristics, there is an increased need for direct observational field audits. Herein, we introduce a novel direct observational audit method and systematic social observation instrument (SSOI) for efficiently assessing neighbourhood aesthetics over large urban areas. Our audit method uses spatial random sampling stratified by residential zoning and incorporates both mobile geographic information systems technology and virtual environments. The reliability of our method was tested in two ways: first, in 15 Ottawa neighbourhoods, we compared results at audited locations over two subsequent years, and second; we audited every residential block (167 blocks) in one neighbourhood and compared the distribution of SSOI aesthetics index scores with results from the randomly audited locations. Finally, we present interrater reliability and consistency results on all observed items. The observed neighbourhood average aesthetics index score estimated from four or five stratified random audit locations is sufficient to characterize the average neighbourhood aesthetics. The SSOI was internally consistent and demonstrated good to excellent interrater reliability. At the neighbourhood level, aesthetics is positively related to SES and physical activity and negatively correlated with BMI. The proposed approach to direct neighbourhood auditing performs sufficiently and has the advantage of financial and temporal efficiency when auditing a large city.
Nunes, Rita G; Hajnal, Joseph V
2018-06-01
Point spread function (PSF) mapping enables estimating the displacement fields required for distortion correction of echo planar images. Recently, a highly accelerated approach was introduced for estimating displacements from the phase slope of under-sampled PSF mapping data. Sampling schemes with varying spacing were proposed requiring stepwise phase unwrapping. To avoid unwrapping errors, an alternative approach applying the concept of finite rate of innovation to PSF mapping (FRIP) is introduced, using a pattern search strategy to locate the PSF peak, and the two methods are compared. Fully sampled PSF data was acquired in six subjects at 3.0 T, and distortion maps were estimated after retrospective under-sampling. The two methods were compared for both previously published and newly optimized sampling patterns. Prospectively under-sampled data were also acquired. Shift maps were estimated and deviations relative to the fully sampled reference map were calculated. The best performance was achieved when using FRIP with a previously proposed sampling scheme. The two methods were comparable for the remaining schemes. The displacement field errors tended to be lower as the number of samples or their spacing increased. A robust method for estimating the position of the PSF peak has been introduced.
Analysis of rotary engine combustion processes based on unsteady, three-dimensional computations
NASA Technical Reports Server (NTRS)
Raju, M. S.; Willis, E. A.
1990-01-01
A new computer code was developed for predicting the turbulent and chemically reacting flows with sprays occurring inside of a stratified charge rotary engine. The solution procedure is based on an Eulerian Lagrangian approach where the unsteady, three-dimensional Navier-Stokes equations for a perfect gas mixture with variable properties are solved in generalized, Eulerian coordinates on a moving grid by making use of an implicit finite volume, Steger-Warming flux vector splitting scheme, and the liquid phase equations are solved in Lagrangian coordinates. Both the details of the numerical algorithm and the finite difference predictions of the combustor flow field during the opening of exhaust and/or intake, and also during fuel vaporization and combustion, are presented.
Analysis of rotary engine combustion processes based on unsteady, three-dimensional computations
NASA Technical Reports Server (NTRS)
Raju, M. S.; Willis, E. A.
1989-01-01
A new computer code was developed for predicting the turbulent, and chemically reacting flows with sprays occurring inside of a stratified charge rotary engine. The solution procedure is based on an Eulerian Lagrangian approach where the unsteady, 3-D Navier-Stokes equations for a perfect gas mixture with variable properties are solved in generalized, Eulerian coordinates on a moving grid by making use of an implicit finite volume, Steger-Warming flux vector splitting scheme, and the liquid phase equations are solved in Lagrangian coordinates. Both the details of the numerical algorithm and the finite difference predictions of the combustor flow field during the opening of exhaust and/or intake, and also during fuel vaporization and combustion, are presented.
Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Cuchí Alfaro, Miguel; Pérez Pérez, María Carmen; Montero Martínez, Juan Manuel
2015-08-01
Although emergency department visit forecasting can be of use for nurse staff planning, previous research has focused on models that lacked sufficient resolution and realistic error metrics for these predictions to be applied in practice. Using data from a 1100-bed specialized care hospital with 553,000 patients assigned to its healthcare area, forecasts with different prediction horizons, from 2 to 24 weeks ahead, with an 8-hour granularity, using support vector regression, M5P, and stratified average time-series models were generated with an open-source software package. As overstaffing and understaffing errors have different implications, error metrics and potential personnel monetary savings were calculated with a custom validation scheme, which simulated subsequent generation of predictions during a 4-year period. Results were then compared with a generalized estimating equation regression. Support vector regression and M5P models were found to be superior to the stratified average model at the 95% confidence level. Our findings suggest that medium and severe understaffing situations could be reduced by more than an order of magnitude, and that average yearly savings of up to €683,500 could be achieved, if dynamic nursing staff allocation were performed with support vector regression instead of the static staffing levels currently in use.
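A minimal sketch of this kind of evaluation, using scikit-learn's SVR, a rolling-origin validation loop, and an asymmetric cost that penalizes understaffing more than overstaffing; the series, features, and weights below are synthetic assumptions, not the hospital data or the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
# Hypothetical 8-hour-shift arrival counts with a weekly (21-shift) cycle.
n = 3 * 365 * 3
t = np.arange(n)
y = 40 + 10 * np.sin(2 * np.pi * t / 21) + rng.normal(0, 4, n)
X = np.column_stack([np.sin(2 * np.pi * t / 21), np.cos(2 * np.pi * t / 21)])

def asymmetric_cost(actual, predicted, under_weight=3.0, over_weight=1.0):
    # Understaffing (prediction below demand) is penalised more heavily.
    err = actual - predicted
    return np.mean(np.where(err > 0, under_weight * err, -over_weight * err))

# Rolling-origin validation: refit, then score the next 2-week block (42 shifts).
horizon, start, costs = 42, 2 * 365 * 3, []
for origin in range(start, n - horizon, horizon):
    model = SVR(C=10.0, epsilon=0.5).fit(X[:origin], y[:origin])
    pred = model.predict(X[origin:origin + horizon])
    costs.append(asymmetric_cost(y[origin:origin + horizon], pred))
print("mean asymmetric cost per shift:", round(float(np.mean(costs)), 2))
```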
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maneva, Yana G.; Laguna, Alejandro Alvarez; Poedts, Stefaan
2017-02-20
In order to study chromospheric magnetosonic wave propagation including, for the first time, the effects of ion–neutral interactions in the partially ionized solar chromosphere, we have developed a new multi-fluid computational model accounting for ionization and recombination reactions in gravitationally stratified magnetized collisional media. The two-fluid model used in our 2D numerical simulations treats neutrals as a separate fluid and considers charged species (electrons and ions) within the resistive MHD approach with Coulomb collisions and anisotropic heat flux determined by Braginskii's transport coefficients. The electromagnetic fields are evolved according to the full Maxwell equations and the solenoidality of the magnetic field is enforced with a hyperbolic divergence-cleaning scheme. The initial density and temperature profiles are similar to the VAL III chromospheric model, in which dynamical, thermal, and chemical equilibrium are considered to ensure comparison to existing MHD models and avoid artificial numerical heating. In this initial setup we include a simple homogeneous flux tube magnetic field configuration and an external photospheric velocity driver to simulate the propagation of MHD waves in the partially ionized reactive chromosphere. In particular, we investigate the loss of chemical equilibrium and the plasma heating related to the steepening of fast magnetosonic wave fronts in the gravitationally stratified medium.
Randomization in cancer clinical trials: permutation test and development of a computer program.
Ohashi, Y
1990-01-01
When analyzing cancer clinical trial data where the treatment allocation is done using dynamic balancing methods such as the minimization method for balancing the distribution of important prognostic factors in each arm, conservativeness occurs if such a randomization scheme is ignored and a simple unstratified analysis is carried out. In this paper, the above conservativeness is demonstrated by computer simulation, and the development of a computer program that carries out permutation tests of the log-rank statistics for clinical trial data where the allocation is done by the minimization method or a stratified permuted block design is introduced. We are planning to use this program in practice to supplement a usual stratified analysis and model-based methods such as the Cox regression. The most serious problem in cancer clinical trials in Japan is how to carry out the quality control or data management in trials that are initiated and conducted by researchers without support from pharmaceutical companies. In the final section of this paper, one international collaborative work for developing international guidelines on data management in clinical trials of bladder cancer is briefly introduced, and the differences between the system adopted in US/European statistical centers and the Japanese system is described. PMID:2269216
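The re-randomization idea can be sketched as a permutation test. In the sketch below the stand-in statistic is a simple difference in event proportions rather than the log-rank statistic used by the program described above, and group labels are merely shuffled instead of re-running the minimization algorithm, so this is only a schematic.

```python
import numpy as np

rng = np.random.default_rng(3)

def perm_test(stat_fn, group, *data, n_perm=5000):
    """Generic permutation test: re-label groups at random and recompute the
    test statistic. A faithful re-randomization test would re-run the
    allocation scheme (e.g. minimization) instead of a plain shuffle."""
    observed = stat_fn(group, *data)
    count = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(group)
        if abs(stat_fn(shuffled, *data)) >= abs(observed):
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Stand-in statistic: difference in event proportions between the two arms.
def prop_diff(group, events):
    return events[group == 1].mean() - events[group == 0].mean()

group = rng.integers(0, 2, 120)                       # 0 = control, 1 = treatment
events = rng.binomial(1, np.where(group == 1, 0.35, 0.50))
obs, p = perm_test(prop_diff, group, events)
print(f"observed difference = {obs:.3f}, permutation p = {p:.3f}")
```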
Ma, Jingdong; Xu, Juan; Zhang, Zhiguo; Wang, Jing
2016-05-04
Subsidizing healthcare costs through insurance schemes is crucial to overcome financial barriers to health care and to avoid high medical expenditures for patients in China. Health insurance can decrease financial risk by lowering out-of-pocket (OOP) payments, but it does not guarantee equity of protection. With the growth of New Cooperative Medical Scheme (NCMS) financing and coverage since 2008, the protective effectiveness and equity of the modified NCMS policies with respect to financial burden should be further evaluated. A cross-sectional household survey was conducted in Zhejiang, Hubei, and Chongqing provinces by multi-stage stratified random sampling in 2011. A total of 1,525 households covered by the NCMS were analyzed. The protection effectiveness and protection equity of the NCMS were analyzed by comparing the changes in health care utilization and medical expenditures, and the changes in the prevalence of catastrophic health expenditure (CHE) and its concentration indices (CIs), between pre- and post-NCMS reimbursement, respectively. The medical financial burden was still remarkably high for low-income rural residents in China due to high OOP payments, even after NCMS reimbursement. In Hubei province, the OOP payment of the poorest quintile was almost the same as their households' annual expenditures; in Chongqing municipality, it was even higher than their annual expenditures. The effective reimbursement ratios of both outpatient and inpatient services were far lower than the nominal reimbursement ratios originally designed in the NCMS plans. After NCMS reimbursement, the prevalence of CHE was considerably high in all three provinces, and the absolute values of the CIs were even higher than those before reimbursement, indicating that the inequity was exacerbated. Policymakers should further modify NCMS policy in rural China. High OOP payments could be decreased by expanding the drug list and check directory of the NCMS benefit package to minimize the gap between the nominal and effective reimbursement ratios, and the increase in medical expenditures should be controlled by monitoring excess demand from both medical service providers and patients and by changing fee-for-service payment for providers to a prospective payment system. Service accessibility and affordability for vulnerable rural residents should be protected by modifying regressive financing in the NCMS, and by providing extra financial aid and reimbursement from government.
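The concentration indices referred to above are typically computed with a Wagstaff-style formula, CI = 2*cov(h, r)/mean(h), where r is the fractional rank of households ordered by living standards. The following sketch applies it to synthetic households, purely to illustrate the calculation.

```python
import numpy as np

def concentration_index(health_var, income):
    """Concentration index CI = 2*cov(h, r)/mean(h), with r the fractional
    income rank. Negative values mean the burden (here, catastrophic health
    expenditure) is concentrated among the poor."""
    order = np.argsort(income)
    h = np.asarray(health_var, dtype=float)[order]
    n = len(h)
    rank = (np.arange(1, n + 1) - 0.5) / n          # fractional rank
    return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()

rng = np.random.default_rng(4)
income = rng.lognormal(9.0, 0.7, 1500)
# Hypothetical indicator of catastrophic health expenditure, more likely
# for poorer households both before and after reimbursement.
che_before = rng.binomial(1, np.clip(0.6 - 0.05 * np.log(income), 0.02, 0.9))
che_after = rng.binomial(1, np.clip(0.5 - 0.05 * np.log(income), 0.02, 0.9))
print("CI before reimbursement:", round(concentration_index(che_before, income), 3))
print("CI after reimbursement: ", round(concentration_index(che_after, income), 3))
```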
A novel, privacy-preserving cryptographic approach for sharing sequencing data
Cassa, Christopher A; Miller, Rachel A; Mandl, Kenneth D
2013-01-01
Objective DNA samples are often processed and sequenced in facilities external to the point of collection. These samples are routinely labeled with patient identifiers or pseudonyms, allowing for potential linkage to identity and private clinical information if intercepted during transmission. We present a cryptographic scheme to securely transmit externally generated sequence data which does not require any patient identifiers, public key infrastructure, or the transmission of passwords. Materials and methods This novel encryption scheme cryptographically protects participant sequence data using a shared secret key that is derived from a unique subset of an individual’s genetic sequence. This scheme requires access to a subset of an individual’s genetic sequence to acquire full access to the transmitted sequence data, which helps to prevent sample mismatch. Results We validate that the proposed encryption scheme is robust to sequencing errors, population uniqueness, and sibling disambiguation, and provides sufficient cryptographic key space. Discussion Access to a set of an individual’s genotypes and a mutually agreed cryptographic seed is needed to unlock the full sequence, which provides additional sample authentication and authorization security. We present modest fixed and marginal costs to implement this transmission architecture. Conclusions It is possible for genomics researchers who sequence participant samples externally to protect the transmission of sequence data using unique features of an individual’s genetic sequence. PMID:23125421
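One way such a genotype-derived shared key could work in practice is sketched below with PBKDF2 and Fernet symmetric encryption from the Python cryptography package; the SNP subset, seed, and key-derivation parameters are illustrative assumptions, not the scheme's published construction.

```python
import base64, hashlib
from cryptography.fernet import Fernet  # pip install cryptography

def key_from_genotypes(genotypes, seed, iterations=200_000):
    """Derive a symmetric key from an agreed subset of an individual's
    genotypes plus a mutually agreed seed. Illustrative only: the published
    scheme's exact key-derivation details are not reproduced here."""
    material = ",".join(genotypes).encode()
    raw = hashlib.pbkdf2_hmac("sha256", material, seed, iterations)
    return base64.urlsafe_b64encode(raw)           # 32-byte key for Fernet

# Hypothetical genotype calls at a pre-agreed subset of SNP positions.
snp_subset = ["rs123:AG", "rs456:TT", "rs789:CC", "rs1011:AT"]
seed = b"study-42-shared-seed"

sequencing_lab_key = key_from_genotypes(snp_subset, seed)
ciphertext = Fernet(sequencing_lab_key).encrypt(b"<full sequence payload>")

# The recipient can only decrypt if they hold the same genotype subset.
recipient_key = key_from_genotypes(snp_subset, seed)
print(Fernet(recipient_key).decrypt(ciphertext))
```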
Methods and analysis of realizing randomized grouping.
Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi
2011-07-01
Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: completely, stratified and dynamically randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization and the realization of random sampling and grouping by SAS software.
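The two static categories mentioned, complete and stratified randomized grouping, can be illustrated with a short sketch; the arm labels, block size, and strata below are arbitrary examples, and dynamic schemes such as minimization are not shown.

```python
import random

def complete_randomization(subject_ids, arms=("A", "B"), seed=2011):
    """Complete randomization: every subject is assigned independently."""
    rng = random.Random(seed)
    return {sid: rng.choice(arms) for sid in subject_ids}

def stratified_block_randomization(subjects, block_size=4, seed=2011):
    """Stratified randomization: subjects are grouped by stratum (e.g. centre
    or a prognostic factor) and allocated in balanced permuted blocks."""
    rng = random.Random(seed)
    allocation, strata = {}, {}
    for sid, stratum in subjects:
        strata.setdefault(stratum, []).append(sid)
    for stratum, ids in strata.items():
        for start in range(0, len(ids), block_size):
            block = ["A", "B"] * (block_size // 2)
            rng.shuffle(block)
            for sid, arm in zip(ids[start:start + block_size], block):
                allocation[sid] = arm
    return allocation

print(complete_randomization(range(1, 9)))
print(stratified_block_randomization([(1, "male"), (2, "male"), (3, "female"),
                                      (4, "female"), (5, "male"), (6, "female")]))
```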
NASA Astrophysics Data System (ADS)
Ani, Adi Irfan Che; Sairi, Ahmad; Tawil, Norngainy Mohd; Wahab, Siti Rashidah Hanum Abd; Razak, Muhd Zulhanif Abd
2016-08-01
High demand for housing and limited land in town areas have increased the provision of high-rise residential schemes. This type of housing has different owners but shares the same land lot and common facilities. Thus, maintenance works on the buildings and common facilities must be well organized. The purpose of this paper is to identify and classify basic facilities for high-rise residential buildings, with the aim of improving the management of such schemes. The method adopted is a survey of 100 high-rise residential schemes, ranging from affordable housing to high-cost housing, selected by snowball sampling. The scope of this research is the Kajang area, which is being rapidly developed with high-rise housing. The objective of the survey is to list all facilities in every sampled scheme. The results confirmed that the 11 pre-determined classifications hold true and can provide a realistic classification for high-rise residential schemes. This paper proposes a redefinition of the facilities provided to create a better management system and to give a clear definition of the type of high-rise residential scheme based on its facilities.
Cai, J.; Powell, R.D.; Cowan, E.A.; Carlson, P.R.
1997-01-01
High-resolution seismic-reflection profiles of sediment fill within Tarr Inlet of Glacier Bay, Alaska, show seismic facies changes with increasing distance from the glacial termini. Five types of seismic facies are recognized from analysis of Huntec and minisparker records, and seven lithofacies are determined from detailed sedimentologic study of gravity-, vibro- and box-cores, and bottom grab samples. Lithofacies and seismic facies associations, and fjord-floor morphology allow us to divide the fjord into three sedimentary environments: ice-proximal, iceberg-zone and ice-distal. The ice-proximal environment, characterized by a morainal-bank depositional system, can be subdivided into bank-back, bank-core and bank-front subenvironments, each of which is characterized by a different depositional subsystem. A bank-back subsystem shows chaotic seismic facies with a mounded surface, which we infer consists mainly of unsorted diamicton and poorly sorted coarse-grained sediments. A bank-core depositional subsystem is a mixture of diamicton, rubble, gravel, sand and mud. Seismic-reflection records of this subsystem are characterized by chaotic seismic facies with abundant hyperbolic diffractions and a hummocky surface. A bank-front depositional subsystem consists of mainly stratified and massive sand, and is characterized by internal hummocky facies on seismic-reflection records with significant surface relief and sediment gravity flow channels. The depositional system formed in the iceberg-zone environment consists of rhythmically laminated mud interbedded with thin beds of weakly stratified diamicton and stratified or massive sand and silt. On seismic-reflection profiles, this depositional system is characterized by discontinuously stratified facies with multiple channels on the surface in the proximal zone and a single channel on the largely flat sediment surface in the distal zone. The depositional system formed in the ice-distal environment consists of interbedded homogeneous or laminated mud and massive or stratified sand and coarse silt. This depositional system shows continuously stratified seismic facies with smooth and flat surfaces on minisparker records, and continuously stratified seismic facies which are interlayered with thin weakly stratified facies on Huntec records.
Jacob, Benjamin G; Novak, Robert J; Toe, Laurent; Sanfo, Moussa S; Afriyie, Abena N; Ibrahim, Mohammed A; Griffith, Daniel A; Unnasch, Thomas R
2012-01-01
The standard methods for regression analyses of clustered riverine larval habitat data of Simulium damnosum s.l., a major black-fly vector of onchocerciasis, postulate models relating observational ecological-sampled parameter estimators to prolific habitats without accounting for residual intra-cluster error correlation effects. Generally, this correlation comes from two sources: (1) the design of the random effects and their assumed covariance from the multiple levels within the regression model; and (2) the correlation structure of the residuals. Unfortunately, inconspicuous errors in residual intra-cluster correlation estimates can overstate precision in forecasted S. damnosum s.l. riverine larval habitat explanatory attributes regardless of how they are treated (e.g., independent, autoregressive, Toeplitz, etc.). In this research, the geographical locations for multiple riverine-based S. damnosum s.l. larval ecosystem habitats sampled from 2 pre-established epidemiological sites in Togo were identified and recorded from July 2009 to June 2010. Initially the data was aggregated into proc genmod. An agglomerative hierarchical residual cluster-based analysis was then performed. The sampled clustered study site data was then analyzed for statistical correlations using Monthly Biting Rates (MBR). Euclidean distance measurements and terrain-related geomorphological statistics were then generated in ArcGIS. A digital overlay was then performed, also in ArcGIS, using the georeferenced ground coordinates of high and low density clusters stratified by Annual Biting Rates (ABR). This data was overlain onto multitemporal sub-meter pixel resolution satellite data (i.e., QuickBird 0.61 m wavebands). Orthogonal spatial filter eigenvectors were then generated in SAS/GIS. Univariate and non-linear regression-based models (i.e., Logistic, Poisson and Negative Binomial) were also employed to determine probability distributions and to identify statistically significant parameter estimators from the sampled data. Thereafter, Durbin-Watson test statistics were used to test the null hypothesis that the regression residuals were not autocorrelated against the alternative that the residuals followed an autoregressive process in AUTOREG. Bayesian uncertainty matrices were also constructed employing normal priors for each of the sampled estimators in PROC MCMC. The residuals revealed both spatially structured and unstructured error effects in the high and low ABR-stratified clusters. The analyses also revealed that the estimators, levels of turbidity and presence of rocks, were statistically significant for the high-ABR-stratified clusters, while the estimators, distance between habitats and floating vegetation, were important for the low-ABR-stratified cluster. Varying and constant coefficient regression models, ABR-stratified GIS-generated clusters, sub-meter resolution satellite imagery, a robust residual intra-cluster diagnostic test, MBR-based histograms, eigendecomposition spatial filter algorithms and Bayesian matrices can enable accurate autoregressive estimation of latent uncertainty effects and other residual error probabilities (i.e., heteroskedasticity) for testing correlations between georeferenced S. damnosum s.l. riverine larval habitat estimators.
The asymptotic distribution of the resulting residual adjusted intra-cluster predictor error autocovariate coefficients can thereafter be established while estimates of the asymptotic variance can lead to the construction of approximate confidence intervals for accurately targeting productive S. damnosum s.l habitats based on spatiotemporal field-sampled count data.
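As an illustration of the Durbin-Watson check mentioned above, the sketch below fits an ordinary least-squares model to synthetic habitat covariates with autocorrelated noise and reports the statistic via statsmodels; none of the variables correspond to the actual field data.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(7)
# Hypothetical habitat covariates and larval counts with AR(1) residual noise.
n = 120
turbidity = rng.uniform(0, 1, n)
rocks = rng.integers(0, 2, n)
ar_noise = np.zeros(n)
for i in range(1, n):
    ar_noise[i] = 0.6 * ar_noise[i - 1] + rng.normal(0, 1)
counts = 5 + 4 * turbidity + 2 * rocks + ar_noise

X = sm.add_constant(np.column_stack([turbidity, rocks]))
resid = sm.OLS(counts, X).fit().resid
print("Durbin-Watson statistic:", round(durbin_watson(resid), 2))
# Values well below 2 indicate positive autocorrelation in the residuals,
# i.e. the intra-cluster error structure should not be treated as independent.
```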
A System Approach to Navy Medical Education and Training. Appendix 18. Radiation Technician.
1974-08-31
attrition was forecast to approximate twenty percent, final sample and sub-sample sizes were adjusted accordingly. Stratified random sampling… [The remainder of this excerpt is an OCR-garbled fragment of the radiation technician task inventory, e.g. intravenous pyelograms, renal split function tests, portal films, and reading X-ray films for technical adequacy.]
NASA Astrophysics Data System (ADS)
Doin, Marie-Pierre; Lasserre, Cécile; Peltzer, Gilles; Cavalié, Olivier; Doubre, Cécile
2010-05-01
The main limiting factor on the accuracy of Interferometric SAR measurements (InSAR) comes from phase propagation delays through the troposphere. The delay can be divided into a stratified component, which correlates with the topography and often dominates the tropospheric signal, and a turbulent component. We use Global Atmospheric Models (GAM) to estimate the stratified phase delay and delay-elevation ratio at epochs of SAR acquisitions, and compare them to observed phase delay derived from SAR interferograms. Three test areas are selected with different geographic and climatic environments and with large SAR archive available. The Lake Mead, Nevada, USA is covered by 79 ERS1/2 and ENVISAT acquisitions, the Haiyuan Fault area, Gansu, China, by 24 ERS1/2 acquisitions, and the Afar region, Republic of Djibouti, by 91 Radarsat acquisitions. The hydrostatic and wet stratified delays are computed from GAM as a function of atmospheric pressure P, temperature T, and water vapor partial pressure e vertical profiles. The hydrostatic delay, which depends on ratio P/T, varies significantly at low elevation and cannot be neglected. The wet component of the delay depends mostly on the near surface specific humidity. GAM predicted delay-elevation ratios are in good agreement with the ratios derived from InSAR data away from deforming zones. Both estimations of the delay-elevation ratio can thus be used to perform a first order correction of the observed interferometric phase to retrieve a ground motion signal of low amplitude. We also demonstrate that aliasing of daily and seasonal variations in the stratified delay due to uneven sampling of SAR data significantly bias InSAR data stacks or time series produced after temporal smoothing. In all three test cases, the InSAR data stacks or smoothed time series present a residual stratified delay of the order of the expected deformation signal. In all cases, correcting interferograms from the stratified delay removes all these biases. We quantify the standard error associated with the correction of the stratified atmospheric delay. It varies from one site to another depending on the prevailing atmospheric conditions, but remains bounded by the standard deviation of the daily fluctuations of the stratified delay around the seasonal average. Finally we suggest that the phase delay correction can potentially be improved by introducing a non-linear dependence to the elevation derived from GAM.
NASA Astrophysics Data System (ADS)
Doin, M.-P.; Lasserre, C.; Peltzer, G.; Cavalié, O.; Doubre, C.
2009-09-01
The main limiting factor on the accuracy of Interferometric SAR measurements (InSAR) comes from phase propagation delays through the troposphere. The delay can be divided into a stratified component, which correlates with the topography and often dominates the tropospheric signal, and a turbulent component. We use Global Atmospheric Models (GAM) to estimate the stratified phase delay and delay-elevation ratio at epochs of SAR acquisitions, and compare them to observed phase delay derived from SAR interferograms. Three test areas are selected with different geographic and climatic environments and with large SAR archive available. The Lake Mead, Nevada, USA is covered by 79 ERS1/2 and ENVISAT acquisitions, the Haiyuan Fault area, Gansu, China, by 24 ERS1/2 acquisitions, and the Afar region, Republic of Djibouti, by 91 Radarsat acquisitions. The hydrostatic and wet stratified delays are computed from GAM as a function of atmospheric pressure P, temperature T, and water vapor partial pressure e vertical profiles. The hydrostatic delay, which depends on ratio P/ T, varies significantly at low elevation and cannot be neglected. The wet component of the delay depends mostly on the near surface specific humidity. GAM predicted delay-elevation ratios are in good agreement with the ratios derived from InSAR data away from deforming zones. Both estimations of the delay-elevation ratio can thus be used to perform a first order correction of the observed interferometric phase to retrieve a ground motion signal of low amplitude. We also demonstrate that aliasing of daily and seasonal variations in the stratified delay due to uneven sampling of SAR data significantly bias InSAR data stacks or time series produced after temporal smoothing. In all three test cases, the InSAR data stacks or smoothed time series present a residual stratified delay of the order of the expected deformation signal. In all cases, correcting interferograms from the stratified delay removes all these biases. We quantify the standard error associated with the correction of the stratified atmospheric delay. It varies from one site to another depending on the prevailing atmospheric conditions, but remains bounded by the standard deviation of the daily fluctuations of the stratified delay around the seasonal average. Finally we suggest that the phase delay correction can potentially be improved by introducing a non-linear dependence to the elevation derived from GAM.
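An empirical first-order correction of the kind described, fitting and removing a single delay-elevation ratio, can be sketched as follows on a synthetic interferogram; the GAM-based prediction step is not reproduced here, and the masking of deforming areas is a simplifying assumption.

```python
import numpy as np

def remove_stratified_delay(phase, elevation, deforming_mask=None):
    """Minimal sketch of a first-order stratified-delay correction: fit the
    unwrapped interferometric phase against elevation over pixels assumed
    free of deformation, then subtract the fitted linear trend everywhere.
    A GAM-predicted delay-elevation ratio could be used instead of the fit."""
    phase = np.asarray(phase, dtype=float)
    elev = np.asarray(elevation, dtype=float)
    keep = np.isfinite(phase) & np.isfinite(elev)
    if deforming_mask is not None:
        keep &= ~deforming_mask
    ratio, offset = np.polyfit(elev[keep], phase[keep], 1)
    return phase - (ratio * elev + offset), ratio

# Toy interferogram: stratified delay of -2 rad/km plus noise and a small
# deforming patch that is excluded from the fit.
rng = np.random.default_rng(5)
elevation = rng.uniform(0.0, 3.0, (200, 200))          # km
phase = -2.0 * elevation + rng.normal(0, 0.2, elevation.shape)
mask = np.zeros_like(phase, dtype=bool)
mask[90:110, 90:110] = True
phase[mask] += 1.5                                      # "deformation"

corrected, ratio = remove_stratified_delay(phase, elevation, mask)
print("estimated delay-elevation ratio (rad/km):", round(ratio, 2))
```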
NEKTON-HABITAT ASSOCIATIONS IN A PACIFIC NORTHWEST (USA) ESTUARY
Nekton−habitat associations were determined in Yaquina Bay, Oregon, United States, using a stratified-by-habitat, random, estuary-wide sampling design. Three habitats (intertidal eelgrass [Zostera marina], mud shrimp [Upogebia pugettensis], and ghost shrimp [Neotrypaea californie...
Hawkins, K A; Tulsky, D S
2001-11-01
Since memory performance expectations may be IQ-based, unidirectional base rate data for IQ-Memory Score discrepancies are provided in the WAIS-III/WMS-III Technical Manual. The utility of these data partially rests on the assumption that discrepancy base rates do not vary across ability levels. FSIQ stratified base rate data generated from the standardization sample, however, demonstrate substantial variability across the IQ spectrum. A superiority of memory score over FSIQ is typical at lower IQ levels, whereas the converse is true at higher IQ levels. These data indicate that the use of IQ-memory score unstratified "simple difference" tables could lead to erroneous conclusions for clients with low or high IQ. IQ stratified standardization base rate data are provided as a complement to the "predicted difference" method detailed in the Technical Manual.
SAS procedures for designing and analyzing sample surveys
Stafford, Joshua D.; Reinecke, Kenneth J.; Kaminski, Richard M.
2003-01-01
Complex surveys often are necessary to estimate occurrence (or distribution), density, and abundance of plants and animals for purposes of research and conservation. Most scientists are familiar with simple random sampling, where sample units are selected from a population of interest (sampling frame) with equal probability. However, the goal of ecological surveys often is to make inferences about populations over large or complex spatial areas where organisms are not homogeneously distributed or sampling frames are inconvenient or impossible to construct. Candidate sampling strategies for such complex surveys include stratified, multistage, and adaptive sampling (Thompson 1992, Buckland 1994).
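For reference, the basic stratified estimator that such procedures implement is the weighted stratum mean with variance summed across strata, as in this small sketch; the strata and counts are invented, and the finite-population correction is ignored.

```python
import math

def stratified_mean(strata):
    """Stratified estimator of a population mean and its standard error.
    'strata' maps stratum name -> (stratum population size N_h, list of
    sampled values). Uses y_st = sum(W_h * ybar_h) and
    var = sum(W_h^2 * s_h^2 / n_h)."""
    N = sum(N_h for N_h, _ in strata.values())
    mean, var = 0.0, 0.0
    for N_h, values in strata.values():
        n_h = len(values)
        ybar_h = sum(values) / n_h
        s2_h = sum((v - ybar_h) ** 2 for v in values) / (n_h - 1)
        W_h = N_h / N
        mean += W_h * ybar_h
        var += (W_h ** 2) * s2_h / n_h
    return mean, math.sqrt(var)

# Hypothetical bird counts sampled from three habitat strata.
strata = {
    "marsh":    (500, [12, 9, 15, 11, 8]),
    "open":     (300, [3, 5, 2, 4]),
    "forested": (200, [1, 0, 2]),
}
est, se = stratified_mean(strata)
print(f"stratified mean = {est:.2f} birds/plot, SE = {se:.2f}")
```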
Helgeland, Jon; Kristoffersen, Doris Tove; Skyrud, Katrine Damgaard; Lindman, Anja Schou
2016-01-01
The purpose of this study was to assess the validity of patient administrative data (PAS) for calculating 30-day mortality after hip fracture as a quality indicator, by a retrospective study of medical records. We used PAS data from all Norwegian hospitals (2005-2009), merged with vital status from the National Registry, to calculate 30-day case-mix adjusted mortality for each hospital (n = 51). We used stratified sampling to establish a representative sample of both hospitals and cases. The hospitals were stratified according to high, low and medium mortality of which 4, 3, and 5 hospitals were sampled, respectively. Within hospitals, cases were sampled stratified according to year of admission, age, length of stay, and vital 30-day status (alive/dead). The final study sample included 1043 cases from 11 hospitals. Clinical information was abstracted from the medical records. Diagnostic and clinical information from the medical records and PAS were used to define definite and probable hip fracture. We used logistic regression analysis in order to estimate systematic between-hospital variation in unmeasured confounding. Finally, to study the consequences of unmeasured confounding for identifying mortality outlier hospitals, a sensitivity analysis was performed. The estimated overall positive predictive value was 95.9% for definite and 99.7% for definite or probable hip fracture, with no statistically significant differences between hospitals. The standard deviation of the additional, systematic hospital bias in mortality estimates was 0.044 on the logistic scale. The effect of unmeasured confounding on outlier detection was small to moderate, noticeable only for large hospital volumes. This study showed that PAS data are adequate for identifying cases of hip fracture, and the effect of unmeasured case mix variation was small. In conclusion, PAS data are adequate for calculating 30-day mortality after hip-fracture as a quality indicator in Norway.
ERIC Educational Resources Information Center
Beran, Tanya N.; Lupart, Judy
2009-01-01
The relationship between school achievement and peer harassment was examined using individual and peer characteristics as mediating factors. The sample consisted of adolescents age 12-15 years (n = 4,111) drawn from the Canadian National Longitudinal Survey of Children and Youth, which is a stratified random sample of 22,831 households in Canada.…
Reduction of Racial Disparities in Prostate Cancer
2005-12-01
erectile dysfunction, and female sexual dysfunction). Wherever possible, the questions and scales employed on BACH were selected from published... Methods. A racially and ethnically diverse community-based survey of adults aged 30-79 years in Boston, Massachusetts. The BACH survey has... recruited adults in three racial/ethnic groups: Latino, African American, and White using a stratified cluster sample. The target sample size is equally
The Effective Management of Primary Schools in Ekiti State, Nigeria: An Analytical Assessment
ERIC Educational Resources Information Center
Adeyemi, T. O.
2009-01-01
This study investigated the management of education in primary schools in Ekiti State, Nigeria. In this correlational study, the population comprised all 694 primary schools in the State. Out of this, a sample of 320 schools was selected through the stratified random sampling technique. Two instruments were used to collect data for the…
A Validation Study of the Revised Personal Safety Decision Scale
ERIC Educational Resources Information Center
Kim, HaeJung; Hopkins, Karen M.
2017-01-01
Objective: The purpose of this study is to examine the reliability and validity of an 11-item Personal Safety Decision Scale (PSDS) in a sample of child welfare workers. Methods: Data were derived from a larger cross-sectional online survey to a random stratified sample of 477 public child welfare workers in a mid-Atlantic State. An exploratory…
ERIC Educational Resources Information Center
Ward, Martha Szegda
The long-term effectiveness of the North Carolina Basic Education Summer School Program (BEP) was examined. North Carolina has instituted a testing and summer remediation program for academically at-risk students at grades 3, 6, and 8. The BEP sample was obtained by a stratified random sampling of schools in North Carolina. Results were…
ERIC Educational Resources Information Center
Clark, Sheldon B.; Nichols, James O.
Survey data concerning teacher education program graduates were used to demonstrate the advantages of a stratified random sampling approach, with followup, relative to a one-shot mailing to an entire population. Sampling issues involved in such an approach are addressed, particularly with regard to quantifying the effects of nonresponse on the…
ERIC Educational Resources Information Center
Nam, Yunju; Mason, Lisa Reyes; Kim, Youngmi; Clancy, Margaret; Sherraden, Michael
2013-01-01
This study examined whether and how survey response differs by race and Hispanic origin, using data from birth certificates and survey administrative data for a large-scale statewide experiment. The sample consisted of mothers of infants selected from Oklahoma birth certificates using a stratified random sampling method (N = 7,111). This study…
Preserving America's Investment in Human Capital: A Study of Public Higher Education, 1980.
ERIC Educational Resources Information Center
Minter, W. John; Bowen, Howard R.
Financial and educational trends in accredited public institutions of higher education were studied for the period 1976-79 with some data for earlier years and for 1979-80. The study was based on a stratified sample of 135 institutions of which 95 participated. The sample represented all parts of the public sector except autonomous professional…
ERIC Educational Resources Information Center
Hayford, Samuel K.; Ocansey, Frederick
2017-01-01
This study reports part of a national survey on sources of information, education and communication materials on HIV/AIDS available to students with visual impairments in residential, segregated, and integrated schools in Ghana. A multi-staged stratified random sampling procedure and a purposive and simple random sampling approach, where…
ERIC Educational Resources Information Center
Sebro, Negusse Yohannes; Goshu, Ayele Taye
2017-01-01
This study aims to explore Bayesian multilevel modeling to investigate variations of average academic achievement of grade eight school students. A sample of 636 students is randomly selected from 26 private and government schools by a two-stage stratified sampling design. Bayesian method is used to estimate the fixed and random effects. Input and…
Development of Creative Behavior Observation Form: A Study on Validity and Reliability
ERIC Educational Resources Information Center
Dere, Zeynep; Ömeroglu, Esra
2018-01-01
In this study, the Creative Behavior Observation Form was developed to assess the creativity of children. For the reliability and validity study of the Creative Behavior Observation Form, a total of 257 children aged 5-6 were sampled using the stratified sampling method. Content Validity Index (CVI) and…
Horowitz, Arthur J.; Clarke, Robin T.; Merten, Gustavo Henrique
2015-01-01
Since the 1970s, there has been both continuing and growing interest in developing accurate estimates of the annual fluvial transport (fluxes and loads) of suspended sediment and sediment-associated chemical constituents. This study provides an evaluation of the effects of manual sample numbers (from 4 to 12 year⁻¹) and sample scheduling (random-based, calendar-based and hydrology-based) on the precision, bias and accuracy of annual suspended sediment flux estimates. The evaluation is based on data from selected US Geological Survey daily suspended sediment stations in the USA and covers basins ranging in area from just over 900 km² to nearly 2 million km² and annual suspended sediment fluxes ranging from about 4 Kt year⁻¹ to about 200 Mt year⁻¹. The results appear to indicate that there is a scale effect for random-based and calendar-based sampling schemes, with larger sample numbers required as basin size decreases. All the sampling schemes evaluated display some level of positive (overestimates) or negative (underestimates) bias. The study further indicates that hydrology-based sampling schemes are likely to generate the most accurate annual suspended sediment flux estimates with the fewest samples, regardless of basin size. This type of scheme seems most appropriate when the determination of suspended sediment concentrations, sediment-associated chemical concentrations, annual suspended sediment and annual suspended sediment-associated chemical fluxes only represent a few of the parameters of interest in multidisciplinary, multiparameter monitoring programmes. The results are just as applicable to the calibration of autosamplers/suspended sediment surrogates currently used to measure/estimate suspended sediment concentrations and ultimately, annual suspended sediment fluxes, because manual samples are required to adjust the sample data/measurements generated by these techniques so that they provide depth-integrated and cross-sectionally representative data.
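The trade-off described above, fewer manual samples versus flux-estimate accuracy, can be illustrated with a small simulation. The sketch below is not the authors' procedure or data; it assumes a synthetic daily sediment record and contrasts a random-based schedule with a naive discharge-stratified stand-in for a hydrology-based scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily record: discharge Q and sediment concentration C; a few storm
# days dominate the annual flux, as in many real basins (values are arbitrary).
days = 365
Q = rng.lognormal(mean=2.0, sigma=0.8, size=days)
C = 5.0 * Q ** 1.3 * rng.lognormal(0.0, 0.3, size=days)   # rating-curve-like relation
daily_flux = Q * C
true_annual = daily_flux.sum()

def random_scheme(n):
    """Expansion estimate of annual flux from n randomly scheduled sampling days."""
    idx = rng.choice(days, size=n, replace=False)
    return daily_flux[idx].mean() * days

def discharge_stratified_scheme(n):
    """Naive 'hydrology-based' stand-in: split days into n strata by discharge
    rank, sample one day per stratum, and use the stratified estimator."""
    strata = np.array_split(np.argsort(Q), n)
    return sum(daily_flux[rng.choice(s)] * s.size for s in strata)

for n in (4, 8, 12):
    r = np.mean([abs(random_scheme(n) - true_annual) for _ in range(500)]) / true_annual
    h = np.mean([abs(discharge_stratified_scheme(n) - true_annual)
                 for _ in range(500)]) / true_annual
    print(f"n={n:2d}  random |error| = {r:5.1%}   discharge-stratified |error| = {h:5.1%}")
```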
Method and system for providing precise multi-function modulation
NASA Technical Reports Server (NTRS)
Davarian, Faramaz (Inventor); Sumida, Joe T. (Inventor)
1989-01-01
A method and system is disclosed which provides precise multi-function digitally implementable modulation for a communication system. The invention provides a modulation signal for a communication system in response to an input signal from a data source. A digitized time response is generated from samples of a time domain representation of a spectrum profile of a selected modulation scheme. The invention generates and stores coefficients for each input symbol in accordance with the selected modulation scheme. The output signal is provided by a plurality of samples, each sample being generated by summing the products of a predetermined number of the coefficients and a predetermined number of the samples of the digitized time response. In a specific illustrative implementation, the samples of the output signals are converted to analog signals, filtered and used to modulate a carrier in a conventional manner. The invention is versatile in that it allows for the storage of the digitized time responses and corresponding coefficient lookup table of a number of modulation schemes, any of which may then be selected for use in accordance with the teachings of the invention.
Imaging complex objects using learning tomography
NASA Astrophysics Data System (ADS)
Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri
2018-02-01
Optical diffraction tomography (ODT) can be described as a scattering process through an inhomogeneous medium. An inherent nonlinearity exists relating the scattering medium and the scattered field due to multiple scattering. Multiple scattering is often assumed to be negligible in weakly scattering media. This assumption becomes invalid as the sample gets more complex, resulting in distorted image reconstructions. This issue becomes critical when we image a complex sample. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT, combined with an iterative reconstruction scheme. The iterative error reduction scheme and the multi-layer structure of BPM are similar to neural networks. Therefore, we refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data of a biological cell.
ECCM Scheme against Interrupted Sampling Repeater Jammer Based on Parameter-Adjusted Waveform Design
Wei, Zhenhua; Peng, Bo; Shen, Rui
2018-01-01
Interrupted sampling repeater jamming (ISRJ) is an effective way of deceiving coherent radar sensors, especially for linear frequency modulated (LFM) radar. In this paper, for a simplified scenario with a single jammer, we propose a dynamic electronic counter-countermeasure (ECCM) scheme based on jammer parameter estimation and transmitted signal design. Firstly, the LFM waveform is transmitted to estimate the main jamming parameters by investigating the discontinuousness of the ISRJ's time-frequency (TF) characteristics. Then, a parameter-adjusted intra-pulse frequency coded signal, whose ISRJ signal after matched filtering only forms a single false target, is designed adaptively according to the estimated parameters, i.e., sampling interval, sampling duration and repeater times. Ultimately, for typical jamming scenes with different jamming signal ratio (JSR) and duty cycle, we propose two particular ISRJ suppression approaches. Simulation results validate the effective performance of the proposed scheme for countering the ISRJ, and the trade-off relationship between the two approaches is demonstrated. PMID:29642508
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odéen, Henrik, E-mail: h.odeen@gmail.com; Diakite, Mahamadou; Todd, Nick
2014-09-15
Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm³ FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations.
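The central ingredient here, a variable-density k-space mask that samples the centre more often, is easy to sketch. The toy below is not the authors' EPI/TCR pipeline: it builds a 2D variable-density mask, applies it to a synthetic "hot spot" phantom, and scores a crude zero-filled reconstruction against the fully sampled truth by RMSE; the reduction factor and density power are illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def variable_density_mask(ny, nx, reduction=4, power=2.0):
    """2D variable-density mask: sampling probability decays with distance
    from the k-space centre, scaled to the requested reduction factor."""
    ky, kx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")
    r = np.sqrt(ky ** 2 + kx ** 2)
    pdf = (1.0 - r / r.max()) ** power
    pdf *= (ny * nx / reduction) / pdf.sum()        # aim for the target mean rate
    return rng.random((ny, nx)) < np.clip(pdf, 0.0, 1.0)

# Toy temperature-rise phantom and its k-space.
ny = nx = 128
y, x = np.ogrid[:ny, :nx]
phantom = np.exp(-((y - 64) ** 2 + (x - 64) ** 2) / (2 * 6.0 ** 2))   # focal hot spot
kspace = np.fft.fftshift(np.fft.fft2(phantom))

mask = variable_density_mask(ny, nx, reduction=4)
# Crude zero-filled reconstruction (the paper uses a temporally constrained
# reconstruction instead); the scale factor roughly compensates missing samples.
recon = np.fft.ifft2(np.fft.ifftshift(kspace * mask)).real * (mask.size / mask.sum())

rmse = np.sqrt(np.mean((recon - phantom) ** 2))
print(f"sampled {mask.mean():.1%} of k-space, zero-filled RMSE = {rmse:.4f}")
```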
Distribution-Preserving Stratified Sampling for Learning Problems.
Cervellera, Cristiano; Maccio, Danilo
2017-06-09
The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, obtain good training/test/validation sets, and select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as much as possible as the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
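A minimal sketch of the general idea, recursively partition the input space and draw points from each cell in proportion to its share of the data, is given below. It is not the authors' algorithm and carries none of their optimality guarantees or error bounds; the depth and the mixture data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def stratified_subsample(X, n, depth=6):
    """Recursively bisect along the widest axis at the median, then allocate the
    n requested points to the leaf cells proportionally to their counts."""
    def split(idx, d):
        if d == 0 or idx.size <= 1:
            return [idx]
        spans = X[idx].max(axis=0) - X[idx].min(axis=0)
        axis = int(np.argmax(spans))
        cut = np.median(X[idx, axis])
        left, right = idx[X[idx, axis] <= cut], idx[X[idx, axis] > cut]
        if left.size == 0 or right.size == 0:
            return [idx]
        return split(left, d - 1) + split(right, d - 1)

    chosen = []
    for leaf in split(np.arange(len(X)), depth):
        k = int(round(n * leaf.size / len(X)))
        if k > 0:
            chosen.append(rng.choice(leaf, size=min(k, leaf.size), replace=False))
    return np.concatenate(chosen) if chosen else np.array([], dtype=int)

# Example: keep ~500 of 50,000 points from a two-component mixture.
X = np.vstack([rng.normal(0, 1, (40000, 2)), rng.normal(4, 0.5, (10000, 2))])
idx = stratified_subsample(X, 500)
print(len(idx), "points kept; share of second cluster in sample:",
      float(np.mean(X[idx, 0] > 2).round(3)), "vs population", float(np.mean(X[:, 0] > 2)))
```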
On the use of transition matrix methods with extended ensembles.
Escobedo, Fernando A; Abreu, Charlles R A
2006-03-14
Different extended ensemble schemes for non-Boltzmann sampling (NBS) of a selected reaction coordinate λ were formulated so that they employ (i) "variable" sampling window schemes (that include the "successive umbrella sampling" method) to comprehensively explore the λ domain and (ii) transition matrix methods to iteratively obtain the underlying free-energy η landscape (or "importance" weights) associated with λ. The connection between "acceptance ratio" and transition matrix methods was first established to form the basis of the approach for estimating η(λ). The validity and performance of the different NBS schemes were then assessed using as the λ coordinate the configurational energy of the Lennard-Jones fluid. For the cases studied, it was found that the convergence rate in the estimation of η is little affected by the use of data from high-order transitions, while it is noticeably improved by the use of a broader window of sampling in the variable window methods. Finally, it is shown how an "elastic" window of sampling can be used to effectively enact (nonuniform) preferential sampling over the λ domain, and how to stitch the weights from separate one-dimensional NBS runs to produce an η surface over a two-dimensional domain.
NASA Astrophysics Data System (ADS)
Astashev, M. E.; Belosludtsev, K. N.; Kharakoz, D. P.
2014-05-01
One of the most accurate methods for measuring the compressibility of liquids is resonance measurement of sound velocity in a fixed-length interferometer. This method combines high sensitivity, accuracy, and a small sample volume of the test liquid. The measuring principle is to study the resonance properties of a composite resonator that contains a test liquid sample. Earlier, the phase-locked loop (PLL) scheme was used for this. In this paper, we propose an alternative measurement scheme based on digital analysis of harmonic signals, describe the implementation of this scheme using commercially available data acquisition modules, and give examples of test measurements with accuracy evaluations of the results.
Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B
2018-06-01
The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
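The CRXO formulae themselves are not reproduced in this record. As a point of reference only, the sketch below implements the textbook parallel-group cluster-randomised calculation (individual-level two-proportion sample size inflated by the usual design effect), which is the comparator design the CRXO approach is said to improve on; the mortality rates, cluster size and intracluster correlation are invented for illustration, not taken from the registry.

```python
from math import ceil
from statistics import NormalDist

def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.8):
    """Textbook parallel-group cluster-RCT calculation: individual-level sample
    size for a two-proportion comparison, inflated by the design effect
    1 + (m - 1) * icc, then converted to whole clusters of size m."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    n_ind = (za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    deff = 1 + (m - 1) * icc
    return ceil(n_ind * deff / m)

# Illustrative numbers only: 10% vs 8.5% in-hospital mortality,
# 600 patients per ICU per period, intracluster correlation 0.01.
print(clusters_per_arm(0.10, 0.085, m=600, icc=0.01), "ICUs per arm")
```

A CRXO (or stratified) design would require fewer clusters than this parallel-group figure, which is the point the tutorial makes.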
Stavelin, Anne; Riksheim, Berit Oddny; Christensen, Nina Gade; Sandberg, Sverre
2016-05-01
Providers of external quality assurance (EQA)/proficiency testing schemes have traditionally focused on evaluation of measurement procedures and participant performance and little attention has been given to reagent lot variation. The aim of the present study was to show the importance of reagent lot registration and evaluation in EQA schemes. Results from the Noklus (Norwegian Quality Improvement of Primary Care Laboratories) urine albumin/creatinine ratio (ACR) and prothrombin time international normalized ratio (INR) point-of-care EQA schemes from 2009-2015 were used as examples in this study. The between-participant CV for Afinion ACR increased from 6%-7% to 11% in 3 consecutive surveys. This increase was caused by differences between albumin reagent lots that were also observed when fresh urine samples were used. For the INR scheme, the CoaguChek INR results increased with the production date of the reagent lots, with reagent lot medians increasing from 2.0 to 2.5 INR and from 2.7 to 3.3 INR (from the oldest to the newest reagent lot) for 2 control levels, respectively. These differences in lot medians were not observed when native patient samples were used. Presenting results from different reagent lots in EQA feedback reports can give helpful information to the participants that may explain their deviant EQA results. Information regarding whether the reagent lot differences found in the schemes can affect patient samples is important and should be communicated to the participants as well as to the manufacturers. EQA providers should consider registering and evaluating results from reagent lots. © 2016 American Association for Clinical Chemistry.
Guo, Junqi; Zhou, Xi; Sun, Yunchuan; Ping, Gong; Zhao, Guoxing; Li, Zhuorong
2016-06-01
Smartphone based activity recognition has recently received remarkable attention in various applications of mobile health such as safety monitoring, fitness tracking, and disease prediction. To achieve more accurate and simplified medical monitoring, this paper proposes a self-learning scheme for patients' activity recognition, in which a patient only needs to carry an ordinary smartphone that contains common motion sensors. After the real-time data collection through this smartphone, we preprocess the data using coordinate system transformation to eliminate phone orientation influence. A set of robust and effective features are then extracted from the preprocessed data. Because a patient may inevitably perform various unpredictable activities that have no a priori knowledge in the training dataset, we propose a self-learning activity recognition scheme. The scheme determines whether there are a priori training samples and labeled categories in training pools that well match with unpredictable activity data. If not, it automatically assembles these unpredictable samples into different clusters and gives them new category labels. These clustered samples combined with the acquired new category labels are then merged into the training dataset to reinforce the recognition ability of the self-learning model. In experiments, we evaluate our scheme using the data collected from two postoperative patient volunteers, including six labeled daily activities as the initial a priori categories in the training pool. Experimental results demonstrate that the proposed self-learning scheme for activity recognition works very well for most cases. When there exist several types of unseen activities without any a priori information, the accuracy reaches above 80% after the self-learning process converges.
Gurney, J C; Ansari, E; Harle, D; O'Kane, N; Sagar, R V; Dunne, M C M
2018-02-09
To determine the accuracy of a Bayesian learning scheme (Bayes') applied to the prediction of clinical decisions made by specialist optometrists in relation to the referral refinement of chronic open angle glaucoma. This cross-sectional observational study involved collection of data from the worst affected or right eyes of a consecutive sample of cases (n = 1,006) referred into the West Kent Clinical Commissioning Group Community Ophthalmology Team (COT) by high street optometrists. Multilevel classification of each case was based on race, sex, age, family history of chronic open angle glaucoma, reason for referral, Goldmann Applanation Tonometry (intraocular pressure and interocular asymmetry), optic nerve head assessment (vertical size, cup disc ratio and interocular asymmetry), central corneal thickness and visual field analysis (Hodapp-Parrish-Anderson classification). Randomised stratified tenfold cross-validation was applied to determine the accuracy of Bayes' by comparing its output to the clinical decisions of three COT specialist optometrists; namely, the decision to discharge, follow-up or refer each case. Outcomes of cross-validation, expressed as means and standard deviations, showed that the accuracy of Bayes' was high (95%, 2.0%) but that it falsely discharged (3.4%, 1.6%) or referred (3.1%, 1.5%) some cases. The results indicate that Bayes' has the potential to augment the decisions of specialist optometrists.
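The evaluation machinery described here, a probabilistic classifier scored by randomised stratified tenfold cross-validation, can be sketched generically. The code below is not the COT dataset or the study's Bayesian network; it uses synthetic stand-in features (IOP, cup-disc ratio, corneal thickness, age are assumed names) and a Gaussian naive Bayes classifier purely to show the cross-validation pattern.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(3)

# Synthetic stand-in for case features and the specialist decision
# (0 = discharge, 1 = follow-up/refer); values are invented.
n = 1006
X = np.column_stack([
    rng.normal(16, 4, n),      # intraocular pressure (mmHg)
    rng.normal(0.5, 0.15, n),  # vertical cup-disc ratio
    rng.normal(545, 35, n),    # central corneal thickness (um)
    rng.normal(65, 12, n),     # age (years)
])
risk = 0.25 * (X[:, 0] - 16) + 8 * (X[:, 1] - 0.5) - 0.02 * (X[:, 2] - 545)
y = (risk + rng.normal(0, 1, n) > 0.5).astype(int)

# Randomised stratified tenfold cross-validation of a naive Bayes classifier.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(GaussianNB(), X, y, cv=cv)
print(f"mean accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```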
Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collyer, B.S., E-mail: benjamin.collyer@gmail.com; London Mathematical Laboratory, 14 Buckingham Street, London WC2N 6DF; Connaughton, C.
The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
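A one-dimensional toy illustrates why importance sampling helps when the signal of interest is a small deviation from an equilibrium distribution. This is not the Fokker–Planck particle scheme itself: it simply estimates a tail probability of a standard normal "velocity" distribution with plain Monte Carlo and with a shifted proposal reweighted by the density ratio, and compares standard errors.

```python
import numpy as np

rng = np.random.default_rng(4)

def phi(v, mu=0.0):
    """Normal density with unit variance, used both as target and proposal."""
    return np.exp(-(v - mu) ** 2 / 2) / np.sqrt(2 * np.pi)

def g(v):
    """Tail-dominated quantity of interest under the equilibrium density."""
    return (v > 3.0).astype(float)

N = 20000
# Plain Monte Carlo: sample the equilibrium density directly.
plain = g(rng.normal(0.0, 1.0, N))

# Importance sampling: sample a shifted proposal q = N(3, 1), reweight by p/q.
v = rng.normal(3.0, 1.0, N)
weighted = g(v) * phi(v, 0.0) / phi(v, 3.0)

# True P(Z > 3) is about 1.35e-3.
print(f"plain MC    mean {plain.mean():.2e}  std err {plain.std(ddof=1)/np.sqrt(N):.2e}")
print(f"importance  mean {weighted.mean():.2e}  std err {weighted.std(ddof=1)/np.sqrt(N):.2e}")
```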
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
Under-sampling trajectory design for compressed sensing based DCE-MRI.
Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting
2013-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed Sensing (CS) has the potential to satisfy both requirements. However, the randomness in a CS under-sampling trajectory designed using the traditional variable density (VD) scheme may translate to uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually needs multiple adjustments of the parameters of the Probability Density Function (PDF), and multiple reconstructions even with a fixed PDF, which is inapplicable for DCE-MRI. In this paper, an under-sampling trajectory design which is robust to changes in the PDF parameters and to randomness with a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and only apply the VD scheme in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
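The trajectory idea, fully sample a central low-frequency block and restrict the variable-density randomness to the high-frequency region, can be sketched as a mask generator. This is a hedged illustration of that strategy, not the paper's adaptive segmentation rule; the centre fraction, reduction factor and density power are assumed parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

def segmented_vd_mask(ny, nx, center_frac=0.12, reduction=4, power=2.0):
    """Fully sample a central low-frequency block of k-space and apply a
    variable-density random pattern only in the high-frequency region."""
    mask = np.zeros((ny, nx), dtype=bool)
    cy, cx = ny // 2, nx // 2
    hy, hx = int(ny * center_frac / 2), int(nx * center_frac / 2)
    mask[cy - hy:cy + hy, cx - hx:cx + hx] = True           # low-frequency core

    ky, kx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")
    pdf = (1 - np.sqrt(ky ** 2 + kx ** 2) / np.sqrt(2)) ** power
    high = ~mask
    # Scale the high-frequency density so the overall reduction factor is met.
    target = ny * nx / reduction - mask.sum()
    pdf[high] *= target / pdf[high].sum()
    mask[high] = rng.random(high.sum()) < np.clip(pdf[high], 0, 1)
    return mask

m = segmented_vd_mask(192, 192)
core = m[96 - 11:96 + 11, 96 - 11:96 + 11]
print(f"overall sampling rate {m.mean():.1%}; low-frequency core fully sampled: {core.all()}")
```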
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Southern Medical University, Guangzhou; Tian, Z
Purpose: Monte Carlo (MC) simulation is an important tool to solve radiotherapy and medical imaging problems. Low computational efficiency hinders its wide applications. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control on particle trajectory is a main cause of low efficiency in some applications. Taking cone beam CT (CBCT) projection simulation as an example, a significant amount of computation is wasted on transporting photons that do not reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package gMMC on GPU with this new scheme implemented. The performance of gMMC was tested in a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr. for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by this new path-by-path simulation scheme, where all the computations were spent on those photons contributing to the detector signal. Conclusion: We have proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
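The accept/reject rule at the heart of this scheme is the standard Metropolis-Hastings step. The sketch below is not gMMC's path-space proposal; it is a minimal one-dimensional stand-in whose "state" plays the role of a photon path and whose unnormalised target plays the role of the transport-physics path probability.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_target(x):
    """Unnormalised log-probability of the current state. In gMMC the state
    would be an entire photon path and this would come from transport physics;
    here it is just a 1D stand-in (a narrow Gaussian)."""
    return -0.5 * (x - 2.0) ** 2 / 0.3 ** 2

x, accepted, samples = 0.0, 0, []
for _ in range(50000):
    proposal = x + rng.normal(0.0, 0.5)        # symmetric random-walk proposal
    # Metropolis-Hastings acceptance probability min(1, pi(x') / pi(x)).
    if np.log(rng.random()) < log_target(proposal) - log_target(x):
        x, accepted = proposal, accepted + 1
    samples.append(x)

samples = np.array(samples[5000:])             # discard burn-in
print(f"acceptance rate {accepted/50000:.2f}, "
      f"sample mean {samples.mean():.3f}, sample std {samples.std():.3f}")
```

Because only the ratio of target probabilities enters the rule, the correct relative probabilities among states are preserved even though the normalising constant is never computed, which is exactly what the path-by-path scheme relies on.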
Stekl, Peter J.; Flanagan, Sarah M.
1992-01-01
Communities in the lower Merrimack River basin and coastal river basins of southeastern New Hampshire are experiencing increased demands for water because of a rapid increase in population. The population in 1987 was 225,495 and is expected to increase by 30 percent during the next decade. As of 1987, five towns used the stratified-drift aquifers for municipal supply and withdrew an estimated 6 million gallons per day. Four towns used the bedrock aquifer for municipal supply and withdrew an average of 1.6 million gallons per day. Stratified-drift deposits cover 78 of the 327 square miles of the study area. These deposits are generally less than 10 square miles in areal extent, and their saturated thickness ranges from less than 20 feet to as much as 100 feet. Transmissivity exceeds 4,000 square feet per day in several locations. Stratified-drift aquifers in the eastern part are predominantly small ice-contact deposits surrounded by marine sediments or till of low hydraulic conductivity. Stratified-drift aquifers in the western part consist of ice-contact and proglacial deposits that are large in areal extent and are commonly in contact with surface-water bodies. Five stratified-drift aquifers, in the towns of Derry, Windham, Kingston, North Hampton, and Greenland, have the greatest potential to supply additional amounts of water. Potential yields and contributing areas of hypothetical supply wells were estimated for an aquifer in Windham near Cobbetts Pond and for an aquifer in Kingston along the Powwow River by use of a method analogous to superposition in conjunction with a numerical ground-water-flow model. The potential yield is estimated to be 0.6 million gallons per day for the Windham-Cobbetts Pond aquifer and 4.0 million gallons per day for the Kingston-Powwow River aquifer. The contributing recharge area for supply wells is estimated to be 1.6 square miles in the Windham-Cobbetts Pond aquifer and 4.9 square miles in the Kingston-Powwow River aquifer. Analyses of water samples from 30 wells indicate that the water quality in the basins studied is generally suitable for drinking and other domestic purposes. Concentrations of iron and manganese exceeded the U.S. Environmental Protection Agency's (USEPA) and the New Hampshire Water Supply Engineering Bureau's secondary maximum contaminant levels for drinking water in 20 samples. With one exception, concentrations of volatile organic compounds at all wells sampled met the New Hampshire Water Supply and Engineering Bureau's drinking-water standards. At one well, trichloroethylene was detected at a concentration of 5.7 micrograms per liter. Ground-water contamination has been detected at several hazardous-waste sites in the study area. Currently, 5 sites are on the USEPA's National Priority List of superfund sites, 10 sites are Resource Conservation and Recovery Act of 1976 sites, and 1 site is a Department of Defense hazardous-waste site.
Seismic-reflection profiling was also conducted. The acoustic impedance of stratigraphic layers is a product of a material's density and the velocity at which sound travels through that material. The reflected signals return to the hydrophones at the water surface and are then filtered, amplified, and displayed graphically on the chart recorder to allow interpretation of aquifer stratigraphy and bedrock depths. Lithologic data from nearby wells and test holes were used as control points to check the interpretation of the reflection profiles. Test drilling was done at 66 locations (pls. 1-3) to determine sediment grain size, stratigraphy, depth to water table, depth to bedrock, and ground-water quality. A 6-inch-diameter, hollow-stem auger was used for test drilling. Split-spoon samples of subsurface materials collected at specific depths were used to evaluate the grain-size characteristics and identify the stratigraphic sequence of materials comprising the aquifers. Thirty-eight test holes cased with a 2-inch-diameter polyvinyl-chloride (PVC) pipe and slotted screens were used to make ground-water-level measurements and collect ground-water-quality samples. Surface-water-discharge measurements were made at 16 sites during low flow, when the surface water is primarily ground-water discharge. These low-flow measurements indicate quantities of ground water potentially available from aquifers. Hydraulic conductivities of aquifer materials were estimated from grain-size-distribution data from 61 samples of stratified drift. Transmissivity was estimated from well logs by assigning hydraulic conductivity to specific well-log intervals, multiplying by the saturated thickness of the interval, and summing the results. Additional transmissivity values were obtained from an analysis of specific capacity and aquifer-test data. Long-term aquifer yields and contributing areas to hypothetical supply wells were estimated by application of a method that is analogous to superposition and incorporates a ground-water-flow model developed by McDonald and Harbaugh (1988). This method was applied to two aquifers judged to have the best potential for providing additional ground-water supplies. Samples of ground water from 26 test wells and 4 municipal wells were collected in March and August 1987 for analysis of common inorganic, organic, and volatile organic constituents. Methods for collecting and analyzing the samples are described by Fishman and Freidman (1989). The water-quality results from the well samples were used to characterize background water quality in the stratified-drift aquifers.
Maisano Delser, Pierpaolo; Corrigan, Shannon; Hale, Matthew; Li, Chenhong; Veuille, Michel; Planes, Serge; Naylor, Gavin; Mona, Stefano
2016-01-01
Population genetics studies on non-model organisms typically involve sampling few markers from multiple individuals. Next-generation sequencing approaches open up the possibility of sampling many more markers from fewer individuals to address the same questions. Here, we applied a target gene capture method to deep sequence ~1000 independent autosomal regions of a non-model organism, the blacktip reef shark (Carcharhinus melanopterus). We devised a sampling scheme based on the predictions of theoretical studies of metapopulations to show that sampling few individuals, but many loci, can be extremely informative to reconstruct the evolutionary history of species. We collected data from a single deme (SID) from Northern Australia and from a scattered sampling representing various locations throughout the Indian Ocean (SCD). We explored the genealogical signature of population dynamics detected from both sampling schemes using an ABC algorithm. We then contrasted these results with those obtained by fitting the data to a non-equilibrium finite island model. Both approaches supported an Nm value ~40, consistent with philopatry in this species. Finally, we demonstrate through simulation that metapopulations exhibit greater resilience to recent changes in effective size compared to unstructured populations. We propose an empirical approach to detect recent bottlenecks based on our sampling scheme. PMID:27651217
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-04-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
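A simplified simulation in the spirit of this study is sketched below: draw 67 clusters of three children with a beta-binomial cluster effect and count how often an LQAS-style decision rule classifies the survey on the wrong side of a prevalence threshold. The threshold, decision rule and intracluster correlation are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def misclassification_rate(prev, threshold=0.10, decision_rule=20,
                           n_clusters=67, m=3, icc=0.05, reps=5000):
    """Fraction of simulated 67x3 cluster surveys whose case count crosses the
    decision rule on the wrong side of the prevalence threshold."""
    # Beta-binomial cluster effect with the requested intracluster correlation.
    a = prev * (1 - icc) / icc
    b = (1 - prev) * (1 - icc) / icc
    errors = 0
    for _ in range(reps):
        p_cluster = rng.beta(a, b, n_clusters)          # cluster-level GAM risk
        cases = rng.binomial(m, p_cluster).sum()        # total cases out of 201
        classify_high = cases > decision_rule
        truth_high = prev > threshold
        errors += classify_high != truth_high
    return errors / reps

for prev in (0.05, 0.15):
    print(f"true GAM prevalence {prev:.0%}: misclassification "
          f"{misclassification_rate(prev):.3f}")
```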
Sampling procedures for throughfall monitoring: A simulation study
NASA Astrophysics Data System (ADS)
Zimmermann, Beate; Zimmermann, Alexander; Lark, Richard Murray; Elsenbeer, Helmut
2010-01-01
What is the most appropriate sampling scheme to estimate event-based average throughfall? A satisfactory answer to this seemingly simple question has yet to be found, a failure which we attribute to previous efforts' dependence on empirical studies. Here we try to answer this question by simulating stochastic throughfall fields based on parameters for statistical models of large monitoring data sets. We subsequently sampled these fields with different sampling designs and variable sample supports. We evaluated the performance of a particular sampling scheme with respect to the uncertainty of possible estimated means of throughfall volumes. Even for a relative error limit of 20%, an impractically large number of small, funnel-type collectors would be required to estimate mean throughfall, particularly for small events. While stratification of the target area is not superior to simple random sampling, cluster random sampling involves the risk of being less efficient. A larger sample support, e.g., the use of trough-type collectors, considerably reduces the necessary sample sizes and eliminates the sensitivity of the mean to outliers. Since the gain in time associated with the manual handling of troughs versus funnels depends on the local precipitation regime, the employment of automatically recording clusters of long troughs emerges as the most promising sampling scheme. Even so, a relative error of less than 5% appears out of reach for throughfall under heterogeneous canopies. We therefore suspect a considerable uncertainty of input parameters for interception models derived from measured throughfall, in particular, for those requiring data of small throughfall events.
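A toy version of the simulation idea is easy to reproduce: generate a spatially structured throughfall field, sample it with different numbers of point collectors, and track the relative error of the estimated event mean. The field model, grid size and collector counts below are invented for illustration and do not reproduce the authors' geostatistical parameters.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy spatially structured throughfall field (mm) on a 100 x 100 grid:
# a smooth canopy-driven trend plus skewed noise mimicking drip points.
n = 100
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
field = 10 + 3 * np.sin(xx / 12.0) * np.cos(yy / 9.0) + rng.gamma(2.0, 1.5, (n, n))
true_mean = field.mean()

def mean_relative_error(n_collectors, reps=2000):
    """Simple random sampling with funnel-type point collectors."""
    errs = np.empty(reps)
    for i in range(reps):
        pts = rng.integers(0, n, size=(n_collectors, 2))
        errs[i] = abs(field[pts[:, 0], pts[:, 1]].mean() - true_mean) / true_mean
    return errs.mean()

for k in (10, 25, 50, 100, 200):
    print(f"{k:3d} funnel collectors: mean relative error {mean_relative_error(k):.1%}")
```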
ERIC Educational Resources Information Center
Alshamali, Mahmoud A.; Daher, Wajeeh M.
2016-01-01
This study aimed at identifying the levels of scientific reasoning of upper primary stage (grades 4-7) science teachers based on their use of a problem-solving strategy. The study sample (N = 138; 32 % male and 68 % female) was randomly selected using stratified sampling from an original population of 437 upper primary school teachers. The…
ERIC Educational Resources Information Center
Teva, Inmaculada; Bermudez, Maria Paz; Buela-Casal, Gualberto
2010-01-01
The aim of this study was to assess whether coping styles, social stress, and sexual sensation seeking were predictors of HIV/STD risk behaviours in adolescents. A representative sample of 4,456 female and male Spanish high school students aged 13 to 18 years participated. A stratified random sampling procedure was used. Self-report questionnaires…
ERIC Educational Resources Information Center
Bowen, Gary L.
This study investigated the relationship between soldiers' satisfaction with the environment for families in the Army and satisfaction with the military way of life. The report is based on a secondary analysis of the responses of a stratified random sample of 9,198 Army personnel, a sample that participated in the 1985 Department of Defense…
NASA Astrophysics Data System (ADS)
Song, X. P.; Potapov, P.; Adusei, B.; King, L.; Khan, A.; Krylov, A.; Di Bella, C. M.; Pickens, A. H.; Stehman, S. V.; Hansen, M.
2016-12-01
Reliable and timely information on agricultural production is essential for ensuring world food security. Freely available medium-resolution satellite data (e.g. Landsat, Sentinel) offer the possibility of improved global agriculture monitoring. Here we develop and test a method for estimating in-season crop acreage using a probability sample of field visits and producing wall-to-wall crop type maps at national scales. The method is first illustrated for soybean cultivated area in the US for 2015. A stratified, two-stage cluster sampling design was used to collect field data to estimate national soybean area. The field-based estimate employed historical soybean extent maps from the U.S. Department of Agriculture (USDA) Cropland Data Layer to delineate and stratify U.S. soybean growing regions. The estimated 2015 U.S. soybean cultivated area based on the field sample was 341,000 km2 with a standard error of 23,000 km2. This result is 1.0% lower than USDA's 2015 June survey estimate and 1.9% higher than USDA's 2016 January estimate. Our area estimate was derived in early September, about 2 months ahead of harvest. To map soybean cover, the Landsat image archive for the year 2015 growing season was processed using an active learning approach. Overall accuracy of the soybean map was 84%. The field-based sample estimated area was then used to calibrate the map such that the soybean acreage of the map derived through pixel counting matched the sample-based area estimate. The strength of the sample-based area estimation lies in the stratified design that takes advantage of the spatially explicit cropland layers to construct the strata. The success of the mapping was built upon an automated system which transforms Landsat images into standardized time-series metrics. The developed method produces reliable and timely information on soybean area in a cost-effective way and could be implemented in an operational mode. The approach has also been applied for other crops in other regions, such as winter wheat in Pakistan, soybean in Argentina and soybean in the entire South America. Similar levels of accuracy and timeliness were achieved as in the US.
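The first-stage arithmetic behind such an estimate is the standard stratified expansion estimator and its design-based standard error. The sketch below is only a schematic with invented strata, segment counts and soybean fractions; it is not the USDA stratification or the study's field data, and the segment area is a hypothetical parameter.

```python
import numpy as np

rng = np.random.default_rng(9)

# Illustrative strata built from a hypothetical historical cropland layer:
# total segments N, visited segments n, and simulated soybean fraction per segment.
strata = {
    "high-intensity": dict(N=40000, n=120, mean_frac=0.45, sd=0.20),
    "moderate":       dict(N=80000, n=100, mean_frac=0.15, sd=0.15),
    "marginal":       dict(N=150000, n=60, mean_frac=0.02, sd=0.05),
}
segment_area_km2 = 2.0   # area of one sample segment (hypothetical)

total_est, var_est = 0.0, 0.0
for name, s in strata.items():
    # Simulated field observations: soybean area within each visited segment.
    y = np.clip(rng.normal(s["mean_frac"], s["sd"], s["n"]), 0, 1) * segment_area_km2
    ybar, svar = y.mean(), y.var(ddof=1)
    total_est += s["N"] * ybar                                      # expansion estimate
    var_est += s["N"] ** 2 * (1 - s["n"] / s["N"]) * svar / s["n"]  # with FPC

print(f"estimated soybean area {total_est/1e3:.1f} thousand km2 "
      f"(SE {np.sqrt(var_est)/1e3:.1f} thousand km2)")
```

A sample-based total of this kind can then be used to calibrate a wall-to-wall map so that pixel counting reproduces the design-based estimate, as the abstract describes.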
2001 traffic safety issues opinion survey.
DOT National Transportation Integrated Search
2002-02-01
As a means of determining public opinion on specific traffic safety issues, a public opinion survey was conducted. A total of 4,500 mail surveys were sent to a stratified sample of drivers selected from the driver's license file. The state was divided...
European consensus conference for external quality assessment in molecular pathology.
van Krieken, J H; Siebers, A G; Normanno, N
2013-08-01
Molecular testing of tumor samples to guide treatment decisions is of increasing importance. Several drugs have been approved for treatment of molecularly defined subgroups of patients, and the number of agents requiring companion diagnostics for their prescription is expected to rapidly increase. The results of such testing directly influence the management of individual patients, with both false-negative and false-positive results being harmful for patients. In this respect, external quality assurance (EQA) programs are essential to guarantee optimal quality of testing. There are several EQA schemes available in Europe, but they vary in scope, size and execution. During a conference held in early 2012, medical oncologists, pathologists, geneticists, molecular biologists, EQA providers and representatives from pharmaceutical industries developed a guideline to harmonize the standards applied by EQA schemes in molecular pathology. The guideline comprises recommendations on the organization of an EQA scheme, defining the criteria for reference laboratories, requirements for EQA test samples and the number of samples that are needed for an EQA scheme. Furthermore, a scoring system is proposed and consequences of poor performance are formulated. Lastly, the contents of an EQA report, communication of the EQA results, EQA databases and participant manual are given.
Health literacy in old age: results of a German cross-sectional study.
Vogt, Dominique; Schaeffer, Doris; Messer, Melanie; Berens, Eva-Maria; Hurrelmann, Klaus
2017-03-22
Health literacy is especially important for older people to maintain or enhance remaining health resources and self-management skills. The aim of the study was to determine the level of health literacy and the association between health literacy and demographic and socio-economic factors in German older adults aged 65 years and above, stratified by age group. Health literacy was assessed via computer-assisted personal interviews using the HLS-EU-Q47 on a representative sample of the German-speaking population. Descriptive statistics, bivariate analyses and logistic regression modelling stratified by age group were conducted to assess the health literacy of 475 respondents aged 65 years and above. Overall, 66.3% of all respondents aged 65 years and above had limited health literacy. Limited health literacy was especially prevalent among respondents above 76 years of age (80.6%). Limited health literacy was associated with financial deprivation (OR: 3.05; 95% CI: 1.99-4.67) and limited functional health literacy (OR: 2.16; 95% CI: 1.29-3.61). Financial deprivation was the strongest predictor of limited health literacy in the total sample and when stratified by age group. Limited health literacy is a frequent phenomenon in German adults aged 65 years and above. Research on health literacy in old age and its role in health disparities is urgently needed. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Direct Numerical Simulation of a Weakly Stratified Turbulent Wake
NASA Technical Reports Server (NTRS)
Redford, J. A.; Lund, T. S.; Coleman, Gary N.
2014-01-01
Direct numerical simulation (DNS) is used to investigate a time-dependent turbulent wake evolving in a stably stratified background. A large initial Froude number is chosen to allow the wake to become fully turbulent and axisymmetric before stratification affects the spreading rate of the mean defect. The uncertainty introduced by the finite sample size associated with gathering statistics from a simulation of a time-dependent flow is reduced, compared to earlier simulations of this flow. The DNS reveals the buoyancy-induced changes to the turbulence structure, as well as to the mean-defect history and the terms in the mean-momentum and turbulence-kinetic-energy budgets, that characterize the various states of this flow - namely the three-dimensional (essentially unstratified), non-equilibrium (or 'wake-collapse') and quasi-two-dimensional (or 'two-component') regimes observed elsewhere for wakes embedded in both weakly and strongly stratified backgrounds. The wake-collapse regime is not accompanied by transfer (or 'reconversion') of the potential energy of the turbulence to the kinetic energy of the turbulence, implying that this is not an essential feature of stratified-wake dynamics. The dependence upon Reynolds number of the duration of the wake-collapse period is demonstrated, and the effect of the details of the initial/near-field conditions of the wake on its subsequent development is examined.
Motivation of Citizen Scientists Participating in Moon Zoo
NASA Astrophysics Data System (ADS)
Brown, Shanique; Gay, P. L.; Daus, C. S.
2011-01-01
Moon Zoo is an online citizen science project with the aim of providing detailed crater counts for as much of the Moon's surface as possible. In addition to focusing on craters, volunteers are encouraged to remain vigilant for sightings of atypical features which may lead to new discoveries. Volunteers accomplish these tasks by exploring images captured by NASA's Lunar Reconnaissance Orbiter (LRO) which has a resolution of 50cm per pixel. To be successful, Moon Zoo needs to attract and retain a large population of citizen scientists. In this study, we examine the factors motivating Moon Zoo participants who invest many hours exploring these images. In this, the first of a two-phased study, we conducted a qualitative analysis using semi-structured interviews as a means of data collection. A stratified sample of participants was used in an attempt to uncover the driving forces behind decisions to participate from a wide-range of participants. Inquiring and probing questions were asked about factors which led volunteers to Moon Zoo as well as reasons which kept them committed to exploring the Moon's surface through this online portal. Responses were then categorized using a grounded theory approach, and frequency distributions are calculated where appropriate. Aggregate results from these interviews are presented here including the demographics of the sample and motivators as per the content analysis. The information gathered from this phase will be used to guide the development of an online survey to further explore volunteers’ motivation based on the presented classification schemes. The survey will then be used to guide future research and development in the area of citizen science in the field of astronomy. These findings will also be useful in charting new boundaries for future research.
Okyay, Ramazan Azim; Tanır, Ferdi; Ağaoğlu, Pelin Mutlu
2018-01-01
Among agricultural workers, and especially seasonal migratory workers, housing- and hygiene-related issues, occupational accidents, low levels of education, poverty and lack of social security emerge as significant public health problems. This study aims to compare migrant-seasonal workers (MSWs) and resident agricultural workers (RAWs) in terms of socio-demographic characteristics and occupational health and safety in Adana, one of Turkey's most important agricultural cities. This cross-sectional study was conducted on RAWs and MSWs, aged 15-65, working in the province of Adana. The calculated sample sizes for both MSWs and RAWs were distributed across five districts of Adana using stratified simple random sampling. The mean age of the 798 participating agricultural workers was 34.6 ± 14.2 years. Of the RAWs, 78.8% and of the MSWs 57.0% were male; 5.8% of RAWs and 32.8% of MSWs were illiterate. The mean number of people in the households of the participating workers was 5.1 for RAWs and 6.6 for MSWs. Of the RAWs, 20.5% were not covered by any social security scheme, while this percentage was 35.1% for MSWs. RAWs worked 9.9 h a day, while MSWs worked 10.9 h a day. Of the agricultural workers, 12.9% had injuries caused by occupational accidents. Agricultural workers, who are a large part of Turkey's economically active population, do not have healthy and safe working conditions. New regulations in the fields of social security, record keeping, monitoring, supervision, education and occupational health have been implemented recently to solve these problems. Despite the recent improvements, there are still some problematic issues in the auditing of the necessary practices.
Imaz, María; Allassia, Sonia; Aranibar, Mónica; Gunia, Alba; Poggi, Susana; Togneri, Ana; Wolff, Lidia; Of Fluorescence, Group Of Implementation
2017-06-01
Light-emitting diode fluorescence microscopy (LED-FM) has been endorsed by the World Health Organization (WHO) for tuberculosis diagnosis, but its accuracy in HIV-infected patients remains controversial, and only a few studies have explored procedural factors that may affect its performance. To evaluate the performance of LED-FM for tuberculosis diagnosis in patients with and without HIV infection using a newer, less expensive LED lamp, we compared the performance of LED-FM and Ziehl-Neelsen (ZN) microscopy on respiratory specimen smears from tuberculosis (TB) suspects and patients on treatment, examined by different technicians blinded to HIV status and to the result of the comparative test. We analyzed the effect of concentrating specimens prior to microscopy using different examination schemes, as well as the user appraisal of the LED device. Of the 6,968 diagnostic specimens collected, 869 (12.5%) had positive Mycobacterium tuberculosis cultures. LED-FM was 11.4% more sensitive than ZN (p < 0.01). Among HIV-positive TB patients, the sensitivity difference between LED-FM and ZN (20.6%) was double that obtained in HIV-negative patients or in those with unknown HIV status (9.3%). After stratifying by direct and concentrated slides, the superiority of LED-FM remained. High specificity values were obtained with both LED-FM (99.9%) and ZN (99.9%). A second reading of a sample of slides showed a significantly higher positive detection yield using 200x magnification (49.4%) than 400x magnification (33.8%) (p < 0.05). The LED device had very good acceptance among the technicians. LED-FM's better performance compared with ZN in HIV-infected patients and the favourable user appraisal support the rapid roll-out of LED-FM. Screening at 200x magnification was essential to achieve LED-FM's increased sensitivity.
Mucenic, M; Bandeira de Mello Brandao, A; Marroni, C A; Medeiros Fleck, A; Zanotelli, M L; Kiss, G; Meine, M H; Leipnitz, I; Soares Schlindwein, E; Martini, J; Costabeber, A M; Sacco, F K F; Cracco Cantisani, G P
2018-04-01
Treatment with direct-acting antiviral drugs in interferon-free regimens is currently recommended for viral hepatitis C recurrence after liver transplantation. There are limited data regarding its results in this population, and no optimal treatment scheme has yet been singled out. We report our real-world results in liver transplant (LT) recipients. All patients were hepatitis C virus (HCV) monoinfected and completed a 12-week treatment course, followed 12 weeks later by HCV polymerase chain reaction testing with a sensitivity of 12 IU/mL. Liver fibrosis was graded with the use of biopsies taken <12 months before treatment and stratified as early (0-1) or moderate to advanced (2-4) according to the Metavir score. Median postoperative time was 5.2 years. Genotype 3 was found in 66.7% of the sample. The following regimens were prescribed: daclatasvir-sofosbuvir with (n = 11) or without (n = 28) ribavirin. Genotypes 1 and 3 were evenly distributed between the regimens. Sustained virologic response (SVR) was obtained in 24 out of 28 patients (85.7%) who received daclatasvir-sofosbuvir and in all patients (100%) who received daclatasvir-sofosbuvir-ribavirin (global SVR 89.7%). All patients that failed treatment had genotype 3 HCV. Fibrosis was evaluated in 79.5% of the sample: 48.4% had early and 51.6% had moderate to advanced fibrosis, for which ribavirin was more commonly prescribed (P = .001). The SVR rate in our LT recipients was similar to that previously reported in the literature. The addition of ribavirin to DAA treatment appears to be justified in this population. Copyright © 2018 Elsevier Inc. All rights reserved.
Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal
2009-01-01
The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
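The core quantity in this framework is the inverse-probability-weighted (Horvitz-Thompson) total computed from the phase-two sample. The sketch below is plain Python rather than the R survey package's calibration machinery, uses invented data, and shows only the basic IPW estimate plus a simple post-stratification-style weight rescaling of the kind that motivates adjusted weights; it is not the paper's augmented or calibrated estimators.

```python
import numpy as np

rng = np.random.default_rng(10)

# Phase-one cohort with a cheap covariate x known for everyone and an
# expensive variable y measured only on a stratified phase-two subsample.
N = 10000
x = rng.normal(0, 1, N)
y = 2.0 + 1.5 * x + rng.normal(0, 1, N)        # y correlated with x
stratum = (x > 0).astype(int)                   # two phase-two strata

# Stratified phase-two sampling fractions (one stratum oversampled).
frac = np.where(stratum == 1, 0.30, 0.05)
sampled = rng.random(N) < frac
w = 1.0 / frac                                  # known design weights

ipw_total = np.sum((w * y)[sampled])
print(f"true total {y.sum():.0f}, IPW (Horvitz-Thompson) estimate {ipw_total:.0f}")

# Simple adjustment: rescale weights within each stratum so the weighted counts
# reproduce the known stratum sizes exactly (a post-stratification step).
for s in (0, 1):
    in_s = sampled & (stratum == s)
    w[in_s] *= (stratum == s).sum() / w[in_s].sum()
print(f"post-stratified estimate {np.sum((w * y)[sampled]):.0f}")
```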
Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2015-11-01
The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study the wavefield characteristics in tilted transversely isotropic (TTI) media. However, it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes based on the sampling approximation (SA) method and the least-squares (LS) method respectively to overcome this problem. We first briefly introduce the RSFD theory, based on which we respectively derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based RSFD scheme and the LS-based RSFD scheme with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that these schemes can effectively widen the wavenumber range with great accuracy compared with the TE-based RSFD scheme. Further comparisons between these two optimal schemes show that at small wavenumbers, the SA-based RSFD scheme performs better, while at large wavenumbers, the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based RSFD scheme and the LS-based RSFD scheme can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
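The least-squares idea can be illustrated in one dimension: choose staggered-grid first-derivative coefficients so that the discrete spectral response tracks the exact wavenumber over a chosen band, and compare the dispersion error with the standard fourth-order Taylor-series coefficients (9/8, -1/24). This is a sketch of the general LS approach in 1D, not the authors' rotated staggered-grid TTI scheme; the fitting band is an arbitrary assumption.

```python
import numpy as np

def ls_coefficients(M, kmax_frac=0.7, npts=400):
    """Least-squares staggered-grid coefficients c_m chosen so that
    2 * sum_m c_m * sin((m - 1/2) * kappa) ~= kappa over (0, kmax_frac * pi]."""
    kappa = np.linspace(1e-4, kmax_frac * np.pi, npts)
    A = 2 * np.sin(np.outer(kappa, np.arange(M) + 0.5))
    c, *_ = np.linalg.lstsq(A, kappa, rcond=None)
    return c

def dispersion_error(c, kappa):
    """Relative error of the discrete derivative's spectral response."""
    return np.abs(2 * np.sin(np.outer(kappa, np.arange(len(c)) + 0.5)) @ c - kappa) / kappa

taylor = np.array([9 / 8, -1 / 24])       # 4th-order Taylor-series coefficients (M = 2)
ls = ls_coefficients(2)
kappa = np.linspace(0.05, 0.7 * np.pi, 6)
print("kappa/pi   TE rel. error   LS rel. error")
for k, te, l in zip(kappa, dispersion_error(taylor, kappa), dispersion_error(ls, kappa)):
    print(f"{k/np.pi:8.2f}   {te:12.2e}   {l:12.2e}")
```

The printout shows the expected trade-off the abstract describes: Taylor-series coefficients are extremely accurate at small wavenumbers, while the LS fit spreads a small error across the whole band and so stays accurate out to much larger wavenumbers.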
Goodin, Douglas S.; Jones, Jason; Li, David; Traboulsee, Anthony; Reder, Anthony T.; Beckmann, Karola; Konieczny, Andreas; Knappertz, Volker
2011-01-01
Context Establishing the long-term benefit of therapy in chronic diseases has been challenging. Long-term studies require non-randomized designs and, thus, are often confounded by biases. For example, although disease-modifying therapy in MS has a convincing benefit on several short-term outcome-measures in randomized trials, its impact on long-term function remains uncertain. Objective Data from the 16-year Long-Term Follow-up study of interferon-beta-1b is used to assess the relationship between drug-exposure and long-term disability in MS patients. Design/Setting To mitigate the bias of outcome-dependent exposure variation in non-randomized long-term studies, drug-exposure was measured as the medication-possession-ratio, adjusted up or down according to multiple different weighting-schemes based on MS severity and MS duration at treatment initiation. A recursive-partitioning algorithm assessed whether exposure (using any weighting scheme) affected long-term outcome. The optimal cut-point that was used to define "high" or "low" exposure-groups was chosen by the algorithm. Subsequent to verification of an exposure-impact that included all predictor variables, the two groups were compared using a weighted propensity-stratified analysis in order to mitigate any treatment-selection bias that may have been present. Finally, multiple sensitivity-analyses were undertaken using different definitions of long-term outcome and different assumptions about the data. Main Outcome Measure Long-Term Disability. Results In these analyses, the same weighting-scheme was consistently selected by the recursive-partitioning algorithm. This scheme reduced (down-weighted) the effectiveness of drug exposure as either disease duration or disability at treatment-onset increased. Applying this scheme and using propensity-stratification to further mitigate bias, high-exposure had a consistently better clinical outcome compared to low-exposure (Cox proportional hazard ratio = 0.30–0.42; p<0.0001). Conclusions Early initiation and sustained use of interferon-beta-1b has a beneficial impact on long-term outcome in MS. Our analysis strategy provides a methodological framework for bias-mitigation in the analysis of non-randomized clinical data. Trial Registration Clinicaltrials.gov NCT00206635 PMID:22140424
Detection of triglycerides using immobilized enzymes in food and biological samples
NASA Astrophysics Data System (ADS)
Raichur, Ashish; Lesi, Abiodun; Pedersen, Henrik
1996-04-01
A scheme for the determination of total triglyceride (fat) content in biomedical and food samples is being developed. The primary emphasis is to minimize the reagents used, simplify sample preparation and develop a robust system that would facilitate on-line monitoring. The new detection scheme developed thus far involves extracting triglycerides into an organic solvent (cyclohexane) and performing partial least squares (PLS) analysis on the NIR (1100 - 2500 nm) absorbance spectra of the solution. A training set using 132 spectra of known triglyceride mixtures was compiled. Eight PLS calibrations were generated and were used to predict the total fat extracted from commercial samples such as mayonnaise, butter, corn oil and coconut oil. The results typically gave a correlation coefficient (r) of 0.99 or better. Predictions were typically accurate to within 90%, and improved at higher concentrations. Experiments were also performed using an immobilized lipase reactor to hydrolyze the fat extracted into the organic solvent. Performing PLS analysis on the difference spectra of the substrate and product could enhance specificity. This is being verified experimentally. Further work with biomedical samples is to be performed. This scheme may be developed into a feasible detection method for triglycerides in the biomedical and food industries.
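A minimal sketch of the chemometrics step only, with synthetic spectra standing in for the 1100-2500 nm NIR data and an assumed number of latent variables (scikit-learn's PLSRegression, not the authors' software):

```python
# Hedged sketch: PLS regression mapping NIR absorbance spectra to total
# triglyceride content. The spectra, band shape and component count are
# synthetic assumptions for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
wavelengths = np.linspace(1100, 2500, 350)
n_samples = 132
conc = rng.uniform(0.0, 50.0, n_samples)                   # triglyceride level (arbitrary units)
band = np.exp(-0.5 * ((wavelengths - 1730) / 40.0) ** 2)   # synthetic C-H overtone band
spectra = np.outer(conc, band) + 0.02 * rng.normal(size=(n_samples, wavelengths.size))

X_train, X_test, y_train, y_test = train_test_split(spectra, conc, random_state=0)
pls = PLSRegression(n_components=8).fit(X_train, y_train)  # 8 latent variables is an assumed choice
r = np.corrcoef(pls.predict(X_test).ravel(), y_test)[0, 1]
print(f"test-set correlation coefficient r = {r:.3f}")
```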
... use a complex, stratified, multistage probability cluster sampling design. NHANES data collection is based on a nationally ... conjunction with the 2012 NHANES and the survey design was based on the design for NHANES, with ...
Dietary Supplement Use Among U.S. Adults Has Increased Since NHANES III (1988-1994)
... uses a complex, stratified, multistage probability cluster sampling design and oversamples in order to increase precision in estimates for certain groups. NHANES III was one in a series of periodic surveys conducted in two cycles during ...
Frequency of distracting tasks people do while driving : an analysis of the ACAS FOT data.
DOT National Transportation Integrated Search
2007-06-01
This report describes further analysis of data from the advanced collision avoidance system (ACAS) field operational test, a naturalistic driving study. To determine how distracted and nondistracted driving differ, a stratified sample of 2,914 video ...
Che, W W; Frey, H Christopher; Lau, Alexis K H
2014-12-01
Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³ due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.
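A small simulation sketch of the two population sampling methods contrasted above, using invented tract populations and exposure distributions rather than the study's inputs; it simply compares the sampling variability of the estimated mean exposure under proportional stratification by tract versus fully random draws:

```python
# Minimal sketch contrasting stratified-random (fixed quota per census tract)
# and random-random (tract then child drawn at random) population sampling.
# All numbers below are assumptions, not the study's inputs.
import numpy as np

rng = np.random.default_rng(2)
n_tracts = 50
tract_pop = rng.integers(200, 2000, n_tracts)               # children per tract
tract_mean = rng.normal(12.5, 2.0, n_tracts)                # tract mean exposure, ug/m^3

def simulate_children(tract_ids):
    return rng.normal(tract_mean[tract_ids], 3.0)            # child-level daily exposure

def stratified_random(n_total):
    # proportional quota per tract, then random children within each tract
    quota = np.maximum(1, np.round(n_total * tract_pop / tract_pop.sum()).astype(int))
    ids = np.repeat(np.arange(n_tracts), quota)
    return simulate_children(ids).mean()

def random_random(n_total):
    # tracts drawn with probability proportional to population (an assumed variant),
    # then children drawn at random within the chosen tract
    ids = rng.choice(n_tracts, size=n_total, p=tract_pop / tract_pop.sum())
    return simulate_children(ids).mean()

reps = np.array([(stratified_random(500), random_random(500)) for _ in range(1000)])
print(f"stratified-random mean / SD: {reps[:, 0].mean():.2f} / {reps[:, 0].std():.3f}")
print(f"random-random     mean / SD: {reps[:, 1].mean():.2f} / {reps[:, 1].std():.3f}")
```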
Messing, Karen; Stock, Susan R; Tissot, France
2009-03-01
Several studies have reported male-female differences in the prevalence of symptoms of work-related musculoskeletal disorders (MSD), some arising from workplace exposure differences. The objective of this paper was to compare two strategies for analyzing a single dataset for the relationships between risk factors and MSD in a population-based sample with a wide range of exposures. The 1998 Québec Health and Social Survey covered 11,735 respondents in paid work and collected reports of "significant" musculoskeletal pain in 11 body regions during the previous 12 months, as well as a range of personal, physical, and psychosocial risk factors. Five studies concerning risk factors for four musculoskeletal outcomes were carried out on these data. Each included analyses with multiple logistic regression (MLR) performed separately for women, men, and the total study population. The results from these gender-stratified and unstratified analyses were compared. In the unstratified MLR models, gender was significantly associated with musculoskeletal pain in the neck and lower extremities, but not with low-back pain. The gender-stratified MLR models identified significant associations between each specific musculoskeletal outcome and a variety of personal characteristics and physical and psychosocial workplace exposures for each gender. Most of the associations, if present for one gender, were also found in the total population. But several risk factors present for only one gender could be detected only in a stratified analysis, whereas the unstratified analysis added little information. Stratifying analyses by gender is necessary if a full range of associations between exposures and MSD is to be detected and understood.
Chiba, Yasutaka
2017-09-01
Fisher's exact test is commonly used to compare two groups when the outcome is binary in randomized trials. In the context of causal inference, this test explores the sharp causal null hypothesis (i.e. the causal effect of treatment is the same for all subjects), but not the weak causal null hypothesis (i.e. the causal risks are the same in the two groups). Therefore, in general, rejection of the null hypothesis by Fisher's exact test does not mean that the causal risk difference is not zero. Recently, Chiba (Journal of Biometrics and Biostatistics 2015; 6: 244) developed a new exact test for the weak causal null hypothesis when the outcome is binary in randomized trials; the new test is not based on any large sample theory and does not require any assumption. In this paper, we extend the new test; we create a version of the test applicable to a stratified analysis. The stratified exact test that we propose is general in nature and can be used in several approaches toward the estimation of treatment effects after adjusting for stratification factors. The stratified Fisher's exact test of Jung (Biometrical Journal 2014; 56: 129-140) tests the sharp causal null hypothesis. This test applies a crude estimator of the treatment effect and can be regarded as a special case of our proposed exact test. Our proposed stratified exact test can be straightforwardly extended to analysis of noninferiority trials and to construct the associated confidence interval. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Chikezie, Paul Chidoka; Ojiako, Okey A.
2013-01-01
Objectives: The present study sought to investigate the role of palm oil, in conjunction with the duration of fermentation, on cyanide and aflatoxin (AFT) loads of processed cassava tubers (Garri). Materials and Methods: Matured cassava (Manihot esculenta Crantz) tubers were harvested from three different locations (Akunna, Mkporo-Oji and Durungwu) in Njaba Local Government Area, Imo State, Nigeria. The cassava tubers were processed into Garri according to standard schemes with required modifications and measured for cyanide content using titrimetric methods. Samples of Garri for determination of AFT levels were stored for 30 days before the commencement of spectrophotometric analysis. Results: Cyanide content of peeled cassava tubers was within the range of 4.07 ± 0.16-5.20 ± 0.19 mg hydrocyanic acid (HCN) equivalent/100 g wet weight, whereas that of the various processed cassava tubers was within the range of 1.44 ± 0.34-3.95 ± 0.23 mg HCN equivalents/100 g. For the 48 h fermentation scheme, Garri treated with palm oil exhibited marginal reductions in cyanide content of 0.96%, 3.52% and 3.69%, whereas the 4 h fermentation scheme in conjunction with palm oil treatment caused 4.42%, 7.47% and 5.15% elimination of cyanide content compared with corresponding untreated Garri samples (P > 0.05). Levels of AFT of the various Garri samples ranged between 0.26 ± 0.07 and 0.55 ± 0.04 ppb/100 g. There was no significant difference (P > 0.05) in AFT levels among the various samples in relation to their corresponding sources. Conclusion: The present study showed that the 48 h fermentation scheme for Garri production caused significant (P < 0.05) reduction, but did not obliterate the cyanide content of cassava tubers. Conversely, the 48 h fermentation scheme promoted the elevation of AFT levels, but these were relatively reduced in Garri samples treated with palm oil. PMID:24403736
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality-aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
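The kriging step can be illustrated in a few lines; the sketch below is simple kriging with a squared-exponential covariance on invented stored evaluations, not the paper's distributed Redis/HMM pipeline, and the covariance parameters and tolerance logic are assumptions:

```python
# Hedged sketch of kriging: predict an unknown value and its uncertainty at a
# query point as a weighted average of stored neighbouring evaluations.
import numpy as np

def sq_exp_cov(a, b, length=0.3, sigma2=1.0):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sigma2 * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(3)
X_db = rng.uniform(0, 1, size=(40, 2))              # stored micro-scale evaluations (database)
y_db = np.sin(4 * X_db[:, 0]) * np.cos(3 * X_db[:, 1])
x_new = np.array([[0.42, 0.67]])                    # query point (macro-scale request)

nugget = 1e-8                                       # numerical jitter
K = sq_exp_cov(X_db, X_db) + nugget * np.eye(len(X_db))
k_star = sq_exp_cov(X_db, x_new)

weights = np.linalg.solve(K, k_star)                # kriging weights
mean = (weights.T @ y_db).item()                    # predicted value
var = (sq_exp_cov(x_new, x_new) - k_star.T @ np.linalg.solve(K, k_star)).item()

print(f"prediction: {mean:.3f}   true: {np.sin(4 * 0.42) * np.cos(3 * 0.67):.3f}")
print(f"kriging variance (launch an MD run if this exceeds a tolerance): {var:.2e}")
```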
Andrew T. Hudak; Jeffrey S. Evans; Nicholas L. Crookston; Michael J. Falkowski; Brant K. Steigers; Rob Taylor; Halli Hemingway
2008-01-01
Stand exams are the principal means by which timber companies monitor and manage their forested lands. Airborne LiDAR surveys sample forest stands at much finer spatial resolution and broader spatial extent than is practical on the ground. In this paper, we developed models that leverage spatially intensive and extensive LiDAR data and a stratified random sample of...
Robert H. McAlister; Alexander Clark; Joseph R. Saucier
1997-01-01
The effect of rotation age on strength and stiffness of lumber produced from unthinned loblolly pine stands in the Coastal Plain of Georgia was examined. Six stands representing 22-, 28-, and 40-year-old rotations were sampled. A stratified random sample of trees 8 to 16 inches in diameter at breast height was selected from each stand and processed into lumber....
Diana Yemilet Avila Flores; Marco Aurelio González Tagle; Javier Jiménez Pérez; Oscar Aguirre Calderón; Eduardo Treviño Garza
2013-01-01
The objective of this research was to characterize the spatial structure patterns of a Pinus hartwegii forest in the Sierra Madre Oriental, affected by a fire in 1998. Sampling was stratified by fire severity. A total of three fire severity classes (low, medium and high) were defined. Three sample plots of 40m x 40m were established for each...
Bernard, Marie-Agnès; Bénichou, Jacques; Blin, Patrick; Weill, Alain; Bégaud, Bernard; Abouelfath, Abdelilah; Moore, Nicholas; Fourrier-Réglat, Annie
2012-06-01
To determine healthcare claim patterns associated with the use of nonsteroidal anti-inflammatory drugs (NSAIDs) for rheumatoid arthritis (RA). The CADEUS study randomly identified NSAID users within the French health insurance database. One-year claims data were extracted, and NSAID indication was obtained from prescribers. Logistic regression was used in a development sample to identify claim patterns predictive of RA, and the models were applied to a validation sample. Analyses were stratified on the dispensation of immunosuppressive agents or specific antirheumatism treatment, and the area under the receiver operating characteristic curve (AUC) was used to estimate discriminant power. NSAID indication was provided for 26,259 of the 45,217 patients included in the CADEUS cohort; it was RA for 956 patients. Two models were constructed using the development sample (n = 13,143), stratifying on the dispensation of an immunosuppressive agent or specific antirheumatism treatment. Discriminant power was high for both models (AUC > 0.80) and was not statistically different from that found when applied to the validation sample (n = 13,116). The models derived from this study may help to identify patients prescribed NSAIDs who are likely to have RA in claims databases without medical data such as treatment indication. Copyright © 2012 John Wiley & Sons, Ltd.
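A schematic of the modelling strategy described above, with synthetic claims features and placeholder predictors rather than CADEUS variables: fit a logistic model on a development half, then confirm that the ROC AUC is similar in a held-out validation half:

```python
# Sketch of a claims-based development/validation workflow. Synthetic data;
# the binary features and coefficients are placeholders, not CADEUS variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 26_000
X = rng.binomial(1, [0.10, 0.05, 0.30, 0.15], size=(n, 4))   # hypothetical claim-pattern indicators
logit = -3.5 + X @ np.array([2.0, 1.5, 0.8, 0.6])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))                # RA indication (rare outcome)

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression().fit(X_dev, y_dev)
print(f"development AUC: {roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]):.3f}")
print(f"validation  AUC: {roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]):.3f}")
```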
Invited Article: Mask-modulated lensless imaging with multi-angle illuminations
NASA Astrophysics Data System (ADS)
Zhang, Zibang; Zhou, You; Jiang, Shaowei; Guo, Kaikai; Hoshino, Kazunori; Zhong, Jingang; Suo, Jinli; Dai, Qionghai; Zheng, Guoan
2018-06-01
The use of multiple diverse measurements can make lensless phase retrieval more robust. Conventional diversity functions include aperture diversity, wavelength diversity, translational diversity, and defocus diversity. Here we discuss a lensless imaging scheme that employs multiple spherical-wave illuminations from a light-emitting diode array as diversity functions. In this scheme, we place a binary mask between the sample and the detector for imposing support constraints for the phase retrieval process. This support constraint enforces the light field to be zero at certain locations and is similar to the aperture constraint in Fourier ptychographic microscopy. We use a self-calibration algorithm to correct the misalignment of the binary mask. The efficacy of the proposed scheme is first demonstrated by simulations where we evaluate the reconstruction quality using mean square error and structural similarity index. The scheme is then experimentally tested by recovering images of a resolution target and biological samples. The proposed scheme may provide new insights for developing compact and large field-of-view lensless imaging platforms. The use of the binary mask can also be combined with other diversity functions for better constraining the phase retrieval solution space. We provide the open-source implementation code for the broad research community.
NASA Technical Reports Server (NTRS)
Nguyen, T. M.; Yeh, H.-G.
1993-01-01
The baseline design and implementation of the digital baseband architecture for advanced deep space transponders are investigated and identified. Trade studies on the selection of the number of bits for the analog-to-digital converter (ADC) and on optimum sampling schemes are presented. In addition, the proposed optimum sampling scheme is analyzed in detail. Possible implementations for the digital baseband (or digital front end) and the digital phase-locked loop (DPLL) for carrier tracking are also described.
Lee, Jinah; Duy, Pham Khac; Yoon, Jihye; Chung, Hoeil
2014-06-21
A bead-incorporated transmission scheme (BITS) has been demonstrated for collecting reproducible transmission near-infrared (NIR) spectra of samples with inconsistent shapes. Isotropically diffused NIR radiation was applied around a sample and the surrounding radiation was allowed to interact homogeneously with the sample for transmission measurement. Samples were packed in 1.40 mm polytetrafluoroethylene (PTFE) beads, ideal diffusers without NIR absorption, and then transmission spectra were collected by illuminating the sample-containing beads using NIR radiation. When collimated radiation was directly applied, a small portion of the non-fully diffused radiation (NFDR) propagated through the void space of the packing and eventually degraded the reproducibility. Pre-diffused radiation was introduced by placing an additional PTFE disk in front of the packing to diminish NFDR, which produced more reproducible spectral features. The proposed scheme was evaluated by analyzing two different solid samples: density determination for individual polyethylene (PE) pellets and identification of mining locality for tourmalines. Because spectral collection was reproducible, the use of the spectrum acquired from one PE pellet was sufficient to accurately determine the density of nine other pellets with different shapes. The differentiation of tourmalines, which are even more dissimilar in appearance, according to their mining locality was also feasible with the help of the scheme.
Svanevik, Cecilie Smith; Roiha, Irja Sunde; Levsen, Arne; Lunestad, Bjørn Tore
2015-10-01
Microbes play an important role in the degradation of fish products, thus better knowledge of the microbiological conditions throughout the fish production chain may help to optimise product quality and resource utilisation. This paper presents the results of a ten-year spot sampling programme (2005-2014) of the commercially most important pelagic fish species harvested in Norway. Fish, surface, and storage water samples were collected from fishing vessels and processing factories. In total, 1,181 samples were assessed with respect to microbiological quality, hygiene and food safety. We introduce a quality and safety assessment scheme for fresh pelagic fish recommending limits for heterotrophic plate counts (HPC), thermotolerant coliforms, enterococci and Listeria monocytogenes. According to the scheme, in 25 of 41 samplings, sub-optimal conditions were found with respect to quality, whereas in 21 and 9 samplings, samples were not in compliance concerning hygiene and food safety, respectively. The present study has revealed that the quality of pelagic fish can be optimised by improving the hygiene conditions at some critical points at an early phase of the production chain. Thus, the proposed assessment scheme may provide a useful tool for the industry to optimise quality and maintain consumer safety of pelagic fishery products. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Geospatial techniques for developing a sampling frame of watersheds across a region
Gresswell, Robert E.; Bateman, Douglas S.; Lienkaemper, George; Guy, T.J.
2004-01-01
Current land-management decisions that affect the persistence of native salmonids are often influenced by studies of individual sites that are selected based on judgment and convenience. Although this approach is useful for some purposes, extrapolating results to areas that were not sampled is statistically inappropriate because the sampling design is usually biased. Therefore, in recent investigations of coastal cutthroat trout (Oncorhynchus clarki clarki) located above natural barriers to anadromous salmonids, we used a methodology for extending the statistical scope of inference. The purpose of this paper is to apply geospatial tools to identify a population of watersheds and develop a probability-based sampling design for coastal cutthroat trout in western Oregon, USA. The population of mid-size watersheds (500-5800 ha) west of the Cascade Range divide was derived from watershed delineations based on digital elevation models. Because a database with locations of isolated populations of coastal cutthroat trout did not exist, a sampling frame of isolated watersheds containing cutthroat trout had to be developed. After the sampling frame of watersheds was established, isolated watersheds with coastal cutthroat trout were stratified by ecoregion and erosion potential based on dominant bedrock lithology (i.e., sedimentary and igneous). A stratified random sample of 60 watersheds was selected with proportional allocation in each stratum. By comparing watershed drainage areas of streams in the general population to those in the sampling frame and the resulting sample (n = 60), we were able to evaluate how representative the subset of watersheds was relative to the population of watersheds. Geospatial tools provided a relatively inexpensive means to generate the information necessary to develop a statistically robust, probability-based sampling design.
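The allocation step can be sketched directly; the data frame and column names below are hypothetical, but the logic (stratify by ecoregion crossed with lithology, then draw a proportional-allocation sample of 60) follows the design described above:

```python
# Sketch of a stratified random sample with proportional allocation from a
# watershed sampling frame. All columns and values are invented placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
frame = pd.DataFrame({
    "watershed_id": np.arange(900),
    "ecoregion": rng.choice(["Coast Range", "Cascades", "Klamath"], 900),
    "lithology": rng.choice(["sedimentary", "igneous"], 900),
    "area_ha": rng.uniform(500, 5800, 900),
})
frame["stratum"] = frame["ecoregion"] + " / " + frame["lithology"]

n_total = 60
counts = frame["stratum"].value_counts()
alloc = (n_total * counts / counts.sum()).round().astype(int)  # proportional (rounding may shift total by 1)

sample = pd.concat([g.sample(n=alloc[name], random_state=0)
                    for name, g in frame.groupby("stratum")])
print(sample["stratum"].value_counts())
print(f"sample vs frame median area (ha): {sample['area_ha'].median():.0f} vs {frame['area_ha'].median():.0f}")
```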
Li, Xiao-jun; Yi, Eugene C; Kemp, Christopher J; Zhang, Hui; Aebersold, Ruedi
2005-09-01
There is an increasing interest in the quantitative proteomic measurement of the protein contents of substantially similar biological samples, e.g. for the analysis of cellular response to perturbations over time or for the discovery of protein biomarkers from clinical samples. Technical limitations of current proteomic platforms such as limited reproducibility and low throughput make this a challenging task. A new LC-MS-based platform is able to generate complex peptide patterns from the analysis of proteolyzed protein samples at high throughput and represents a promising approach for quantitative proteomics. A crucial component of the LC-MS approach is the accurate evaluation of the abundance of detected peptides over many samples and the identification of peptide features that can stratify samples with respect to their genetic, physiological, or environmental origins. We present here a new software suite, SpecArray, that generates a peptide versus sample array from a set of LC-MS data. A peptide array stores the relative abundance of thousands of peptide features in many samples and is in a format identical to that of a gene expression microarray. A peptide array can be subjected to an unsupervised clustering analysis to stratify samples or to a discriminant analysis to identify discriminatory peptide features. We applied the SpecArray to analyze two sets of LC-MS data: one was from four repeat LC-MS analyses of the same glycopeptide sample, and another was from LC-MS analysis of serum samples of five male and five female mice. We demonstrate through these two study cases that the SpecArray software suite can serve as an effective software platform in the LC-MS approach for quantitative proteomics.
DESIGNING MONITORING AND ASSESSMENT STRATEGIES TO INCLUDE NEARSHORE ECOSYSTEMS OF THE GREAT LAKES
An expectation for monitoring and assessment of very large aquatic systems is that we can develop a strategy that recognizes and reports on ecologically-important subareas using spatially-stratified, probabilistic sampling designs. Ongoing efforts monitor the main-body, offshore ...
Education Needs of Michigan Farmers
ERIC Educational Resources Information Center
Suvedi, Murari; Jeong, Eunseong; Coombs, John
2010-01-01
In 2008 MSU Extension evaluated their program to identify the major areas of educational need for Michigan farmers and agribusiness operators. Surveys were mailed to a stratified random sample from Michigan Agricultural Statistics Service records of dairy, livestock, swine, cash crops, fruit, vegetable, and nursery/greenhouse producers. Findings…
New Mathematical Strategy Using Branch and Bound Method
NASA Astrophysics Data System (ADS)
Tarray, Tanveer Ahmad; Bhat, Muzafar Rasool
In this paper, the problem of optimal allocation in stratified random sampling in the presence of nonresponse is considered. The problem is formulated as a nonlinear programming problem (NLPP) and is solved using the Branch and Bound method. The results are also obtained using the LINGO software.
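For orientation, the classical closed-form Neyman allocation that such nonlinear-programming formulations generalize is shown below; the integer allocation under nonresponse in the paper comes from the Branch and Bound solution, not from this formula, and the stratum sizes and standard deviations here are made up:

```python
# Classical Neyman (optimal) allocation in stratified random sampling:
# n_h proportional to N_h * S_h. Illustrative numbers only.
import numpy as np

N_h = np.array([4000, 2500, 1500])      # stratum sizes (assumed)
S_h = np.array([6.0, 10.0, 18.0])       # stratum standard deviations (assumed)
n = 300                                 # total sample size

n_h = n * (N_h * S_h) / np.sum(N_h * S_h)
print("continuous allocation:", np.round(n_h, 1))
print("rounded allocation   :", np.round(n_h).astype(int))
```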
Bakri, Barbara; Weimer, Marco; Hauck, Gerrit; Reich, Gabriele
2015-11-01
The scope of the study was (1) to develop a lean quantitative calibration for real-time near-infrared (NIR) blend monitoring, which meets the requirements in early development of pharmaceutical products, and (2) to compare the prediction performance of this approach with the results obtained from stratified sampling using a sample thief in combination with off-line high pressure liquid chromatography (HPLC) and at-line near-infrared chemical imaging (NIRCI). Tablets were manufactured from powder blends and analyzed with NIRCI and HPLC to verify the real-time results. The model formulation contained 25% w/w naproxen as a cohesive active pharmaceutical ingredient (API), microcrystalline cellulose and croscarmellose sodium as cohesive excipients, and free-flowing mannitol. Five in-line NIR calibration approaches, all using the spectra from the end of the blending process as reference for PLS modeling, were compared in terms of selectivity, precision, prediction accuracy and robustness. High selectivity could be achieved with a "reduced" approach, i.e., an API- and time-saving approach (35% reduction of the API amount), based on six concentration levels of the API, with three levels realized by three independent powder blends and the additional levels obtained by simply increasing the API concentration in these blends. Accuracy and robustness were further improved by combining this calibration set with a second independent data set comprising different excipient concentrations and reflecting different environmental conditions. The combined calibration model was used to monitor the blending process of independent batches. For this model formulation the target concentration of the API could be achieved within 3 min, indicating a short blending time. The in-line NIR approach was verified by stratified sampling HPLC and NIRCI results. All three methods revealed comparable results regarding blend end point determination. Differences in both mean API concentration and RSD values could be attributed to differences in effective sample size and thief sampling errors. This conclusion was supported by HPLC and NIRCI analysis of tablets manufactured from powder blends after different blending times. In summary, the study clearly demonstrates the ability to develop efficient and robust quantitative calibrations for real-time NIR powder blend monitoring with a reduced set of powder blends while avoiding any bias caused by physical sampling. Copyright © 2015 Elsevier B.V. All rights reserved.
Availability of ground water in the Branch River basin; Providence County, Rhode Island
Johnston, H.E.; Dickerman, D.C.
1974-01-01
Stratified glacial drift consisting largely of sand and gravel constitutes the only aquifer capable of supporting continuous yields of 100 gpm (6.3 L/s) or more to individual wells. The aquifer covers about a third of the 79 mi² (205 km²) study area, occurring mainly in stream valleys that are less than a mile wide. Its saturated thickness is commonly 40 to 60 ft (12 to 18 m); its transmissivity is commonly 5,000 to 8,000 ft²/day (460 to 740 m²/day). The aquifer is hydraulically connected to streams that cross it, and much of the water from heavily pumped wells will consist of infiltration induced from them. Potential sustained yield from most parts of the aquifer is limited chiefly by the rate at which infiltration can be induced from streams or by low streamflow, whichever is smaller. Ground-water withdrawals deplete streamflow; and if large-scale development of ground water is not carefully planned and managed, periods of no streamflow may result during dry weather. Potential sustained yield varies with the scheme of well development, and is evaluated for selected areas by mathematically simulating pumping from assumed schemes of wells in models of the stream-aquifer system. Results indicate that sustained yields of 5.5, 3.4, 1.6, and 1.3 mgd (0.24, 0.15, 0.07, and 0.06 m³/s) can be obtained from the stratified-drift aquifer near Slatersville, Oakland, Harrisville, and Chepachet, respectively. Pumping at these rates will not cause streams to go dry if the water is returned to streams near points of withdrawal. A larger ground-water yield can be obtained if periods of no streamflow along reaches of principal streams are acceptable. Inorganic chemical quality of water in the stream-aquifer system is suitable for most purposes; the water is soft, slightly acidic, and generally contains less than 100 milligrams per litre of dissolved solids. Continued good quality of ground water depends on maintenance of good quality of water in streams, because much of the water pumped from wells will be infiltrated from streams.
A new processing scheme for ultra-high resolution direct infusion mass spectrometry data
NASA Astrophysics Data System (ADS)
Zielinski, Arthur T.; Kourtchev, Ivan; Bortolini, Claudio; Fuller, Stephen J.; Giorio, Chiara; Popoola, Olalekan A. M.; Bogialli, Sara; Tapparo, Andrea; Jones, Roderic L.; Kalberer, Markus
2018-04-01
High resolution, high accuracy mass spectrometry is widely used to characterise environmental or biological samples with highly complex composition enabling the identification of chemical composition of often unknown compounds. Despite instrumental advancements, the accurate molecular assignment of compounds acquired in high resolution mass spectra remains time consuming and requires automated algorithms, especially for samples covering a wide mass range and large numbers of compounds. A new processing scheme is introduced implementing filtering methods based on element assignment, instrumental error, and blank subtraction. Optional post-processing incorporates common ion selection across replicate measurements and shoulder ion removal. The scheme allows both positive and negative direct infusion electrospray ionisation (ESI) and atmospheric pressure photoionisation (APPI) acquisition with the same programs. An example application to atmospheric organic aerosol samples using an Orbitrap mass spectrometer is reported for both ionisation techniques resulting in final spectra with 0.8% and 8.4% of the peaks retained from the raw spectra for APPI positive and ESI negative acquisition, respectively.
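An illustrative sketch of the filtering steps named above (mass-error filtering against an assigned formula, blank subtraction, and common-ion selection across replicates); the peak lists, tolerance, and rounding precision are invented for the example and are not the authors' settings:

```python
# Toy peak-filtering pipeline for direct-infusion MS peak lists.
# Values and thresholds are illustrative assumptions only.
import numpy as np
import pandas as pd

tolerance_ppm = 2.0   # assumed instrumental mass-error tolerance

def filter_run(peaks, blank_mz):
    ppm_error = 1e6 * (peaks["mz"] - peaks["assigned_mz"]).abs() / peaks["assigned_mz"]
    ok = ppm_error <= tolerance_ppm                       # instrumental-error filter
    in_blank = peaks["mz"].round(4).isin(np.round(blank_mz, 4))
    return peaks[ok & ~in_blank]                          # blank subtraction

def common_ions(runs):
    keys = [set(r["mz"].round(4)) for r in runs]
    return set.intersection(*keys)                        # ions present in every replicate

# toy example: two identical replicates and one blank ion
rep = pd.DataFrame({"mz": [212.0712, 301.1405, 413.2664],
                    "assigned_mz": [212.0710, 301.1399, 413.2663]})
blank = np.array([301.1405])
print(filter_run(rep, blank))
print(sorted(common_ions([filter_run(rep, blank), filter_run(rep.copy(), blank)])))
```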
Use of fundus autofluorescence images to predict geographic atrophy progression.
Bearelly, Srilaxmi; Khanifar, Aziz A; Lederer, David E; Lee, Jane J; Ghodasra, Jason H; Stinnett, Sandra S; Cousins, Scott W
2011-01-01
Fundus autofluorescence imaging has been shown to be helpful in predicting progression of geographic atrophy (GA) secondary to age-related macular degeneration. We assess the ability of fundus autofluorescence imaging to predict rate of GA progression using a simple categorical scheme. Subjects with GA secondary to age-related macular degeneration with fundus autofluorescence imaging acquired at least 12 months apart were included. Rim area focal hyperautofluorescence was defined as percentage of the 500-μm-wide margin bordering the GA that contained increased autofluorescence. Rim area focal hyperautofluorescence on baseline fundus autofluorescence images was assessed and categorized depending on the extent of rim area focal hyperautofluorescence (Category 1: ≤33%; Category 2: between 33 and 67%; Category 3: ≥67%). Total GA areas at baseline and follow-up were measured to calculate change in GA progression. Forty-five eyes of 45 subjects were included; average duration of follow-up was 18.5 months. Median growth rates differed among categories of baseline rim area focal hyperautofluorescence (P = 0.01 among Categories 1, 2, and 3; P = 0.008 for Category 1 compared with Category 3, Jonckheere-Terpstra test). A simple categorical scheme that stratifies the amount of increased autofluorescence in the 500-μm margin bordering GA may be used to differentiate faster and slower progressors.
Shanthi, C; Pappa, N
2017-05-01
Flow pattern recognition is necessary to select design equations for finding operating details of the process and to perform computational simulations. Visual image processing can be used to automate the interpretation of patterns in two-phase flow. In this paper, an attempt has been made to improve the classification accuracy of the flow pattern of gas/liquid two-phase flow using fuzzy logic and Support Vector Machine (SVM) with Principal Component Analysis (PCA). The videos of six different types of flow patterns, namely annular flow, bubble flow, churn flow, plug flow, slug flow and stratified flow, are recorded for a period and converted to 2D images for processing. The textural and shape features extracted using image processing are applied as inputs to various classification schemes, namely fuzzy logic, SVM and SVM with PCA, in order to identify the type of flow pattern. The results obtained are compared, and it is observed that SVM with features reduced using PCA gives better classification accuracy and is computationally less intensive than the other two existing schemes. These results cover industrial application needs, including oil and gas and any other gas-liquid two-phase flows. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
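A compact sketch of the best-performing scheme (PCA for dimensionality reduction feeding an SVM), using random placeholder features instead of the paper's textural and shape descriptors:

```python
# PCA + SVM classification pipeline for flow-pattern labels.
# Features are synthetic placeholders with injected class structure.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
classes = ["annular", "bubble", "churn", "plug", "slug", "stratified"]
X = rng.normal(size=(600, 40))                  # stand-in for textural + shape features per frame
y = rng.integers(0, len(classes), 600)
X += np.eye(len(classes), 40)[y] * 3.0          # inject class structure so the demo separates

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=10.0))
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```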
Power of sign surveys to monitor population trend
Kendall, Katherine C.; Metzgar, Lee H.; Patterson, David A.; Steele, Brian M.
1992-01-01
The urgent need for an effective monitoring scheme for grizzly bear (Ursus arctos) populations led us to investigate the effort required to detect changes in populations of low-density, dispersed animals, using sign (mainly scats and tracks) they leave on trails. We surveyed trails in Glacier National Park for bear tracks and scats during five consecutive years. Using these data, we modeled the occurrence of bear sign on trails, then estimated the power of various sampling schemes. Specifically, we explored the power of bear sign surveys to detect a 20% decline in sign occurrence. Realistic sampling schemes appear feasible if the density of sign is high enough, and we provide guidelines for designs with adequate replication to monitor long-term trends of dispersed populations using sign occurrences on trails.
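A simulation sketch of the power question posed above, under a deliberately simple Bernoulli sign-occurrence model rather than the paper's fitted model; it asks how often a one-sided two-proportion test detects a 20% decline for a few assumed survey sizes:

```python
# Simulated power to detect a 20% decline in sign occurrence between two
# surveys. Occurrence probabilities and survey sizes are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def power(n_segments, p_before, decline=0.20, alpha=0.05, reps=2000):
    p_after = p_before * (1 - decline)
    hits = 0
    for _ in range(reps):
        x1 = rng.binomial(n_segments, p_before)   # sign detections, survey 1
        x2 = rng.binomial(n_segments, p_after)    # sign detections, survey 2
        p1, p2 = x1 / n_segments, x2 / n_segments
        pooled = (x1 + x2) / (2 * n_segments)
        se = np.sqrt(2 * pooled * (1 - pooled) / n_segments)
        if se > 0 and (p1 - p2) / se > stats.norm.ppf(1 - alpha):
            hits += 1
    return hits / reps

for n in (100, 300, 600):
    print(f"n = {n:4d} trail segments per survey -> power ~ {power(n, p_before=0.4):.2f}")
```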
The role of perceived barriers and objectively measured physical activity in adults aged 65-100.
Gellert, Paul; Witham, Miles D; Crombie, Iain K; Donnan, Peter T; McMurdo, Marion E T; Sniehotta, Falko F
2015-05-01
To test the predictive utility of perceived barriers to objectively measured physical activity levels in a stratified sample of older adults when accounting for social-cognitive determinants proposed by the Theory of Planned Behaviour (TPB), and economic and demographic factors. Data were analysed from the Physical Activity Cohort Scotland survey, a representative and stratified (65-80 and 80+ years; deprived and affluent) sample of 584 community-dwelling older people, resident in Tayside, Scotland. Physical activity was measured objectively by accelerometry. Perceived barriers clustered around the areas of poor health, lack of interest, lack of safety and lack of access. Perceived poor health and lack of interest, but not lack of access or concerns about personal safety, predicted physical activity after controlling for demographic, economic and TPB variables. Perceived person-related barriers (poor health and lack of interest) seem to be more strongly associated with physical activity levels than perceived environmental barriers (safety and access) in a large sample of older adults. Perceived barriers are modifiable and may be a target for future interventions. © The Author 2015. Published by Oxford University Press on behalf of the British Geriatrics Society. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-01-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037
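A small simulation in the spirit of the comparison above: estimate how often a 67×3 cluster sample misclassifies an area against an assumed decision rule when clusters share an intracluster correlation (modelled here with a beta-binomial); thresholds and parameters are illustrative, not the paper's:

```python
# Classification error of an LQAS-style cluster sample (67 clusters of 3)
# under a beta-binomial model for cluster-level prevalence. Illustrative only.
import numpy as np

rng = np.random.default_rng(8)

def misclassification(true_prev, n_clusters=67, m=3, icc=0.1,
                      decision_rule=0.10, reps=5000):
    # beta-binomial parameters giving the requested intracluster correlation
    a = true_prev * (1 - icc) / icc
    b = (1 - true_prev) * (1 - icc) / icc
    errors = 0
    for _ in range(reps):
        cluster_p = rng.beta(a, b, size=n_clusters)
        cases = rng.binomial(m, cluster_p).sum()
        classified_high = cases / (n_clusters * m) >= decision_rule
        if classified_high != (true_prev >= decision_rule):
            errors += 1
    return errors / reps

print(f"P(classified low  | true prevalence 15%): {misclassification(0.15):.3f}")
print(f"P(classified high | true prevalence  5%): {misclassification(0.05):.3f}")
```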
A Scheme for Regrouping WISC-R Subtests.
ERIC Educational Resources Information Center
Groff, Martin G.; Hubble, Larry M.
1984-01-01
Reviews WISC-R factor analytic findings for developing a scheme for regrouping WISC-R subtests, consisting of verbal comprehension and spatial subtests. Subtests comprising these groupings are shown to have more common variance than specific variance and cluster together consistently across the samples of WISC-R scores. (Author/JAC)
Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.
Jung, Sin-Ho
2017-07-01
In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.
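A simulation sketch of the design question (not the paper's sample size formula): power of a stratified one-sample log-rank test when each stratum has its own historical-control hazard but the treatment reduces the hazard by a common factor; the prevalences, hazards, and follow-up below are assumptions:

```python
# Simulated power of a stratified one-sample log-rank test in a single-arm
# phase II design with heterogeneous subpopulations. All inputs are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
prevalence = np.array([0.6, 0.4])        # subpopulation mix
null_hazard = np.array([0.30, 0.60])     # historical (null) exponential hazards per year
hazard_ratio = 0.7                       # common treatment effect across strata
followup = 3.0                           # years of administrative censoring

def one_trial(n):
    strat = rng.choice(len(prevalence), size=n, p=prevalence)
    t = rng.exponential(1.0 / (hazard_ratio * null_hazard[strat]))
    obs_time = np.minimum(t, followup)
    event = t <= followup
    # one-sample log-rank: O - E, with E the sum of null cumulative hazards
    # evaluated at each subject's observed (possibly censored) time
    O = event.sum()
    E = np.sum(null_hazard[strat] * obs_time)
    z = (O - E) / np.sqrt(E)
    return z < stats.norm.ppf(0.05)      # one-sided 5% test for a reduced hazard

for n in (50, 100, 150):
    power = np.mean([one_trial(n) for _ in range(3000)])
    print(f"n = {n:3d}: simulated power ~ {power:.2f}")
```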
Computers in Public Education Study.
ERIC Educational Resources Information Center
HBJ Enterprises, Highland Park, NJ.
This survey conducted for the National Institute of Education reports the use of computers in U.S. public schools in the areas of instructional computing, student accounting, management of educational resources, research, guidance, testing, and library applications. From a stratified random sample of 1800 schools in varying geographic areas and…
Self-identity Changes and English Learning among Chinese Undergraduates.
ERIC Educational Resources Information Center
Yihong, Gao; Ying, Cheng; Yuan, Zhao; Yan, Zhou
2005-01-01
This quantitative study investigated Chinese college students' self-identity changes associated with English learning. The subjects were 2,278 undergraduates from 30 universities, obtained through stratified sampling. Based on the existing literature on bilingual identities, the self-designed questionnaire defined six categories of self-identity…
Near-infrared spectroscopy used to predict soybean seed germination and vigor
USDA-ARS?s Scientific Manuscript database
The potential of using near-infrared (NIR) spectroscopy for differentiating levels in germination, vigor, and electrical conductivity of soybean seeds was investigated. For the 243 spectral data collected using the Perten DA7200, stratified sampling was used to obtain three calibration sets consisti...
Heredity and Environment in the Development of Intelligence
ERIC Educational Resources Information Center
Migliorino, Giuseppe
1974-01-01
Intelligence tests were administered to a stratified sample of 4058 school children from Palermo, Sicily. I.Q. scores were found to be positively correlated with socioeconomic status and negatively related to family size. As birth order increased, mental development decreased. Implications for future research were discussed. (EH)
Factors Affecting Retirement Attitude among Elementary School Teachers
ERIC Educational Resources Information Center
Hsu, Wan-Chen; Chiang, Chia-Hsun; Chuang, Hsueh-Hua
2015-01-01
This study investigated the relationships of teacher efficacy, perceived organizational control, and the teacher-student age gap with teachers' retirement attitudes. Stratified random sampling was adopted to collect survey responses. A total of 498 valid surveys from 33 elementary schools were collected. Correlational analyses revealed significant…
Singaporean Kindergartners' Phonological Awareness and English Writing Skills
ERIC Educational Resources Information Center
Dixon, L. Quentin
2011-01-01
This article describes the phonological awareness and English writing skills among a sample of 297 Singaporean kindergarten children, stratified by ethnicity (Chinese, Malay, and Indian), and examines the relationship between oral language and writing skills in this multilingual population. Overall, Singaporean kindergartners, nearly all of whom…
Multi-Sensory Intervention Observational Research
ERIC Educational Resources Information Center
Thompson, Carla J.
2011-01-01
An observational research study based on sensory integration theory was conducted to examine the observed impact of student selected multi-sensory experiences within a multi-sensory intervention center relative to the sustained focus levels of students with special needs. A stratified random sample of 50 students with severe developmental…
ERIC Educational Resources Information Center
Holmes, Erin Kramer; Dunn, KayLee C.; Harper, James; Dyer, W. Justin; Day, Randal D.
2013-01-01
We used structural equation modeling to explore associations between inhibitory maternal gatekeeping attitudes, reports of inhibitory maternal gatekeeping behaviors, maternal psychological control, observed mother-adolescent warmth, and adolescent reports of maternal involvement. Our random stratified sample consisted of 315 mothers and their…
Teachers' Perceptions of the Relevance and Usefulness of Professional Development
ERIC Educational Resources Information Center
Shoemaker, Susan F.
2013-01-01
The purpose of this qualitative, phenomenological study was to investigate, through interviews, secondary teachers' perceptions of the level of the value, applicability, and implementation of skills learned within professional development offerings in the targeted school district. Non-probability, stratified, purposeful sampling was utilized to…
Bhatia, Triptish; Gettig, Elizabeth A; Gottesman, Irving I; Berliner, Jonathan; Mishra, N N; Nimgaonkar, Vishwajit L; Deshpande, Smita N
2016-12-01
Schizophrenia (SZ) has an estimated heritability of 64-88%, with the higher values based on twin studies. Conventionally, family history of psychosis is the best individual-level predictor of risk, but reliable risk estimates are unavailable for Indian populations. Genetic, environmental, and epigenetic factors are equally important and should be considered when predicting risk in 'at risk' individuals. The aim was to estimate risk based on an Indian schizophrenia participant's family history combined with selected demographic factors. To incorporate variables in addition to family history, and to stratify risk, we constructed a regression equation that included demographic variables in addition to family history. The equation was tested in two independent Indian samples: (i) an initial sample of SZ participants (N=128) with one sibling or offspring; (ii) a second, independent sample consisting of multiply affected families (N=138 families, with two or more sibs/offspring affected with SZ). The overall estimated risk was 4.31±0.27 (mean±standard deviation). In the initial sample, there were 19 (14.8%) individuals in the high-risk group, 75 (58.6%) in the moderate-risk group, and 34 (26.6%) in the above-average-risk group. In the validation sample, risks were distributed as: high (45%), moderate (38%) and above average (17%). Consistent risk estimates were obtained from both samples using the regression equation. Familial risk can be combined with demographic factors to estimate risk for SZ in India. If replicated, the proposed stratification of risk may be easier and more realistic for family members. Copyright © 2016. Published by Elsevier B.V.
2015-05-12
Deficiencies That Affect the Reliability of Estimates: Statistical Precision Could Be Improved... statistical precision of improper payments estimates in seven of the DoD payment programs through the use of stratified sample designs. DoD improper... payments not subject to sampling, which made the results statistically invalid. We made a recommendation to correct this problem in a previous report;
Robert A. Slesak; Stephen H. Schoenholtz; Timothy B. Harrington; Nathan A. Meehan
2011-01-01
We assessed the effect of harvest type (bole-only or whole-tree) and vegetation control treatments (initial or annual application of herbicide) on soil C and N at two contrasting sites in the Pacific Northwest. Pretreatment (2003) and posttreatment (2005) soil samples were collected by depth to 60 cm, and a stratified sampling approach based on four surface conditions...
Martin, Petra; Biniecka, Monika; Ó'Meachair, Shane; Maguire, Aoife; Tosetto, Miriam; Nolan, Blathnaid; Hyland, John; Sheahan, Kieran; O'Donoghue, Diarmuid; Mulcahy, Hugh; Fennelly, David; O'Sullivan, Jacintha
2018-01-01
Despite treatment of patients with metastatic colorectal cancer (mCRC) with bevacizumab plus chemotherapy, response rates are modest and there are no biomarkers available that will predict response. The aim of this study was to assess whether markers associated with three interconnected cancer-associated biological processes, specifically angiogenesis, inflammation and oxidative damage, could stratify the survival outcome of this cohort. Levels of angiogenesis, inflammation and oxidative damage markers were assessed in pre-bevacizumab resected tumour and serum samples of mCRC patients by dual immunofluorescence, immunohistochemistry and ELISA. This study identified that specific markers of angiogenesis, inflammation and oxidative damage stratify the survival of patients on this anti-angiogenic treatment. Biomarkers of immature tumour vasculature (% IMM, p=0.026, n=80), high levels of oxidative damage in the tumour epithelium (intensity of 8-oxo-dG in nuclear and cytoplasmic compartments, p=0.042 and 0.038, respectively, n=75) and lower systemic pro-inflammatory cytokines (IL6 and IL8, p=0.053 and 0.049, respectively, n=61) significantly stratified median overall survival (OS). In summary, screening for a panel of biomarkers for high levels of immature tumour vasculature, high levels of oxidative DNA damage and low levels of systemic pro-inflammatory cytokines may be beneficial in predicting enhanced survival outcome following bevacizumab treatment for mCRC. PMID:29535825
A Typology of Mixed Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.
2007-01-01
This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…
2014-03-01
Trees and woody vines are sampled in large plots with 9 m (30 ft) radii. Saplings, shrubs, and herbs are sampled in nested smaller plots with 2 m (5 ft... woody vines in 9 m (30 ft) radius plots and saplings, shrubs, and herbaceous species in 2 m (5 ft) radius plots. In herbaceous meadows, only the 2 m (5...suggests stratifying vegetation by growth forms of trees, shrubs, herbs, and vines and sampling plant communities by using nested circular plots
NASA Astrophysics Data System (ADS)
Lee, Kang Il
2012-08-01
The present study aims to provide insight into the relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 22 bovine femoral trabecular bone samples by using a pair of transducers with a diameter of 25.4 mm and a center frequency of 0.5 MHz. The phase velocity exhibited positive correlation coefficients of 0.48 and 0.32 with the ratio of bone volume to total volume and the trabecular thickness, respectively, but a negative correlation coefficient of -0.62 with the trabecular separation. The best univariate predictor of the phase velocity was the trabecular separation, yielding an adjusted squared correlation coefficient of 0.36. The multivariate regression models yielded adjusted squared correlation coefficients of 0.21-0.36. The theoretical phase velocity predicted by using a stratified model for wave propagation in periodically stratified media consisting of alternating parallel solid-fluid layers showed reasonable agreements with the experimental measurements.
Dorazio, R.M.; Rago, P.J.
1991-01-01
We simulated mark–recapture experiments to evaluate a method for estimating fishing mortality and migration rates of populations stratified at release and recovery. When fish released in two or more strata were recovered from different recapture strata in nearly the same proportions, conditional recapture probabilities were estimated outside the [0, 1] interval. The maximum likelihood estimates tended to be biased and imprecise when the patterns of recaptures produced extremely "flat" likelihood surfaces. Absence of bias was not guaranteed, however, in experiments where recapture rates could be estimated within the [0, 1] interval. Inadequate numbers of tag releases and recoveries also produced biased estimates, although the bias was easily detected by the high sampling variability of the estimates. A stratified tag–recapture experiment with sockeye salmon (Oncorhynchus nerka) was used to demonstrate procedures for analyzing data that produce biased estimates of recapture probabilities. An estimator was derived to examine the sensitivity of recapture rate estimates to assumed differences in natural and tagging mortality, tag loss, and incomplete reporting of tag recoveries.
A country-wide probability sample of public attitudes toward stuttering in Portugal.
Valente, Ana Rita S; St Louis, Kenneth O; Leahy, Margaret; Hall, Andreia; Jesus, Luis M T
2017-06-01
Negative public attitudes toward stuttering have been widely reported, although differences among countries and regions exist. Clear reasons for these differences remain obscure. Published research is unavailable on public attitudes toward stuttering in Portugal as well as a representative sample that explores stuttering attitudes in an entire country. This study sought to (a) determine the feasibility of a country-wide probability sampling scheme to measure public stuttering attitudes in Portugal using a standard instrument (the Public Opinion Survey of Human Attributes-Stuttering [POSHA-S]) and (b) identify demographic variables that predict Portuguese attitudes. The POSHA-S was translated to European Portuguese through a five-step process. Thereafter, a local administrative office-based, three-stage, cluster, probability sampling scheme was carried out to obtain 311 adult respondents who filled out the questionnaire. The Portuguese population held stuttering attitudes that were generally within the average range of those observed from numerous previous POSHA-S samples. Demographic variables that predicted more versus less positive stuttering attitudes were respondents' age, region of the country, years of school completed, working situation, and number of languages spoken. Non-predicting variables were respondents' sex, marital status, and parental status. A local administrative office-based, probability sampling scheme generated a respondent profile similar to census data and indicated that Portuguese attitudes are generally typical. Copyright © 2017 Elsevier Inc. All rights reserved.
Bubble-free on-chip continuous-flow polymerase chain reaction: concept and application.
Wu, Wenming; Kang, Kyung-Tae; Lee, Nae Yoon
2011-06-07
Bubble formation inside a microscale channel is a significant problem in general microfluidic experiments. The problem becomes especially crucial when performing a polymerase chain reaction (PCR) on a chip which is subject to repetitive temperature changes. In this paper, we propose a bubble-free sample injection scheme applicable for continuous-flow PCR inside a glass/PDMS hybrid microfluidic chip, and attempt to provide a theoretical basis concerning bubble formation and elimination. Highly viscous paraffin oil plugs are employed in both the anterior and posterior ends of a sample plug, completely encapsulating the sample and eliminating possible nucleation sites for bubbles. In this way, internal channel pressure is increased, and vaporization of the sample is prevented, suppressing bubble formation. Use of an oil plug in the posterior end of the sample plug aids in maintaining a stable flow of a sample at a constant rate inside a heated microchannel throughout the entire reaction, as compared to using an air plug. By adopting the proposed sample injection scheme, we demonstrate various practical applications. On-chip continuous-flow PCR is performed employing genomic DNA extracted from a clinical single hair root sample, and its D1S80 locus is successfully amplified. Also, chip reusability is assessed using a plasmid vector. A single chip is used up to 10 times repeatedly without being destroyed, maintaining almost equal intensities of the resulting amplicons after each run, ensuring the reliability and reproducibility of the proposed sample injection scheme. In addition, the use of a commercially-available and highly cost-effective hot plate as a potential candidate for the heating source is investigated.
Ginzburg, Irina; Silva, Goncalo; Talon, Laurent
2015-02-01
This work focuses on the numerical solution of the Stokes-Brinkman equation for a voxel-type porous-media grid, resolved by one to eight spacings per permeability contrast of 1 to 10 orders in magnitude. It is first analytically demonstrated that the lattice Boltzmann method (LBM) and the linear-finite-element method (FEM) both suffer from the viscosity correction induced by the linear variation of the resistance with the velocity. This numerical artefact may lead to an apparent negative viscosity in low-permeable blocks, inducing spurious velocity oscillations. The two-relaxation-times (TRT) LBM may control this effect thanks to free-tunable two-rates combination Λ. Moreover, the Brinkman-force-based BF-TRT schemes may maintain the nondimensional Darcy group and produce viscosity-independent permeability provided that the spatial distribution of Λ is fixed independently of the kinematic viscosity. Such a property is lost not only in the BF-BGK scheme but also by "partial bounce-back" TRT gray models, as shown in this work. Further, we propose a consistent and improved IBF-TRT model which vanishes viscosity correction via simple specific adjusting of the viscous-mode relaxation rate to local permeability value. This prevents the model from velocity fluctuations and, in parallel, improves for effective permeability measurements, from porous channel to multidimensions. The framework of our exact analysis employs a symbolic approach developed for both LBM and FEM in single and stratified, unconfined, and bounded channels. It shows that even with similar bulk discretization, BF, IBF, and FEM may manifest quite different velocity profiles on the coarse grids due to their intrinsic contrasts in the setting of interface continuity and no-slip conditions. While FEM enforces them on the grid vertexes, the LBM prescribes them implicitly. We derive effective LBM continuity conditions and show that the heterogeneous viscosity correction impacts them, a property also shared by FEM for shear stress. But, in contrast with FEM, effective velocity conditions in LBM give rise to slip velocity jumps which depend on (i) neighbor permeability values, (ii) resolution, and (iii) control parameter Λ, ranging its reliable values from Poiseuille bounce-back solution in open flow to zero in Darcy's limit. We suggest an "upscaling" algorithm for Λ, from multilayers to multidimensions in random extremely dispersive samples. Finally, on the positive side for LBM besides its overall versatility, the implicit boundary layers allow for smooth accommodation of the flat discontinuous Darcy profiles, quite deficient in FEM.
Miller, Todd S.
2015-11-20
During 2007–10, groundwater samples were collected from 13 wells including 7 wells that are completed in the confined sand and gravel aquifers, 1 well that is completed in the unconfined aquifer, and 5 wells that are completed in the bedrock aquifers. Calcium dominates the cation composition and bicarbonate dominates the anion composition in most groundwater. Water quality in the study area generally meets state and Federal drinking-water standards but concentrations of some constituents exceeded the standards. The standards that were exceeded include sodium (3 samples), dissolved solids (1 sample), iron (3 samples), manganese (8 samples), and arsenic (1 sample).
Digital Baseband Architecture For Transponder
NASA Technical Reports Server (NTRS)
Nguyen, Tien M.; Yeh, Hen-Geul
1995-01-01
Proposed advanced transponder for long-distance radio communication system with turnaround ranging contains carrier-signal-tracking loop including baseband digital "front end." For reduced cost, transponder includes analog intermediate-frequency (IF) section and analog automatic gain control (AGC) loop at first of two IF mixers. However, second IF mixer redesigned to ease digitization of baseband functions. To conserve power and provide for simpler and smaller transponder hardware, baseband digital signal-processing circuits designed to implement undersampling scheme. Furthermore, sampling scheme and sampling frequency chosen so redesign involves minimum modification of command-detector unit (CDU).
Kindergarten Teachers' Experience with Reporting Child Abuse in Taiwan
ERIC Educational Resources Information Center
Feng, Jui-Ying; Huang, Tzu-Yi; Wang, Chi-Jen
2010-01-01
Objective: The objectives were to examine factors associated with reporting child abuse among kindergarten teachers in Taiwan based on the Theory of Planned Behavior (TPB). Method: A stratified quota sampling technique was used to randomly select kindergarten teachers in Taiwan. The Child Abuse Intention Report Scale, which includes demographics,…
A Dexterous Optional Randomized Response Model
ERIC Educational Resources Information Center
Tarray, Tanveer A.; Singh, Housila P.; Yan, Zaizai
2017-01-01
This article addresses the problem of estimating the proportion Pi[subscript S] of the population belonging to a sensitive group using optional randomized response technique in stratified sampling based on Mangat model that has proportional and Neyman allocation and larger gain in efficiency. Numerically, it is found that the suggested model is…
Influence of Achievement Motivation on Nigerian Undergraduates' Attitude towards Examination
ERIC Educational Resources Information Center
Adegboyega, Lateef Omotosho
2018-01-01
This paper investigated the influence of achievement motivation on Nigerian undergraduates' attitude towards examination. Descriptive survey of the correlational type was employed for the study. One thousand, five hundred and thirty-six (1,536) undergraduates in Nigeria were drawn using purposive and stratified sampling techniques. Four research…
Psychological Distress and Related Factors in Female College Students
ERIC Educational Resources Information Center
Vazquez, Fernando L.; Otero, Patricia; Diaz, Olga
2012-01-01
Objective: This study assessed the psychological distress in Spanish college women and analyzed it in relation to sociodemographic and academic factors. Participants and Methods: The authors selected a stratified random sampling of 1,043 college women (average age of 22.2 years). Sociodemographic and academic information were collected, and…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-23
... the extension for three years, with revision, of the following report: Report title: Domestic Finance.... Reporters: Domestic finance companies and mortgage companies. Estimated annual reporting hours: 350 hours... calendar day of the month from a stratified sample of finance companies. Each monthly report collects...
Affluence and Equality in Nonmetropolitan American, 1950-1970.
ERIC Educational Resources Information Center
Beck, E. M.; Bianchi, S.
Utilizing census data on a stratified sample of 276 U.S. counties, the relationships between economic development and the levels of affluence and inequality in rural America (1950-70) were investigated via development of a macro-level affluence/inequality model. Variables examined were: demographic and social characteristics; income…
Physical Activity among Older People Living Alone in Shanghai, China
ERIC Educational Resources Information Center
Chen, Yu; While, Alison E; Hicks, Allan
2015-01-01
Objective: To investigate physical activity among older people living alone in Shanghai, People's Republic of China, and key factors contributing to their physical activity. Methods: A cross-sectional questionnaire survey was administered in nine communities in Shanghai, using a stratified random cluster sample: 521 community-dwelling older people…
The Relationship between Affective and Social Isolation among Undergraduate Students
ERIC Educational Resources Information Center
Alghraibeh, Ahmad M.; Juieed, Noof M. Bni
2018-01-01
We examined the correlation between social isolation and affective isolation among 457 undergraduate students using a stratified cluster sampling technique. Participants comprised 221 men and 236 women, all of whom were either first- or fourth-year students enrolled in various majors at King Saud University. Means, standard deviations, Pearson…
Trends in Financial Indicators of Colleges and Universities.
ERIC Educational Resources Information Center
Gomberg, Irene L.; Atelsek, Frank J.
A survey conducted by the Higher Education Panel sought trends in various items of information about the financial condition of colleges and universities. A stratified sample of 760 institutions was used, excluding major research universities. Information was requested on basic finance data, dormitory occupancy rates, occurrence of institutional…
Determinants of Teachers' Attitudes towards E- Learning in Tanzanian Higher Learning Institutions
ERIC Educational Resources Information Center
Kisanga, Dalton H.
2016-01-01
This survey research study presents the findings on determinants of teachers' attitudes towards e-learning in Tanzanian higher learning institutions. The study involved 258 teachers from 4 higher learning institutions obtained through stratified, simple random sampling. Questionnaires and documentary review were used in data collection. Data were…
Empirically Exploring Higher Education Cultures of Assessment
ERIC Educational Resources Information Center
Fuller, Matthew B.; Skidmore, Susan T.; Bustamante, Rebecca M.; Holzweiss, Peggy C.
2016-01-01
Although touted as beneficial to student learning, cultures of assessment have not been examined adequately using validated instruments. Using data collected from a stratified, random sample (N = 370) of U.S. institutional research and assessment directors, the models tested in this study provide empirical support for the value of using the…
Drug and Alcohol Use by Canadian University Athletes: A National Survey.
ERIC Educational Resources Information Center
Spence, John C.; Gauvin, Lise
1996-01-01
Using a stratified random sampling procedure, 754 student athletes were surveyed regarding drug and alcohol use in eight different sports from eight universities across Canada. Provides statistics of substances athletes reported using, including pain medications, weight loss products, anabolic steroids, smokeless tobacco products, alcohol,…
Predictors of Eligibility for ESY. Final Report.
ERIC Educational Resources Information Center
Browder, Diane M.; And Others
Evaluation of eligibility for extended school year (ESY) services was made based on information contained in school files in a stratified sampling across Pennsylvania. Subjects had been classified as severely and profoundly mentally retarded and were divided into groups based on eligibility for programming in excess of 180 days or ineligibility for…
Women in University Management: The Nigerian Experience
ERIC Educational Resources Information Center
Abiodun-Oyebanji, Olayemi; Olaleye, F.
2011-01-01
This study examined women in university management in Nigeria. It was a descriptive research of the survey type. The population of the study comprised all the public universities in southwest Nigeria, out of which three were selected through the stratified random sampling technique. Three hundred respondents who were in management positions were…
DOT National Transportation Integrated Search
1977-03-01
The objectives of the study were to collect and analyze data on rural pedestrian accidents and to identify potential countermeasures. Data on a stratified random sample of over 1,500 rural and suburban accidents from six states was collected during i...
ERIC Educational Resources Information Center
Volkman, Julie E.; Hillemeier, Marianne M.
2008-01-01
This study examined school nurses' communication with community physicians and its relationship to school nurse satisfaction with school health services. A stratified random sample of school nurses in Pennsylvania (N = 615) were surveyed about communication effectiveness with community physicians, satisfaction with school health services for…
Water-Oriented Recreational Demand and Projections: Calculations for Western Lake Superior.
1978-06-15
Documented boats-every 36th registration. The stratified sample gives maximum discrimination to boats over 20 feet in length. These are the boats that are...large boats. Because documented boats, by their nature, are rarely trailered, it was believed that less discrimination was required. Therefore, a smaller
The Contours of Tracking in North Carolina
ERIC Educational Resources Information Center
Kelly, Sean
2007-01-01
In this analysis of North Carolina high schools the author examines school tracking policies using an amended version of Sorensen's (1970) conceptualization of the organizational dimensions of tracking. Data from curriculum guides in a stratified sample of 92 high schools reveal both consistency and variation in how tracking is implemented at the…
Training Neighborhood Residents to Conduct a Survey
ERIC Educational Resources Information Center
Back, Susan Malone; Tseng, Wan-Chun; Li, Jiaqi; Wang, Yuanhua; Phan, Van Thanh; Yeter, Ibrahim Halil
2015-01-01
As a requirement for a federal neighborhood revitalization grant, the authors trained resident interviewers and coordinated the conduct of more than 1000 door-to-door interviews of a stratified random sample. The targeted area was a multiethnic, lower income neighborhood that continues to experience the effects of past segregation. Monitoring and…
APPLICATION OF A MULTIPURPOSE UNEQUAL-PROBABILITY STREAM SURVEY IN THE MID-ATLANTIC COASTAL PLAIN
A stratified random sample with unequal-probability selection was used to design a multipurpose survey of headwater streams in the Mid-Atlantic Coastal Plain. Objectives for data from the survey include unbiased estimates of regional stream conditions, and adequate coverage of un...
Change in Sense of Community: An Empirical Finding
ERIC Educational Resources Information Center
Loomis, Colleen; Dockett, Kathleen H.; Brodsky, Anne E.
2004-01-01
This study investigated changes in students' psychological sense of community (SOC) under two conditions of external threat against their urban, historically Black, public nonresidential university in a U.S. mid-Atlantic city. Two independent stratified random samples (N = 801 and N = 241) consisting of undergraduate and graduate women (61%) and…
We assessed the extent and characteristics of geographically isolated wetlands (i.e., wetlands completely surrounded by upland) in a series of drainage basins in the urban northeast U.S. We employed a random sampling design that stratifies study sites according to their degree o...
Psychological Security-Insecurity of Illinois Central College Students.
ERIC Educational Resources Information Center
Grout, David R.
This study attempted to discover the distribution of feelings of security and insecurity in the population of Illinois Central College (ICC) and whether significant differences exist among various subgroups. A 10 per cent stratified random sample of students were administered Maslow's Security-Insecurity Inventory. No significant difference was…
Homophobia in Registered Nurses: Impact on LGB Youth
ERIC Educational Resources Information Center
Blackwell, Christopher W.; Kiehl, Ermalynn M.
2008-01-01
This study examined registered nurses' overall attitudes and homophobia towards gays and lesbians in the workplace. Homophobia scores, represented by the Attitudes Toward Lesbians and Gay Men (ATLG) Scale, served as the dependent variable. Overall homophobia scores were assessed among a randomized stratified sample of registered nurses licensed in the…
The Analysis of Iranian Students' Persistence in Online Education
ERIC Educational Resources Information Center
Mahmodi, Mahdi; Ebrahimzade, Issa
2015-01-01
In the following research, the relationship between instructional interaction and student persistence in e-learning has been analyzed. In order to conduct a descriptive-analytic survey, 744 undergraduate e-students were selected by stratified random sampling method to examine not only the frequency and the methods of establishing an instructional…
DOT National Transportation Integrated Search
1977-06-01
The objectives of the study were to collect and analyze data on rural pedestrian accidents and to identify potential countermeasures. Data on a stratified random sample of over 1,500 rural and suburban accidents from six states was collected during i...
Ecological indicators must be shown to be responsive to stress. For large-scale observational studies the best way to demonstrate responsiveness is by evaluating indicators along a gradient of stress, but such gradients are often unknown for a population of sites prior to site se...
Do Social Workers Make Better Child Welfare Workers than Non-Social Workers?
ERIC Educational Resources Information Center
Perry, Robin E.
2006-01-01
Objective: To empirically examine whether the educational background of child welfare workers in Florida impacts on performance evaluations of their work. Method: A proportionate, stratified random sample of supervisor and peer evaluations of child protective investigators and child protective service workers is conducted. ANOVA procedures are…
Perceptions of Professionalism among Individuals in the Child Care Field
ERIC Educational Resources Information Center
Martin, Sue; Meyer, James; Jones, Robin Caudle; Nelson, Laverne; Ting, Ling
2010-01-01
Individuals working with young children, birth through age five, continue to strive for professional recognition. Factors that contribute to a person's feelings about being a child care professional were investigated. Stratified random sampling was used for data collection. Participants in the study responded to mailed questionnaires concerning a…
Assessing Principals' Quality Assurance Strategies in Osun State Secondary Schools, Nigeria
ERIC Educational Resources Information Center
Fasasi, Yunus Adebunmi; Oyeniran, Saheed
2014-01-01
This paper examined principals' quality assurance strategies in secondary schools in Osun State, Nigeria. The study adopted a descriptive survey research design. Stratified random sampling technique was used to select 10 male and 10 female principals, and 190 male and 190 female teachers. "Secondary School Principal Quality Assurance…
Factors Associated with the Fulfillment of Residential Preferences.
ERIC Educational Resources Information Center
Hwang, Sean-Shong; Albrecht, Don E.
A 1983 survey of Texas homebuyers reveals a high degree of mismatch between the preferred and actual residence of homebuyers. Such mismatch is examined using social/psychological, life-cycle, racial, socioeconomic, and occupational factors as possible explanations. Questionnaires mailed to a stratified random sample of 960 homebuyers across 12…
Curriculum Review Evaluation on Entrepreneurial Education in Cross River State Higher Institutions
ERIC Educational Resources Information Center
Ambekeh, Udida Lucy
2013-01-01
This study investigated curriculum organization and delivery towards functional entrepreneurial education transformation of students in Higher Institutions in Cross River State -- Nigeria. To guide the conduct of this study, two research questions and one hypothesis were formulated. Proportionate stratified sampling technique was used in the…
Interpersonal Features and Functions of Nonsuicidal Self-Injury
ERIC Educational Resources Information Center
Muehlenkamp, Jennifer; Brausch, Amy; Quigley, Katherine; Whitlock, Janis
2013-01-01
Etiological models of nonsuicidal self-injury (NSSI) suggest interpersonal features may be important to understand this behavior, but social functions and correlates have not been extensively studied. This study addresses existing limitations by examining interpersonal correlates and functions of NSSI within a stratified random sample of 1,243…
Lake Superior Phytoplankton Characterization from the 2006 Probability Based Survey
We conducted a late summer probability based survey of Lake Superior in 2006 which consisted of 52 sites stratified across 3 depth zones. As part of this effort, we collected composite phytoplankton samples from the epilimnion and the fluorescence maxima (Fmax) at 29 of the site...
Teachers' Characteristics: Understanding the Decision to Refer for Special Education Placement
ERIC Educational Resources Information Center
Hauck, Deborah Z.
2010-01-01
This mixed method study examined elementary teachers' characteristics (efficacy, tolerance, and demographics) and their influences on the decision to refer African American students to special education. A stratified purposeful sample of 115 elementary teachers for the quantitative segment and a subsample of 13 teachers for the qualitative portion…
Factors Associated with Successful Functioning in American Indian Youths
ERIC Educational Resources Information Center
Silmere, Hile; Stiffman, Arlene Rubin
2006-01-01
This study examines environmental and cultural factors related to successful functioning in a stratified random sample of 401 American Indian youths. The success index included seven indicators: good mental health, being alcohol and drug free, absence of serious misbehavior, clean police record, good grades, positive psychosocial functioning, and…
Positive Reading Attitudes of Low-Income Bilingual Latinos
ERIC Educational Resources Information Center
Bussert-webb, Kathy M.; Zhang, Zhidong
2018-01-01
Many assume low-income, emergent bilingual Latinos have poor reading attitudes. To investigate this issue, we surveyed 1,503 Texas public high school students through stratified cluster sampling to determine their reading attitudes. Most represented Latinos and mixed-race Latinos/Whites who heard Spanish at home and whose mother tongue was…
Job Insecurity and Employee Well-Being.
ERIC Educational Resources Information Center
Vance, Robert J.; Kuhnert, Karl W.
This study explored the consequences of perceived job security and insecurity on the psychological and physical health of employees. Data were gathered from employees of a large midwestern manufacturing organization that produced products for material removal applications. Surveys were sent through company mail to a stratified random sample of 442…
Development of the Brief Romantic Relationship Interaction Coding Scheme (BRRICS)
Humbad, Mikhila N.; Donnellan, M. Brent; Klump, Kelly L.; Burt, S. Alexandra
2012-01-01
Although observational studies of romantic relationships are common, many existing coding schemes require considerable amounts of time and resources to implement. The current study presents a new coding scheme, the Brief Romantic Relationship Interaction Coding Scheme (BRRICS), designed to assess various aspects of romantic relationships both quickly and efficiently. The BRRICS consists of four individual coding dimensions assessing positive and negative affect in each member of the dyad, as well as four codes assessing specific components of the dyadic interaction (i.e., positive reciprocity, demand-withdraw pattern, negative reciprocity, and overall satisfaction). Concurrent associations with measures of marital adjustment and conflict were evaluated in a sample of 118 married couples participating in the Michigan State University Twin Registry. Couples were asked to discuss common conflicts in their marriage while being videotaped. Undergraduate coders used the BRRICS to rate these interactions. The BRRICS scales were correlated in expected directions with self-reports of marital adjustment, as well as children's perception of the severity and frequency of marital conflict. Based on these results, the BRRICS may be an efficient tool for researchers with large samples of observational data who are interested in coding global aspects of the relationship but do not have the resources to use labor-intensive schemes. PMID:21875192
Fish tracking by combining motion based segmentation and particle filtering
NASA Astrophysics Data System (ADS)
Bichot, E.; Mascarilla, L.; Courtellemont, P.
2006-01-01
In this paper, we suggest a new importance sampling scheme to improve a particle-filtering-based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate hypotheses from particle filtering to blobs whose motion is similar to the target's. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy to update the target model improve the performance of particle filtering in complex occlusion situations compared with a simple bootstrap approach, as shown by our experiments on real fish-tank sequences.
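A minimal sketch (not the authors' implementation) of the underlying idea: a particle filter whose proposal mixes the motion prior with draws placed around a motion-segmented blob, with importance weights corrected for the proposal component actually used. The Gaussian observation model, noise levels, and the blob variable are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def gauss2(x, mean, std):
        """Isotropic 2-D Gaussian density."""
        d2 = np.sum((x - mean) ** 2, axis=-1)
        return np.exp(-0.5 * d2 / std**2) / (2 * np.pi * std**2)

    def pf_step(particles, obs, blob, mix=0.3, motion_std=3.0, blob_std=2.0, obs_std=2.0):
        """One update: propose from the motion prior, but redirect a fraction
        `mix` of particles toward the segmented blob; weight and resample."""
        n = len(particles)
        use_blob = rng.random(n) < mix
        prop = particles + rng.normal(0, motion_std, particles.shape)
        prop[use_blob] = blob + rng.normal(0, blob_std, (use_blob.sum(), 2))
        lik = gauss2(prop, obs, obs_std)                 # observation likelihood
        trans = gauss2(prop, particles, motion_std)      # motion-prior density
        q = np.where(use_blob, gauss2(prop, blob, blob_std), trans)
        w = lik * trans / q                              # importance correction
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)                 # multinomial resampling
        return prop[idx]

    # Toy run: target drifts diagonally; the "blob" is a noisy segmentation of it.
    particles = rng.normal(0.0, 10.0, (500, 2))
    truth = np.zeros(2)
    for t in range(20):
        truth += [1.0, 0.5]
        obs = truth + rng.normal(0, 2.0, 2)
        blob = truth + rng.normal(0, 1.0, 2)
        particles = pf_step(particles, obs, blob)
    print("estimate:", particles.mean(axis=0).round(2), "truth:", truth)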
A study of malware detection on smart mobile devices
NASA Astrophysics Data System (ADS)
Yu, Wei; Zhang, Hanlin; Xu, Guobin
2013-05-01
The growing use of smart mobile devices for everyday applications has stimulated the spread of mobile malware, especially on popular mobile platforms. As a consequence, malware detection becomes ever more critical to sustaining the mobile market and providing a better user experience. In this paper, we review existing malware and detection schemes. Using real-world malware samples with known signatures, we evaluate four popular commercial anti-virus tools, and our data show that these tools can achieve high detection accuracy. To deal with new malware with unknown signatures, we study anomaly-based detection using a decision tree algorithm. We evaluate the effectiveness of our detection scheme using malware and legitimate software samples. Our data show that the detection scheme using a decision tree can achieve a detection rate of up to 90% and a false positive rate as low as 10%.
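A hedged sketch of the anomaly-detection step described above, using scikit-learn's decision tree; the synthetic feature matrix stands in for whatever behavioural features (e.g. permission counts, API-call frequencies) the study extracted, which the abstract does not specify. Detection rate and false-positive rate are read off the confusion matrix.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import confusion_matrix

    # Hypothetical app features: label 1 = malware, 0 = benign.
    rng = np.random.default_rng(0)
    X_benign = rng.normal(0.0, 1.0, (500, 10))
    X_malware = rng.normal(1.0, 1.0, (200, 10))     # shifted feature distribution
    X = np.vstack([X_benign, X_malware])
    y = np.r_[np.zeros(500), np.ones(200)]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)

    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    print("detection rate:", tp / (tp + fn))
    print("false positive rate:", fp / (fp + tn))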
Sensing cocaine in saliva with infrared laser spectroscopy
NASA Astrophysics Data System (ADS)
Hans, Kerstin M.-C.; Müller, Matthias; Gianella, Michele; Wägli, Ph.; Sigrist, Markus W.
2013-02-01
Increasing numbers of accidents caused by drivers under the influence of drugs have raised worldwide interest in drug testing. We developed a one-step extraction technique for cocaine in saliva and analyzed reference samples with laser spectroscopy employing two different schemes. The first is based on attenuated total reflection (ATR), which is applied to dried samples. The second scheme uses transmission measurements for the analysis of liquid samples. ATR spectroscopy achieved a limit of detection (LOD) of 3 μg/ml. The LOD for the transmission approach in liquid samples is < 10 μg/ml. These LODs are realistic, as such concentration ranges are encountered in the saliva of drug users after the administration of a single dose of cocaine. An improved stabilization of the set-up should lower the limit of detection significantly.
Introduction to the Apollo collections: Part 2: Lunar breccias
NASA Technical Reports Server (NTRS)
Mcgee, P. E.; Simonds, C. H.; Warner, J. L.; Phinney, W. C.
1979-01-01
Basic petrographic, chemical, and age data for a representative suite of lunar breccias are presented for students and potential lunar sample investigators. Emphasis is on sample description and data presentation. Samples are listed, together with a classification scheme based on matrix texture and mineralogy and the nature and abundance of glass present both in the matrix and as clasts. An outline of the classification scheme describes the characteristic features of each of the breccia groups. The cratering process, which describes the sequence of events immediately following an impact event, is discussed, especially the thermal and material transport processes affecting the two major components of lunar breccias (clastic debris and fused material).
Salem, Shady; Chang, Sam S; Clark, Peter E; Davis, Rodney; Herrell, S Duke; Kordan, Yakup; Wills, Marcia L; Shappell, Scott B; Baumgartner, Roxelyn; Phillips, Sharon; Smith, Joseph A; Cookson, Michael S; Barocas, Daniel A
2010-10-01
Whole mount processing is more resource intensive than routine systematic sampling of radical retropubic prostatectomy specimens. We compared whole mount and systematic sampling for detecting pathological outcomes, and compared the prognostic value of pathological findings across pathological methods. We included men (608 whole mount and 525 systematic sampling samples) with no prior treatment who underwent radical retropubic prostatectomy at Vanderbilt University Medical Center between January 2000 and June 2008. We used univariate and multivariate analysis to compare the pathological outcome detection rate between pathological methods. Kaplan-Meier curves and the log rank test were used to compare the prognostic value of pathological findings across pathological methods. There were no significant differences between the whole mount and the systematic sampling groups in detecting extraprostatic extension (25% vs 30%), positive surgical margins (31% vs 31%), pathological Gleason score less than 7 (49% vs 43%), 7 (39% vs 43%) or greater than 7 (12% vs 13%), seminal vesicle invasion (8% vs 10%) or lymph node involvement (3% vs 5%). Tumor volume was higher in the systematic sampling group and whole mount detected more multiple surgical margins (each p <0.01). There were no significant differences in the likelihood of biochemical recurrence between the pathological methods when patients were stratified by pathological outcome. Except for estimated tumor volume and multiple margins whole mount and systematic sampling yield similar pathological information. Each method stratifies patients into comparable risk groups for biochemical recurrence. Thus, while whole mount is more resource intensive, it does not appear to result in improved detection of clinically important pathological outcomes or prognostication. Copyright © 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Accounting for Incomplete Species Detection in Fish Community Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach type. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed, and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g., stratifying based on patch size) and determining the effort required (e.g., number of sites versus occasions).
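For readers unfamiliar with occupancy models, a minimal sketch of the single-species likelihood that underlies them (MacKenzie-style, constant occupancy psi and detection p, no covariates), fitted by maximum likelihood to simulated detection histories; the study's site-, reach-, and method-covariate models are more elaborate, and the parameter values below are illustrative.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    # y[i, k] = 1 if the species was detected at site i on occasion k, else 0.
    rng = np.random.default_rng(1)
    n_sites, n_occ, psi_true, p_true = 200, 4, 0.6, 0.4
    z = rng.random(n_sites) < psi_true                     # true (latent) occupancy
    y = (rng.random((n_sites, n_occ)) < p_true) * z[:, None]

    def neg_log_lik(theta):
        psi, p = expit(theta)                              # keep parameters in (0, 1)
        det = y.sum(axis=1)
        l_occ = psi * p**det * (1 - p)**(n_occ - det)      # occupied-site history
        l_unocc = (1 - psi) * (det == 0)                   # never-detected alternative
        return -np.sum(np.log(l_occ + l_unocc))

    fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
    psi_hat, p_hat = expit(fit.x)
    print(f"occupancy psi = {psi_hat:.2f}, detection p = {p_hat:.2f}")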
A multitemporal (1979-2009) land-use/land-cover dataset of the binational Santa Cruz Watershed
2011-01-01
Trends derived from multitemporal land-cover data can be used to make informed land management decisions and to help managers model future change scenarios. We developed a multitemporal land-use/land-cover dataset for the binational Santa Cruz watershed of southern Arizona, United States, and northern Sonora, Mexico by creating a series of land-cover maps at decadal intervals (1979, 1989, 1999, and 2009) using Landsat Multispectral Scanner and Thematic Mapper data and a classification and regression tree classifier. The classification model exploited phenological changes of different land-cover spectral signatures through the use of biseasonal imagery collected during the (dry) early summer and (wet) late summer following rains from the North American monsoon. Landsat images were corrected to remove atmospheric influences, and the data were converted from raw digital numbers to surface reflectance values. The 14-class land-cover classification scheme is based on the 2001 National Land Cover Database with a focus on "Developed" land-use classes and riverine "Forest" and "Wetlands" cover classes required for specific watershed models. The classification procedure included the creation of several image-derived and topographic variables, including digital elevation model derivatives, image variance, and multitemporal Kauth-Thomas transformations. The accuracy of the land-cover maps was assessed using a random-stratified sampling design, reference aerial photography, and digital imagery. This showed high accuracy results, with kappa values (the statistical measure of agreement between map and reference data) ranging from 0.80 to 0.85.
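For reference, Cohen's kappa as used in such accuracy assessments can be computed directly from the map-versus-reference confusion matrix built from the stratified random sample; the 3-class matrix below is hypothetical, not the study's data.

    import numpy as np

    def kappa(confusion):
        """Cohen's kappa from a confusion matrix (rows = mapped class,
        columns = reference class)."""
        c = np.asarray(confusion, dtype=float)
        n = c.sum()
        observed = np.trace(c) / n
        expected = (c.sum(axis=0) * c.sum(axis=1)).sum() / n**2
        return (observed - expected) / (1 - expected)

    # Hypothetical confusion matrix from a stratified random reference sample.
    cm = [[48, 2, 1],
          [3, 45, 4],
          [1, 5, 41]]
    print(f"kappa = {kappa(cm):.2f}")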
NASA Technical Reports Server (NTRS)
Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.
2007-01-01
Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
Broberg, Craig S; Mitchell, Julie; Rehel, Silven; Grant, Andrew; Gianola, Ann; Beninato, Peter; Winter, Christiane; Verstappen, Amy; Valente, Anne Marie; Weiss, Joseph; Zaidi, Ali; Earing, Michael G; Cook, Stephen; Daniels, Curt; Webb, Gary; Khairy, Paul; Marelli, Ariane; Gurvitz, Michelle Z; Sahn, David J
2015-10-01
The adoption of electronic health records (EHR) has created an opportunity for multicenter data collection, yet the feasibility and reliability of this methodology is unknown. The aim of this study was to integrate EHR data into a homogeneous central repository specifically addressing the field of adult congenital heart disease (ACHD). Target data variables were proposed and prioritized by consensus of investigators at five target ACHD programs. Database analysts determined which variables were available within their institutions' EHR and stratified their accessibility, and results were compared between centers. Data for patients seen in a single calendar year were extracted to a uniform database and subsequently consolidated. From 415 proposed target variables, only 28 were available in discrete formats at all centers. For variables of highest priority, 16/28 (57%) were available at all four sites, but only 11% for those of high priority. Integration was neither simple nor straightforward. Coding schemes in use for congenital heart diagnoses varied and would require additional user input for accurate mapping. There was considerable variability in procedure reporting formats and medication schemes, often with center-specific modifications. Despite the challenges, the final acquisition included limited data on 2161 patients, and allowed for population analysis of race/ethnicity, defect complexity, and body morphometrics. Large-scale multicenter automated data acquisition from EHRs is feasible yet challenging. Obstacles stem from variability in data formats, coding schemes, and adoption of non-standard lists within each EHR. The success of large-scale multicenter ACHD research will require institution-specific data integration efforts. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Understanding the types of fraud in claims to South African medical schemes.
Legotlo, T G; Mutezo, A
2018-03-28
Medical schemes play a significant role in funding private healthcare in South Africa (SA). However, the sector is negatively affected by the high rate of fraudulent claims. To identify the types of fraudulent activities committed in SA medical scheme claims. A cross-sectional qualitative study was conducted, adopting a case study strategy. A sample of 15 employees was purposively selected from a single medical scheme administration company in SA. Semi-structured interviews were conducted to collect data from study participants. A thematic analysis of the data was done using ATLAS.ti software (ATLAS.ti Scientific Software Development, Germany). The study population comprised the 17 companies that administer medical schemes in SA. Data were collected from 15 study participants, who were selected from the medical scheme administrator chosen as a case study. The study found that medical schemes were defrauded in numerous ways. The perpetrators of this type of fraud include healthcare service providers, medical scheme members, employees, brokers and syndicates. Medical schemes are mostly defrauded by the submission of false claims by service providers and syndicates. Fraud committed by medical scheme members encompasses the sharing of medical scheme benefits with non-members (card farming) and non-disclosure of pre-existing conditions at the application stage. The study concluded that perpetrators of fraud have found several ways of defrauding SA medical schemes regarding claims. Understanding and identifying the types of fraud events facing medical schemes is the initial step towards establishing methods to mitigate this risk. Future studies should examine strategies to manage fraudulent medical scheme claims.
40 CFR 761.130 - Sampling requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... sampling scheme and the guidance document are available on EPA's PCB Web site at http://www.epa.gov/pcb, or... § 761.125(c) (2) through (4). Using its best engineering judgment, EPA may sample a statistically valid random or grid sampling technique, or both. When using engineering judgment or random “grab” samples, EPA...
40 CFR 761.130 - Sampling requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... sampling scheme and the guidance document are available on EPA's PCB Web site at http://www.epa.gov/pcb, or... § 761.125(c) (2) through (4). Using its best engineering judgment, EPA may sample a statistically valid random or grid sampling technique, or both. When using engineering judgment or random “grab” samples, EPA...
Estimation of Variance in the Case of Complex Samples.
ERIC Educational Resources Information Center
Groenewald, A. C.; Stoker, D. J.
In a complex sampling scheme it is desirable to select the primary sampling units (PSUs) without replacement to prevent duplications in the sample. Since the estimation of the sampling variances is more complicated when the PSUs are selected without replacement, L. Kish (1965) recommends that the variance be calculated using the formulas…
Active animal health surveillance in European Union Member States: gaps and opportunities.
Bisdorff, B; Schauer, B; Taylor, N; Rodríguez-Prieto, V; Comin, A; Brouwer, A; Dórea, F; Drewe, J; Hoinville, L; Lindberg, A; Martinez Avilés, M; Martínez-López, B; Peyre, M; Pinto Ferreira, J; Rushton, J; VAN Schaik, G; Stärk, K D C; Staubach, C; Vicente-Rubiano, M; Witteveen, G; Pfeiffer, D; Häsler, B
2017-03-01
Animal health surveillance enables the detection and control of animal diseases including zoonoses. Under the EU-FP7 project RISKSUR, a survey was conducted in 11 EU Member States and Switzerland to describe active surveillance components in 2011 managed by the public or private sector and identify gaps and opportunities. Information was collected about hazard, target population, geographical focus, legal obligation, management, surveillance design, risk-based sampling, and multi-hazard surveillance. Two countries were excluded due to incompleteness of data. Most of the 664 components targeted cattle (26·7%), pigs (17·5%) or poultry (16·0%). The most common surveillance objectives were demonstrating freedom from disease (43·8%) and case detection (26·8%). Over half of components applied risk-based sampling (57·1%), but mainly focused on a single population stratum (targeted risk-based) rather than differentiating between risk levels of different strata (stratified risk-based). About a third of components were multi-hazard (37·3%). Both risk-based sampling and multi-hazard surveillance were used more frequently in privately funded components. The study identified several gaps (e.g. lack of systematic documentation, inconsistent application of terminology) and opportunities (e.g. stratified risk-based sampling). The greater flexibility provided by the new EU Animal Health Law means that systematic evaluation of surveillance alternatives will be required to optimize cost-effectiveness.
Wu, Qunhong; Hao, Yanhua; Ning, Ning; Xu, Ling; Liu, Chaojie; Li, Ye; Kang, Zheng; Liu, Guoxiang
2014-01-01
Background: People with chronic non-communicable diseases (NCD) are particularly vulnerable to socioeconomic inequality due to their long-term expensive health needs. This study aimed to assess socioeconomic-related inequality in health service utilization among NCD patients in China and to analyze factors associated with this disparity. Methods: Data were taken from the 2008 Chinese National Health Survey, in which a multiple stage stratified random sampling method was employed to survey 56,456 households. We analyzed the distribution of actual use, need-expected use, and need-standardized usage of outpatient services (over a two-week period) and inpatient services (over one-year) across different income groups in 27,233 adult respondents who reported as having a NCD. We used a concentration index to measure inequality in the distribution of health services, which was expressed as HI (Horizontal Inequity Index) for need-standardized use of services. A non-linear probit regression model was employed to detect inequality across socio-economic groups. Results: Pro-rich inequity in health services among NCD patients was more substantial than the average population. A higher degree of pro-rich inequity (HI = 0.253) was found in inpatient services compared to outpatient services (HI = 0.089). Despite a greater need for health services amongst those of lower socio-economic status, their actual use is much less than their more affluent counterparts. Health service underuse by the poor and overuse by the affluent are evident. Household income disparity was the greatest inequality factor in NCD service use for both outpatients (71.3%) and inpatients (108%), more so than health insurance policies. Some medical insurance schemes, such as the MIUE, actually made a pro-rich contribution to health service inequality (16.1% for outpatient and 12.1% for inpatient). Conclusions: Inequality in health services amongst NCD patients in China remains largely determined by patient financial capability. The current insurance schemes are insufficient to address this inequity. A comprehensive social policy that encompasses a more progressive taxation package and redistribution of social capital as well as pro-poor welfare is needed. PMID:24960168
NASA Astrophysics Data System (ADS)
Wang, J.; Sulla-menashe, D. J.; Woodcock, C. E.; Sonnentag, O.; Friedl, M. A.
2017-12-01
Rapid climate change in arctic and boreal ecosystems is driving changes to land cover composition, including woody expansion in the arctic tundra, successional shifts following boreal fires, and thaw-induced wetland expansion and forest collapse along the southern limit of permafrost. The impacts of these land cover transformations on the physical climate and the carbon cycle are increasingly well-documented from field and model studies, but there have been few attempts to empirically estimate rates of land cover change at decadal time scale and continental spatial scale. Previous studies have used too coarse spatial resolution or have been too limited in temporal range to enable broad multi-decadal assessment of land cover change. As part of NASA's Arctic Boreal Vulnerability Experiment (ABoVE), we are using dense time series of Landsat remote sensing data to map disturbances and classify land cover types across the ABoVE extended domain (spanning western Canada and Alaska) over the last three decades (1982-2014) at 30 m resolution. We utilize regionally-complete and repeated acquisition high-resolution (<2 m) DigitalGlobe imagery to generate training data from across the region that follows a nested, hierarchical classification scheme encompassing plant functional type and cover density, understory type, wetland status, and land use. Additionally, we crosswalk plot-level field data into our scheme for additional high quality training sites. We use the Continuous Change Detection and Classification algorithm to estimate land cover change dates and temporal-spectral features in the Landsat data. These features are used to train random forest classification models and map land cover and analyze land cover change processes, focusing primarily on tundra "shrubification", post-fire succession, and boreal wetland expansion. We will analyze the high resolution data based on stratified random sampling of our change maps to validate and assess the accuracy of our model predictions. In this paper, we present initial results from this effort, including sub-regional analyses focused on several key areas, such as the Taiga Plains and the Southern Arctic ecozones, to calibrate our random forest models and assess results.
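A schematic sketch of the classification-and-validation step described above: a random forest trained on temporal-spectral features, with a stratified split standing in for the stratified random validation sample drawn from the change maps. The feature count, class list, sample sizes, and synthetic class signatures are placeholders, not ABoVE data or the CCDC feature set.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Hypothetical Landsat-derived temporal-spectral features with land-cover labels
    # taken from high-resolution reference imagery.
    rng = np.random.default_rng(0)
    n, n_features = 3000, 12
    classes = ["tundra shrub", "boreal forest", "wetland", "burned"]
    X = rng.normal(size=(n, n_features))
    y = rng.integers(0, len(classes), n)
    X += np.eye(len(classes), n_features)[y] * 2.0   # give each class a separable signature

    # Stratified split mirrors a per-class stratified random validation sample.
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25,
                                                stratify=y, random_state=0)
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print(classification_report(y_val, rf.predict(X_val), target_names=classes))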
Xie, Xin; Wu, Qunhong; Hao, Yanhua; Yin, Hui; Fu, Wenqi; Ning, Ning; Xu, Ling; Liu, Chaojie; Li, Ye; Kang, Zheng; He, Changzhi; Liu, Guoxiang
2014-01-01
People with chronic non-communicable diseases (NCD) are particularly vulnerable to socioeconomic inequality due to their long-term expensive health needs. This study aimed to assess socioeconomic-related inequality in health service utilization among NCD patients in China and to analyze factors associated with this disparity. Data were taken from the 2008 Chinese National Health Survey, in which a multiple stage stratified random sampling method was employed to survey 56,456 households. We analyzed the distribution of actual use, need-expected use, and need-standardized usage of outpatient services (over a two-week period) and inpatient services (over one-year) across different income groups in 27,233 adult respondents who reported as having a NCD. We used a concentration index to measure inequality in the distribution of health services, which was expressed as HI (Horizontal Inequity Index) for need-standardized use of services. A non-linear probit regression model was employed to detect inequality across socio-economic groups. Pro-rich inequity in health services among NCD patients was more substantial than the average population. A higher degree of pro-rich inequity (HI = 0.253) was found in inpatient services compared to outpatient services (HI = 0.089). Despite a greater need for health services amongst those of lower socio-economic status, their actual use is much less than their more affluent counterparts. Health service underuse by the poor and overuse by the affluent are evident. Household income disparity was the greatest inequality factor in NCD service use for both outpatients (71.3%) and inpatients (108%), more so than health insurance policies. Some medical insurance schemes, such as the MIUE, actually made a pro-rich contribution to health service inequality (16.1% for outpatient and 12.1% for inpatient). Inequality in health services amongst NCD patients in China remains largely determined by patient financial capability. The current insurance schemes are insufficient to address this inequity. A comprehensive social policy that encompasses a more progressive taxation package and redistribution of social capital as well as pro-poor welfare is needed.
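For readers unfamiliar with the metrics used in the two studies above, a small illustration (synthetic data, not the 2008 survey) of the concentration index computed via the covariance formula, and of one common formulation of the horizontal inequity index as the gap between the concentration indices of actual and need-expected use; the income and utilization distributions are invented for the example.

    import numpy as np

    def concentration_index(use, income):
        """Concentration index: C = 2 * cov(use, fractional income rank) / mean(use)."""
        order = np.argsort(income)
        use = np.asarray(use, float)[order]
        n = len(use)
        rank = (np.arange(1, n + 1) - 0.5) / n          # fractional rank, poorest first
        return 2 * np.cov(use, rank, bias=True)[0, 1] / use.mean()

    rng = np.random.default_rng(0)
    income = rng.lognormal(mean=9, sigma=0.8, size=5000)
    # Poorer respondents need more care; richer respondents actually use more.
    need = rng.poisson(2.5 - np.log(income / income.mean()).clip(-1, 1), 5000)
    actual = rng.poisson(1.0 + 1.5e-5 * income, 5000)

    ci_actual = concentration_index(actual, income)
    ci_need = concentration_index(need, income)
    print("CI of actual use:", round(ci_actual, 3))
    print("HI (actual minus need-expected):", round(ci_actual - ci_need, 3))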
NASA Astrophysics Data System (ADS)
Prasetyo, E.; Ekowati, T.; Roessali, W.; Gayatri, S.
2018-02-01
The aims of the study were to (i) identify the beef cattle fattening credit schemes, (ii) calculate and analyze beef cattle farmers' income, and (iii) analyze the factors through which the credit scheme influences farmers' income. The research was carried out in five regencies in Central Java Province. The beef cattle fattening farm was taken as the elementary unit. A survey method was used, with two-stage cluster purposive sampling to determine the sample. Data were analyzed with quantitative descriptive and inferential statistics, namely income analysis and multiple linear regression models. The results showed that farmers used their own capital to run the farm, with an average amount of IDR 10,769,871. Kredit Ketahanan Pangan dan Energi was the credit scheme most commonly accessed by farmers. The average credit was IDR 23,312,200 per farmer, with an interest rate of 6.46%, a repayment period of 24.60 months, and an estimated average collateral of IDR 35,800,00. The average farmer's income was IDR 4,361,611.60 per 2.96 head of beef cattle per fattening period. If labour cost is not counted as a production cost, the farmer's income was IDR 7,608,630.41, an increase of 74.44%. The credit-scheme factors with a partially significant influence on farmers' income were the amount of own capital used and the value of the credit collateral. Meanwhile, the name of the credit scheme, the financing institution acting as creditor, the amount of credit, the interest rate, and the repayment period did not significantly influence farmers' income.
X-ray simulations method for the large field of view
NASA Astrophysics Data System (ADS)
Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.
2018-03-01
In the standard approach, X-ray simulation is limited by the spatial sampling step required to calculate Fresnel-type convolution integrals. Explicitly, the sampling step is determined by the size of the last Fresnel zone in the beam aperture. In other words, the spatial sampling is dictated by the precision of the convolution calculation and is not connected with the spatial resolution of the optical scheme. In the developed approach, the convolution in real space is replaced by computing the shear of the ambiguity function in phase space. The spatial sampling is then determined by the spatial resolution of the optical scheme. The sampling step can differ in various directions because of the source anisotropy. The approach was used to simulate the original images in X-ray Talbot interferometry and showed that the simulation can be applied to optimize postprocessing methods.
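A back-of-the-envelope illustration of the standard-approach sampling burden, assuming the usual estimate that a half-aperture a at distance z contains roughly a²/(λz) Fresnel zones and that the outermost zone has width about λz/(2a); the wavelength, distance, aperture, and points-per-zone values are illustrative, not taken from the paper.

    import numpy as np

    def fresnel_sampling_step(wavelength, distance, half_aperture, points_per_zone=2):
        """Estimate the sampling step needed to resolve the outermost Fresnel zone."""
        n_zones = half_aperture**2 / (wavelength * distance)
        last_zone_width = wavelength * distance / (2 * half_aperture)
        return n_zones, last_zone_width / points_per_zone

    # ~12.4 keV X-rays (lambda ~ 1 Angstrom), 1 m propagation, 0.5 mm half-aperture.
    n_zones, step = fresnel_sampling_step(1e-10, 1.0, 0.5e-3)
    print(f"Fresnel zones in aperture: {n_zones:.0f}, sampling step: {step*1e9:.0f} nm")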
NASA Astrophysics Data System (ADS)
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
Efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete scheme and an appropriate solving method. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage savings for the system of linear equations and flexibility for arbitrary directional sampling intervals. However, using an LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computations with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, in which the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that unequal directional sampling intervals weaken the speed advantage of this multigrid-preconditioned iterative solver or, even worse, can reduce its accuracy in some cases, which implies the need for reasonable control of the directional sampling intervals in the discretization.
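A minimal sketch of a preconditioned BiCGSTAB solve with SciPy, showing only the solver plumbing: an incomplete-LU factorization of a 2-D Laplacian stands in for the paper's average-derivative operator and purpose-built multigrid preconditioner, so neither the operator nor the performance figures reflect the study. Grid size, spacing, and ILU parameters are illustrative.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

    # 2-D discrete Laplacian on an n x n grid as a stand-in system matrix.
    n, h = 150, 10.0
    L1 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    I1 = sp.identity(n)
    A = (-(sp.kron(I1, L1) + sp.kron(L1, I1))).tocsc()

    b = np.zeros(n * n)
    b[n * n // 2 + n // 2] = 1.0                     # point source

    ilu = spilu(A, drop_tol=1e-4, fill_factor=20)    # stand-in preconditioner
    M = LinearOperator(A.shape, ilu.solve)

    x, info = bicgstab(A, b, M=M, maxiter=500)
    print("converged" if info == 0 else f"stopped, info={info}",
          "| residual norm:", np.linalg.norm(A @ x - b))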
Bellon, Ellen; Ligtenberg, Marjolijn J L; Tejpar, Sabine; Cox, Karen; de Hertogh, Gert; de Stricker, Karin; Edsjö, Anders; Gorgoulis, Vassilis; Höfler, Gerald; Jung, Andreas; Kotsinas, Athanassios; Laurent-Puig, Pierre; López-Ríos, Fernando; Hansen, Tine Plato; Rouleau, Etienne; Vandenberghe, Peter; van Krieken, Johan J M; Dequeker, Elisabeth
2011-01-01
The use of epidermal growth factor receptor-targeting antibodies in metastatic colorectal cancer has been restricted to patients with wild-type KRAS tumors by the European Medicines Agency since 2008, based on data showing a lack of efficacy and potential harm in patients with mutant KRAS tumors. In an effort to ensure optimal, uniform, and reliable community-based KRAS testing throughout Europe, a KRAS external quality assessment (EQA) scheme was set up. The first large assessment round included 59 laboratories from eight different European countries. For each country, one regional scheme organizer prepared and distributed the samples for the participants of their own country. The samples included unstained sections of 10 invasive colorectal carcinomas with known KRAS mutation status. The samples were centrally validated by one of two reference laboratories. The laboratories were allowed to use their own preferred method for histological evaluation, DNA isolation, and mutation analysis. In this study, we evaluate the setup of the KRAS scheme and assess the advantages and disadvantages of the regional scheme organization on the basis of the genotyping results, the tumor-percentage assessments, and the written reports. We conclude that only 70% of laboratories correctly identified the KRAS mutational status in all samples. Both the false-positive and false-negative results observed negatively affect patient care. Reports of the KRAS test results often lacked essential information. We aim to further expand this program to more laboratories to provide a robust estimate of the quality of KRAS testing in Europe, and provide the basis for remedial measures and harmonization.
Atinga, Roger A; Abiiro, Gilbert Abotisem; Kuganab-Lem, Robert Bella
2015-03-01
To identify the factors influencing dropout from Ghana's health insurance scheme among populations living in slum communities. Cross-sectional data were collected from residents of 22 slums in the Accra Metropolitan Assembly. Cluster and systematic random sampling techniques were used to select and interview 600 individuals who had dropped out from the scheme 6 months prior to the study. Descriptive statistics and multivariate logistic regression models were computed to account for sample characteristics and reasons associated with the decision to drop out. The proportion of dropouts in the sample increased from 6.8% in 2008 to 34.8% in 2012. Non-affordability of the premium was the predominant reason, followed by rare illness episodes, the limited benefits of the scheme, and poor service quality. Low-income earners and those with low education were significantly more likely to report premium non-affordability. Rare illness was a common reason among younger respondents, informal sector workers, and respondents with higher education. All subgroups of age, education, occupation, and income reported the nominal benefits of the scheme as a reason for dropout. Interventions targeted at removing bottlenecks to health insurance enrolment are salient to maximising the size of the insurance pool. Strengthening service quality and extending the premium exemption to cover low-income families in slum communities is a valuable strategy to achieve universal health coverage. © 2014 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, A. V.; Gupta, A.; Althammer, M.
We investigate the switching characteristics of BaTiO3-based ferroelectric tunnel junctions patterned in a capacitive geometry with circular Ru top electrodes with diameters ranging from ∼430 to 2300 nm. Two different patterning schemes, viz., lift-off and ion-milling, have been employed to examine the variations in the ferroelectric polarization, switching, and tunnel electroresistance resulting from differences in the patterning processes. The values of the polarization switching field are measured and compared for junctions of different diameters in samples fabricated using both patterning schemes. We do not find any specific dependence of the polarization switching bias on junction size in either sample stack. The junctions in the ion-milled sample show up to three orders of magnitude of resistance change upon polarization switching, and the polarization retention is found to improve with increasing junction diameter. However, similar switching is absent in the lift-off sample, highlighting the effect of the patterning scheme on polarization retention.
Multilevel Mixture Kalman Filter
NASA Astrophysics Data System (ADS)
Guo, Dong; Wang, Xiaodong; Chen, Rong
2004-12-01
The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with sampling from the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in a lower-level sampling space, until reaching the desired sampling space. Such a multilevel sampling scheme can be used in conjunction with delayed estimation methods, such as the delayed-sample method, resulting in the delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.
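An illustrative sketch of the multilevel idea for a 16-QAM indicator space (draw the quadrant first, then the symbol within the quadrant, each proportional to the likelihood), with the Kalman marginalization of the linear/Gaussian states omitted for brevity; the flat channel value, noise level, and uniform symbol prior are placeholder assumptions, not the paper's simulation setup.

    import numpy as np

    rng = np.random.default_rng(0)

    # 16-QAM constellation organised hierarchically: 4 quadrants x 4 symbols each.
    levels = np.array([-3, -1, 1, 3])
    symbols = np.array([x + 1j * y for x in levels for y in levels])
    quadrant = (symbols.real > 0).astype(int) * 2 + (symbols.imag > 0).astype(int)

    def sample_symbol_multilevel(y, channel, noise_var):
        """Draw one indicator (transmitted symbol) in two stages:
        first a quadrant, then a symbol within it, proportional to the likelihood."""
        lik = np.exp(-np.abs(y - channel * symbols) ** 2 / noise_var)
        # Level 1: quadrant probabilities = summed likelihood of member symbols.
        q_prob = np.array([lik[quadrant == q].sum() for q in range(4)])
        q = rng.choice(4, p=q_prob / q_prob.sum())
        # Level 2: symbol within the chosen quadrant.
        members = np.where(quadrant == q)[0]
        s = rng.choice(members, p=lik[members] / lik[members].sum())
        weight = lik.sum()   # SIS weight increment with a uniform symbol prior
        return symbols[s], weight

    # Toy usage: flat channel, one noisy observation of a random symbol.
    true_sym = symbols[rng.integers(16)]
    y = 0.9 * true_sym + rng.normal(0, 0.3) + 1j * rng.normal(0, 0.3)
    draw, w = sample_symbol_multilevel(y, channel=0.9, noise_var=2 * 0.3**2)
    print("sampled symbol:", draw, "| true symbol:", true_sym)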